Dataset schema (one record per row): metadata (dict), paper (dict), review (dict), citation_count (int64), normalized_citation_count (int64), cited_papers (list), citing_papers (list).
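For orientation, here is a minimal Python sketch of how a single record following this schema might be parsed and summarized. The field names come from the schema above; the trimmed inline sample mirrors one of the rows below, while the summarize helper is an illustrative assumption rather than part of any dataset tooling.

import json

# Minimal sketch: parse one record of this dataset and print a short summary.
# The trimmed sample mirrors the schema above (metadata, paper, review,
# citation_count, normalized_citation_count, cited_papers, citing_papers).
sample_record = json.loads("""
{
  "metadata": {"id": "cL-1wNeElG", "year": null, "venue": "EBCCSP 2015",
               "forum_link": "https://openreview.net/forum?id=cL-1wNeElG"},
  "paper": {"title": "Guaranteed H2 performance in distributed event-based state estimation",
            "authors": ["Michael Muehlebach", "Sebastian Trimpe"]},
  "review": {"decision": "Unknown", "reviews": []},
  "citation_count": 0,
  "normalized_citation_count": 0,
  "cited_papers": [],
  "citing_papers": []
}
""")

def summarize(record):
    # Pull the fields most useful for a quick catalogue view.
    paper = record.get("paper", {})
    meta = record.get("metadata", {})
    reviews = record.get("review", {}).get("reviews", [])
    return "{} | {} | citations: {} | reviews: {}".format(
        paper.get("title", "untitled"),
        meta.get("venue") or "no venue",
        record.get("citation_count", 0),
        len(reviews),
    )

print(summarize(sample_record))

If the full dump stores one JSON object per row, as the listing below suggests, the same helper could be applied line by line after json.loads on each row.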
{ "id": "uRYGezcGgDl", "year": null, "venue": "ECAL (2) 2009", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=uRYGezcGgDl", "arxiv_id": null, "doi": null }
{ "title": "HybrID: A Hybridization of Indirect and Direct Encodings for Evolutionary Computation", "authors": [ "Jeff Clune", "Benjamin E. Beckmann", "Robert T. Pennock", "Charles Ofria" ], "abstract": "Evolutionary algorithms typically use direct encodings, where each element of the phenotype is specified independently in the genotype. Because direct encodings have difficulty evolving modular and symmetric phenotypes, some researchers use indirect encodings, wherein one genomic element can influence multiple parts of a phenotype. We have previously shown that HyperNEAT, an indirect encoding, outperforms FT-NEAT, a direct-encoding control, on many problems, especially as the regularity of the problem increases. However, HyperNEAT is no panacea; it had difficulty accounting for irregularities in problems. In this paper, we propose a new algorithm, a Hybridized Indirect and Direct encoding (HybrID), which discovers the regularity of a problem with an indirect encoding and accounts for irregularities via a direct encoding. In three different problem domains, HybrID outperforms HyperNEAT in most situations, with performance improvements as large as 40%. Our work suggests that hybridizing indirect and direct encodings can be an effective way to improve the performance of evolutionary algorithms.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "cL-1wNeElG", "year": null, "venue": "EBCCSP 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7274873/7300638/07300690.pdf", "forum_link": "https://openreview.net/forum?id=cL-1wNeElG", "arxiv_id": null, "doi": null }
{ "title": "Guaranteed ℋ2 performance in distributed event-based state estimation", "authors": [ "Michael Muehlebach", "Sebastian Trimpe" ], "abstract": "Multiple agents sporadically exchange data over a broadcast network according to an event-based protocol to observe and control a dynamic process. The synthesis problem of each agent's state estimator and event generator, which decides whether information is broadcast or not, is addressed in this paper. In particular, a previously proposed LMI-synthesis procedure guaranteeing closed-loop stability is extended to incorporate an ℋ <sub xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">2</sub> performance measure. The improved estimation performance of the extended design is illustrated in simulations of an inverted pendulum, which is stabilized by two agents.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "TGcLcDJ15U4", "year": null, "venue": "EBCCSP 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9845455/9845499/09845647.pdf", "forum_link": "https://openreview.net/forum?id=TGcLcDJ15U4", "arxiv_id": null, "doi": null }
{ "title": "SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection for Autonomous Driving", "authors": [ "Sambit Mohapatra", "Thomas Mesquida", "Mona Hodaei", "Senthil Kumar Yogamani", "Heinrich Gotzig", "Patrick Maeder" ], "abstract": "Spiking Neural Networks are a recent and new neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency. They do so by using asynchronous spike-based data flow, event-based signal generation, processing, and modifying the neuron model to resemble biological neurons closely. While some initial works have shown significant initial evidence of applicability to common deep learning tasks, their applications in complex real-world tasks have been relatively low. In this work, we first illustrate the applicability of spiking neural networks to a complex deep learning task, namely LiDAR based 3D object detection for automated driving. Secondly, we make a step-by-step demonstration of simulating spiking behavior using a pre-trained Convolutional Neural Network. We closely model essential aspects of spiking neural networks in simulation and achieve equivalent run-time and accuracy on a GPU. We expect significant improvements in power efficiency when the model is implemented on neuromorphic hardware.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "WOpavVK56Hi", "year": null, "venue": "EBCCSP 2016", "pdf_link": "https://ieeexplore.ieee.org/iel7/7585873/7605073/07605266.pdf", "forum_link": "https://openreview.net/forum?id=WOpavVK56Hi", "arxiv_id": null, "doi": null }
{ "title": "Self-triggered controllers, resource sharing, and hard guarantees", "authors": [ "Amir Aminifar" ], "abstract": "Today, many control applications in embedded and cyber-physical systems are implemented on shared platforms, alongside other hard real-time or safety-critical applications. Having the resource shared among several applications, to provide hard guarantees, it is required to identify the amount of resource needed for each application. This is rather straightforward when the platform is shared among periodic control and periodic real-time applications. In the case of event-triggered and self-triggered controllers, however, the execution patterns and, in turn, the resource usage are not clear. Therefore, a major implementation challenge, when the platform is shared with self-triggered controllers, is to provide hard and efficient stability and schedulability guarantees for other applications. In this paper, we identify certain execution patterns for self-triggered controllers, using which we are able to provide hard and efficient stability guarantees for periodic control applications.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "FsOT_xnUHvV", "year": null, "venue": "EBCCSP 2016", "pdf_link": "https://ieeexplore.ieee.org/iel7/7585873/7605073/07605244.pdf", "forum_link": "https://openreview.net/forum?id=FsOT_xnUHvV", "arxiv_id": null, "doi": null }
{ "title": "ELiSeD - An event-based line segment detector", "authors": [ "Christian Brandli", "Jonas Strubel", "Susanne Keller", "Davide Scaramuzza", "Tobi Delbrück" ], "abstract": "Event-based temporal contrast vision sensors such as the Dynamic Vison Sensor (DVS) have advantages such as high dynamic range, low latency, and low power consumption. Instead of frames, these sensors produce a stream of events that encode discrete amounts of temporal contrast. Surfaces and objects with sufficient spatial contrast trigger events if they are moving relative to the sensor, which thus performs inherent edge detection. These sensors are well-suited for motion capture, but so far suitable event-based, low-level features that allow assigning events to spatial structures have been lacking. A general solution of the so-called event correspondence problem, i.e. inferring which events are caused by the motion of the same spatial feature, would allow applying these sensors in a multitude of tasks such as visual odometry or structure from motion. The proposed Event-based Line Segment Detector (ELiSeD) is a step towards solving this problem by parameterizing the event stream as a set of line segments. The event stream which is used to update these low-level features is continuous in time and has a high temporal resolution; this allows capturing even fast motions without the requirement to solve the conventional frame-to-frame motion correspondence problem. The ELiSeD feature detector and tracker runs in real-time on a laptop computer at image speeds of up to 1300 pix/s and can continuously track rotations of up to 720 deg/s. The algorithm is open-sourced in the jAER project.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "s32XStef8M", "year": null, "venue": "EBCCSP 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7274873/7300638/07300691.pdf", "forum_link": "https://openreview.net/forum?id=s32XStef8M", "arxiv_id": null, "doi": null }
{ "title": "On the choice of the event trigger in event-based estimation", "authors": [ "Sebastian Trimpe", "Marco C. Campi" ], "abstract": "In event-based state estimation, the event trigger decides whether or not a measurement is used for updating the state estimate. In a remote estimation scenario, this allows for trading off estimation performance for communication, and thus saving resources. In this paper, popular event triggers for estimation, such as send-on-delta (SoD), measurement-based triggering (MBT), variance-based triggering (VBT), and relevant sampling (RS), are compared for the scenario of a scalar linear process with Gaussian noise. First, the analysis of the information pattern underlying the triggering decision reveals a fundamental advantage of triggers employing the real-time measurement in their decision (such as MBT, RS) over those that do not (VBT). Second, numerical simulation studies support this finding and, moreover, provide a quantitative evaluation of the triggers in terms of their average estimation versus communication performance.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NK2h7gYcz1", "year": null, "venue": "EBCCSP 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9845455/9845499/09845595.pdf", "forum_link": "https://openreview.net/forum?id=NK2h7gYcz1", "arxiv_id": null, "doi": null }
{ "title": "Work in Progress: Neuromorphic Cytometry, High-throughput Event-based flow Flow-Imaging", "authors": [ "Ziyao Zhang", "Maria Sabrina Ma", "Jason Kamran Eshraghian", "Daniele Vigolo", "Ken-Tye Yong", "Omid Kavehei" ], "abstract": "Cell sorting and counting technology has been broadly adopted for medical diagnosis, cell-based therapy, and biological research. Microscopy operates with image capture that is subject to an extremely constrained field-of-view, and even slow-moving targets may undergo motion blur, ghosting, and other movement-induced artifacts, which will ultimately degrade performance in developing machine learning models to perform cell sorting, detection, and tracking. Frame-based sensors are especially susceptible to these issues, and it is highly costly to overcome them with modern but conventional CMOS sensing technologies. We provide an early demonstration of a proof-of-concept system, with the overarching goals of curating a neuromorphic imaging cytometry (NIC) dataset, multimodal analysis techniques, and associated deep-learning models. We are working towards this goal by utilising an event-based camera to perform flow-imaging cytometry to capture cells in motion and train neural networks capable of identifying their morphology (size and shape) and identities. We propose that implementing a neuromorphic sensory system or developing a new class of event-based cameras customised for this purpose with our sorting strategy will unbind the applications from the constraints of framerate and provide a cost-efficient, reproducible and high-throughput imaging mechanism. While we target this early work for cell sorting, this novel idea is the first stepping-stone towards a new type of high-throughput and automated high-content image analysis system and screening instrument.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rntbYfAx8_", "year": null, "venue": "EBCCSP 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9291256/9291337/09291353.pdf", "forum_link": "https://openreview.net/forum?id=rntbYfAx8_", "arxiv_id": null, "doi": null }
{ "title": "Discrete and Continuous Optimal Control for Energy Minimization in Real-Time Systems", "authors": [ "Bruno Gaujal", "Alain Girault", "Stéphan Plassart" ], "abstract": "This paper presents a discrete time Markov Decision Process (MDP) to compute the optimal speed scaling policy to minimize the energy consumption of a single processor executing a finite set of jobs with real-time constraints. We further show that the optimal solution is the same when speed change decisions are taken at arrival times of the jobs as well as when decisions are taken in continuous time.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "8pFgCVE_fT", "year": null, "venue": "EBCCSP 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=8pFgCVE_fT", "arxiv_id": null, "doi": null }
{ "title": "High-Throughput Asynchronous Convolutions for High-Resolution Event-Cameras", "authors": [ "Leandro de Souza Rosa", "Aiko Dinale", "Simeon Bamford", "Chiara Bartolozzi", "Arren Glover" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "5DsA0G_8yG", "year": null, "venue": "EBCCSP 2019", "pdf_link": "https://ieeexplore.ieee.org/iel7/8826208/8836717/08836905.pdf", "forum_link": "https://openreview.net/forum?id=5DsA0G_8yG", "arxiv_id": null, "doi": null }
{ "title": "Tradeoff Analysis of Discrepancy-Based Adaptive Thresholding Approach", "authors": [ "Michael Lunglmayr", "Saeed Mian Qaisar", "Bernhard Alois Moser" ], "abstract": "Weyl's discrepancy measure distinguishes by its property to best-approximate isometry for threshold based sampling. We discuss how the resulting quasi-isometry motivates the design of novel adaptive thresholding approaches. Our experimental analysis on the basis of send-on-delta samples shows that a significant reduction in the number of samples can be achieved with an approximately unchanged signal-to-noise ratio.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "iKpR2C0zdxE", "year": null, "venue": "EBCCSP 2019", "pdf_link": "https://ieeexplore.ieee.org/iel7/8826208/8836717/08836903.pdf", "forum_link": "https://openreview.net/forum?id=iKpR2C0zdxE", "arxiv_id": null, "doi": null }
{ "title": "An Event-Driven Approach for Time-Domain Recognition of Spoken English Letters", "authors": [ "Saeed Mian Qaisar", "S. Laskar", "Michael Lunglmayr", "Bernhard Alois Moser", "R. Abdulbaqi", "R. Banafia" ], "abstract": "This paper suggest an original approach, based on event-driven processing, for time-domain spoken English letter features extraction and classification. The idea is founded on smartly combining the event-driven signal acquisition and segmentation along with local features extraction and voting based classification for realizing an efficient and high precision solution. The incoming spoken letter is digitized with an event-driven A/D converter (EDADC). An activity selection mechanism is employed to efficiently segment the EDADC output. Later on, features of these segments are mined by performing the time-domain analysis. The recognition is done with a specifically developed voting based classifier. The classification algorithm is described. The system functionality is tested for a case study and results are presented. A 9.8 folds reduction in accumulated count of samples is achieved by the devised approach as compared to the traditional counterparts. It aptitudes a significant processing gain and efficiency increase in terms of utilization of power of the suggested approach in contrast to the counterparts. The proposed system attains an average subject dependent recognition accuracy of 92.2%. It demonstrates the potential of using the suggested solution for the realization of computationally efficient automatic speech recognition applications.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "y9fhONgsxfq", "year": null, "venue": "EBCCSP 2017", "pdf_link": "https://ieeexplore.ieee.org/iel7/8013515/8022796/08022834.pdf", "forum_link": "https://openreview.net/forum?id=y9fhONgsxfq", "arxiv_id": null, "doi": null }
{ "title": "Estimating the signal reconstruction error from threshold-based sampling without knowing the original signal", "authors": [ "Bernhard Alois Moser" ], "abstract": "The problem of estimating the accuracy of signal reconstruction from threshold-based sampling, by only taking the sampling output into account, is addressed. The approach is based on re-sampling the reconstructed signal and the application of a distance measure in the output space which satisfies the condition of quasi-isometry. The quasi-isometry property allows to estimate the reconstruction accuracy from the matching accuracy between the sign sequences resulting from sampling and the re-sampling after reconstruction. This approach is exemplified by means of leaky integrate-and-fire. It is shown that this approach can be used for parameter tuning for optimizing the reconstruction accuracy.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "05pPdxuUE9n", "year": null, "venue": "EBCCSP 2016", "pdf_link": "https://ieeexplore.ieee.org/iel7/7585873/7605073/07605276.pdf", "forum_link": "https://openreview.net/forum?id=05pPdxuUE9n", "arxiv_id": null, "doi": null }
{ "title": "On preserving metric properties of integrate-and-fire sampling", "authors": [ "Bernhard Alois Moser" ], "abstract": "The leaky integrate-and-fire model (LIF), which consists of a leaky integrator followed by a threshold-based comparator, is analyzed from a mathematical metric analysis point of view. The question is addressed whether metric properties are preserved under this non-linear operator that maps input signals to spike trains, or, synonymously, event sequences. By measuring the distance between input signals by means of Hermann Weyl's discrepancy norm and applying its discrete counterpart to measure the distance between event sequences, it is proven that LIF approximately preserves the metric. It turns out that in this setting, for arbitrarily small thresholds, LIF is an asymptotic isometry.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "yFiWeMSPxMl", "year": null, "venue": "EBCCSP 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7274873/7300638/07300676.pdf", "forum_link": "https://openreview.net/forum?id=yFiWeMSPxMl", "arxiv_id": null, "doi": null }
{ "title": "Matching event sequences approach based on Weyl's discrepancy norm", "authors": [ "Bernhard Alois Moser" ], "abstract": "A novel approach for matching event sequences, that result from threshold-based sampling, is introduced. This approach relies on Hermann Weyl's discrepancy norm, which plays a central role in the context of stability analysis of threshold-based sampling. This metric is based on a maximal principle that evaluates intervals of maximal partial sums. It is shown that minimal length intervals of maximal discrepancy can be exploited, in order to efficiently cluster spikes by means of approximating step functions. In contrast to ordinary spikes, these spike clusters can not only be shifted, deleted or inserted, but also stretched and shrinked, which allows more flexibility in the matching process. A dynamic programming approach is applied in order to minimizing an energy functional of such deformation manipulations. Simulations based on integrate-and-fire sampling show its potential above all regarding robustness.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "kLUrT3XvOls", "year": null, "venue": "EBCCSP 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7274873/7300638/07300692.pdf", "forum_link": "https://openreview.net/forum?id=kLUrT3XvOls", "arxiv_id": null, "doi": null }
{ "title": "Stability of threshold-based sampling as metric problem", "authors": [ "Bernhard Alois Moser" ], "abstract": "Threshold-based sampling schemes such send-on-delta, level-crossing with hysteresis and integrate-and-fire are studied as non-linear input-output systems that map Lipschitz continuous signals to event sequences with -1 and 1 entries. By arguing that stability requires an event sequence of alternating -1 and 1 entries to be close to the zero-sequence w.r.t. the given event metric, it is shown that stability is a metric problem. By introducing the transcription operator T, which cancels subsequent events of alternating signs, a necessary criterion for stability is derived. This criterion states that a stable event metric preserves boundedness of an input signal w.r.t to the uniform norm. As a byproduct of its proof a fundamental inequality is deduced that relates the operator T with Hermann Weyl's discrepancy norm and the uniform norm of the input signal.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "RRYgDdMA6jr", "year": null, "venue": "EBCCSP 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7274873/7300638/07300691.pdf", "forum_link": "https://openreview.net/forum?id=RRYgDdMA6jr", "arxiv_id": null, "doi": null }
{ "title": "On the choice of the event trigger in event-based estimation", "authors": [ "Sebastian Trimpe", "Marco C. Campi" ], "abstract": "In event-based state estimation, the event trigger decides whether or not a measurement is used for updating the state estimate. In a remote estimation scenario, this allows for trading off estimation performance for communication, and thus saving resources. In this paper, popular event triggers for estimation, such as send-on-delta (SoD), measurement-based triggering (MBT), variance-based triggering (VBT), and relevant sampling (RS), are compared for the scenario of a scalar linear process with Gaussian noise. First, the analysis of the information pattern underlying the triggering decision reveals a fundamental advantage of triggers employing the real-time measurement in their decision (such as MBT, RS) over those that do not (VBT). Second, numerical simulation studies support this finding and, moreover, provide a quantitative evaluation of the triggers in terms of their average estimation versus communication performance.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LEHglb7oqzA", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=LEHglb7oqzA", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xvSaYZN58Cc", "year": null, "venue": "ECAL 2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=xvSaYZN58Cc", "arxiv_id": null, "doi": null }
{ "title": "Action and perception for spatiotemporal patterns", "authors": [ "Martin Biehl", "Daniel Polani" ], "abstract": "This is a contribution to the formalization of the concept of agents in multivariate Markov chains. Agents are commonly defined as entities that act, perceive, and are goal-directed. In a multivariate Markov chain (e.g. a cellular automaton) the transition matrix completely determines the dynamics. This seems to contradict the possibility of acting entities within such a system. Here we present definitions of actions and perceptions within multivariate Markov chains based on entitysets. Entity-sets represent a largely independent choice of a set of spatiotemporal patterns that are considered as all the entities within the Markov chain. For example, the entityset can be chosen according to operational closure conditions or complete specific integration. Importantly, the perceptionaction loop also induces an entity-set and is a multivariate Markov chain. We then show that our definition of actions leads to non-heteronomy and that of perceptions specialize to the usual concept of perception in the perception-action loop.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "mT0ihMJfSV", "year": null, "venue": null, "pdf_link": "/pdf/976350d9a4c11d1885452277f96941b081959603.pdf", "forum_link": "https://openreview.net/forum?id=mT0ihMJfSV", "arxiv_id": null, "doi": null }
{ "title": "Inter-foetus Membrane Segmentation for TTTS Using Adversarial Networks", "authors": [ "A Casella", "S Moccia", "E Frontoni", "D Paladini", "E De Momi", "LS Mattos" ], "abstract": "Twin-to-Twin Transfusion Syndrome is commonly treated with minimally invasive laser surgery in fetoscopy. The inter-foetal membrane is used as a reference to find abnormal anastomoses. Membrane identification is a challenging task due to small field of view of the camera, presence of amniotic liquid, foetus movement, illumination changes and noise. This paper aims at providing automatic and fast membrane segmentation in fetoscopic images. We implemented an adversarial network consisting of two Fully-Convolutional Neural Networks. The former (the segmentor) is a segmentation network inspired by U-Net and integrated with residual blocks, whereas the latter acts as critic and is made only of the encoding path of the segmentor. A dataset of 900 images acquired in 6 surgical cases was collected and labelled to validate the proposed approach. The adversarial networks achieved a median Dice similarity coefficient of 91.91% with Inter-Quartile Range (IQR) of 4.63%, overcoming approaches based on U-Net (82.98%-IQR: 14.41%) and U-Net with residual blocks (86.13%-IQR: 13.63%). Results proved that the proposed architecture could be a valuable and robust solution to assist surgeons in providing membrane identification while performing fetoscopic surgery.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "e1srX7cgX47", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=e1srX7cgX47", "arxiv_id": null, "doi": null }
{ "title": "Model based policy tuning akin to TD(\\lambda) with model free base line.", "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "plpbiLbnG7", "year": null, "venue": "Offline RL Workshop NeurIPS 2022", "pdf_link": "/pdf/ee9cb242cc57dfdab1e00c270c002e67b41aab6c.pdf", "forum_link": "https://openreview.net/forum?id=plpbiLbnG7", "arxiv_id": null, "doi": null }
{ "title": "Benchmarking Offline Reinforcement Learning Algorithms for E-Commerce Order Fraud Evaluation", "authors": [ "Soysal Degirmenci", "Chris Jones" ], "abstract": "Amazon and other e-commerce sites must employ mechanisms to protect their millions of customers from fraud, such as unauthorized use of credit cards. One such mechanism is order fraud evaluation, where systems evaluate orders for fraud risk, and either “pass” the order, or take an action to mitigate high risk. Order fraud evaluation systems typically use binary classification models that distinguish fraudulent and legitimate orders, to assess risk and take action. We seek to devise a system that considers both financial losses of fraud and long-term customer satisfaction, which may be impaired when incorrect actions are applied to legitimate customers. We propose that taking actions to optimize long-term impact can be formulated as a Reinforcement Learning (RL) problem. Standard RL methods require online interaction with an environment to learn, but this is not desirable in high-stakes applications like order fraud evaluation. Offline RL algorithms learn from logged data collected from the environment, without the need for online interaction, making them suitable for our use case. We show that offline RL methods outperform traditional binary classification solutions in SimStore, a simplified e-commerce simulation that incorporates order fraud risk. We also propose a novel approach to training offline RL policies that adds a new loss term during training, to better align policy exploration with taking correct actions.", "keywords": [ "reinforcement learning", "offline reinforcement learning", "e-commerce", "fraud", "simulation" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "unrz7tsarr1", "year": null, "venue": "EAAMO 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=unrz7tsarr1", "arxiv_id": null, "doi": null }
{ "title": "Optimal Testing and Containment Strategies for Universities in Mexico amid COVID-19✱", "authors": [ "Edwin Lock", "Francisco Javier Marmolejo Cossío", "Jakob Jonnerby", "Ninad Rajgopal", "Héctor Alonso Guzmán-Gutiérrez", "Luis Alejandro Benavides-Vázquez", "José Roberto Tello-Ayala", "Philip Lazos" ], "abstract": "This work sets out a testing and containment framework developed for reopening universities in Mexico following the lockdown due to COVID-19. We treat diagnostic testing as a resource allocation problem and develop a testing allocation mechanism and practical web application to assist educational institutions in making the most of limited testing resources. In addition to the technical results and tools, we also provide a reflection on our current experience of running a pilot of our framework within the Instituto Tecnológico y de Estudios Superiores de Monterrey (ITESM), a leading private university in Mexico, as well as on our broader experience bridging research with academic policy in the Mexican context.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "SWZv2Zbvl9", "year": null, "venue": "EAAMO 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=SWZv2Zbvl9", "arxiv_id": null, "doi": null }
{ "title": "Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems", "authors": [ "A. Feder Cooper", "Karen Levy", "Christopher De Sa" ], "abstract": "Trade-offs between accuracy and efficiency pervade law, public health, and other non-computing domains, which have developed policies to guide how to balance the two in conditions of uncertainty. While computer science also commonly studies accuracy-efficiency trade-offs, their policy implications remain poorly examined. Drawing on risk assessment practices in the US, we argue that, since examining these trade-offs has been useful for guiding governance in other domains, we need to similarly reckon with these trade-offs in governing computer systems. We focus our analysis on distributed machine learning systems. Understanding the policy implications in this area is particularly urgent because such systems, which include autonomous vehicles, tend to be high-stakes and safety-critical. We 1) describe how the trade-off takes shape for these systems, 2) highlight gaps between existing US risk assessment standards and what these systems require to be properly assessed, and 3) make specific calls to action to facilitate accountability when hypothetical risks concerning the accuracy-efficiency trade-off become realized as accidents in the real world. We close by discussing how such accountability mechanisms encourage more just, transparent governance aligned with public values.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "UYOK8tQH7N", "year": null, "venue": "EAAMO 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=UYOK8tQH7N", "arxiv_id": null, "doi": null }
{ "title": "FairEGM: Fair Link Prediction and Recommendation via Emulated Graph Modification", "authors": [ "Sean Current", "Yuntian He", "Saket Gurukar", "Srinivasan Parthasarathy" ], "abstract": "As machine learning becomes more widely adopted across domains, it is critical that researchers and ML engineers think about the inherent biases in the data that may be perpetuated by the model. Recently, many studies have shown that such biases are also imbibed in Graph Neural Network (GNN) models if the input graph is biased, potentially to the disadvantage of underserved and underrepresented communities. In this work, we aim to mitigate the bias learned by GNNs by jointly optimizing two different loss functions: one for the task of link prediction and one for the task of demographic parity. We further implement three different techniques inspired by graph modification approaches: the Global Fairness Optimization (GFO), Constrained Fairness Optimization (CFO), and Fair Edge Weighting (FEW) models. These techniques mimic the effects of changing underlying graph structures within the GNN and offer a greater degree of interpretability over more integrated neural network methods. Our proposed models emulate microscopic or macroscopic edits to the input graph while training GNNs and learn node embeddings that are both accurate and fair under the context of link recommendations. We demonstrate the effectiveness of our approach on four real world datasets and show that we can improve the recommendation fairness by several factors at negligible cost to link prediction accuracy.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NPnlQHe6dLW", "year": null, "venue": "EAAMO 2023", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=NPnlQHe6dLW", "arxiv_id": null, "doi": null }
{ "title": "Average Envy-freeness for Indivisible Items", "authors": [ "Qishen Han", "Biaoshuai Tao", "Lirong Xia" ], "abstract": "In fair division applications, agents may have unequal entitlements reflecting their different contributions. Moreover, the contributions of agents may depend on the allocation itself. Previous fairness notions designed for agents with equal or pre-determined entitlements fail to characterize fairness in these collaborative allocation scenarios. We propose a novel fairness notion of average envy-freeness (AEF), where the envy of agents is defined on the average value of items in the bundles. Average envy-freeness provides a reasonable comparison between agents based on the items they receive and reflects their entitlements. We study the complexity of finding AEF and its relaxation, average envy-freeness up to one item (AEF-1). While deciding if an AEF allocation exists is NP-complete, an AEF-1 allocation is guaranteed to exist and can be computed in polynomial time. We also study allocation with quotas, i.e. restrictions on the sizes of the bundles. We prove that finding an AEF-1 allocation satisfying quotas is NP-hard. Nevertheless, in the instances with a fixed number of agents, we propose polynomial-time algorithms to find an AEF-1 allocation with quotas for binary valuation and an approximated AEF-1 allocation with quotas for general valuation.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "DstiBDJtHIQ", "year": null, "venue": "EAAMO 2023", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3617694.3623222", "forum_link": "https://openreview.net/forum?id=DstiBDJtHIQ", "arxiv_id": null, "doi": null }
{ "title": "Counterfactual Situation Testing: Uncovering Discrimination under Fairness given the Difference", "authors": [ "José Manuel Álvarez Colmenares", "Salvatore Ruggieri" ], "abstract": "We present counterfactual situation testing (CST), a causal data mining framework for detecting individual discrimination in a dataset of classifier decisions. CST answers the question “what would have been the model outcome had the individual, or complainant, been of a different protected status?” in an actionable and meaningful way. It extends the legally-grounded situation testing of Thanh et al. [62] by operationalizing the notion of fairness given the difference of Kohler-Hausmann [38] using counterfactual reasoning. In standard situation testing we find for each complainant similar protected and non-protected instances in the dataset; construct respectively a control and test group; and compare the groups such that a difference in decision outcomes implies a case of potential individual discrimination. In CST we avoid this idealized comparison by establishing the test group on the complainant’s counterfactual generated via the steps of abduction, action, and prediction. The counterfactual reflects how the protected attribute, when changed, affects the other seemingly neutral attributes of the complainant. Under CST we, thus, test for discrimination by comparing similar individuals within each group but dissimilar individuals across both groups for each complainant. Evaluating it on two classification scenarios, CST uncovers a greater number of cases than ST, even when the classifier is counterfactually fair.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "80yylUybFO3", "year": null, "venue": "EAAMO 2023", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3617694.3623257", "forum_link": "https://openreview.net/forum?id=80yylUybFO3", "arxiv_id": null, "doi": null }
{ "title": "The Unequal Opportunities of Large Language Models: Examining Demographic Biases in Job Recommendations by ChatGPT and LLaMA", "authors": [ "Abel Salinas", "Parth Vipul Shah", "Yuzhong Huang", "Robert McCormack", "Fred Morstatter" ], "abstract": "Warning: This paper discusses and contains content that is offensive or upsetting. Large Language Models (LLMs) have seen widespread deployment in various real-world applications. Understanding these biases is crucial to comprehend the potential downstream consequences when using LLMs to make decisions, particularly for historically disadvantaged groups. In this work, we propose a simple method for analyzing and comparing demographic bias in LLMs, through the lens of job recommendations. We demonstrate the effectiveness of our method by measuring intersectional biases within ChatGPT and LLaMA, two cutting-edge LLMs. Our experiments primarily focus on uncovering gender identity and nationality bias; however, our method can be extended to examine biases associated with any intersection of demographic identities. We identify distinct biases in both models toward various demographic identities, such as both models consistently suggesting low-paying jobs for Mexican workers or preferring to recommend secretarial roles to women. Our study highlights the importance of measuring the bias of LLMs in downstream applications to understand the potential for harm and inequitable outcomes. Our code is available at https://github.com/Abel2Code/Unequal-Opportunities-of-LLMs.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Qz0nEhKB8D", "year": null, "venue": "EAAMO 2022", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3551624.3555294", "forum_link": "https://openreview.net/forum?id=Qz0nEhKB8D", "arxiv_id": null, "doi": null }
{ "title": "AI-Competent Individuals and Laypeople Tend to Oppose Facial Analysis AI", "authors": [ "Chiara Ullstein", "Severin Engelmann", "Orestis Papakyriakopoulos", "Michel Hohendanner", "Jens Grossklags" ], "abstract": "Recent advances in computer vision analysis have led to a debate about the kinds of conclusions artificial intelligence (AI) should make about people based on their faces. Some scholars have argued for supposedly “common sense” facial inferences that can be reliably drawn from faces using AI. Other scholars have raised concerns about an automated version of “physiognomic practices” that facial analysis AI could entail. We contribute to this multidisciplinary discussion by exploring how individuals with AI competence and laypeople evaluate facial analysis AI inference-making. Ethical considerations of both groups should inform the design of ethical computer vision AI. In a two-scenario vignette study, we explore how ethical evaluations of both groups differ across a low-stake advertisement and a high-stake hiring context. Next to a statistical analysis of AI inference ratings, we apply a mixed methods approach to evaluate the justification themes identified by a qualitative content analysis of participants’ 2768 justifications. We find that people with AI competence (N=122) and laypeople (N=122; validation N=102) share many ethical perceptions about facial analysis AI. The application context has an effect on how AI inference-making from faces is perceived. While differences in AI competence did not have an effect on inference ratings, specific differences were observable for the ethical justifications. A validation laypeople dataset confirms these results. Our work offers a participatory AI ethics approach to the ongoing policy discussions on the normative dimensions and implications of computer vision AI. Our research seeks to inform, challenge, and complement conceptual and theoretical perspectives on computer vision AI ethics.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "euc158nuut", "year": null, "venue": "EAAMO 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=euc158nuut", "arxiv_id": null, "doi": null }
{ "title": "Test-optional Policies: Overcoming Strategic Behavior and Informational Gaps", "authors": [ "Zhi Liu", "Nikhil Garg" ], "abstract": "Due to the Covid-19 pandemic, more than 500 US-based colleges and universities went “test-optional” for admissions and promised that they would not penalize applicants for not submitting test scores, part of a longer trend to rethink the role of testing in college admissions. However, it remains unclear how (and whether) a college can simultaneously use test scores for those who submit them, while not penalizing those who do not–and what that promise even means. We formalize these questions, and study how a college can overcome two challenges with optional testing: strategic applicants (when those with low test scores can pretend to not have taken the test), and informational gaps (it has more information on those who submit a test score than those who do not). We find that colleges can indeed do so, if and only if they are able to use information on who has test access and are willing to randomize admissions.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_bI0fZb1MuS", "year": null, "venue": "EAAMO 2022", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3551624.3555300", "forum_link": "https://openreview.net/forum?id=_bI0fZb1MuS", "arxiv_id": null, "doi": null }
{ "title": "Mathematically Quantifying Non-responsiveness of the 2021 Georgia Congressional Districting Plan", "authors": [ "Zhanzhan Zhao", "Cyrus Hettle", "Swati Gupta", "Jonathan Christopher Mattingly", "Dana Randall", "Gregory Joseph Herschlag" ], "abstract": "To audit political district maps for partisan gerrymandering, one may determine a baseline for the expected distribution of partisan outcomes by sampling an ensemble of maps. One approach to sampling is to use redistricting policy as a guide to precisely codify preferences between maps. Such preferences give rise to a probability distribution on the space of redistricting plans, and Metropolis-Hastings methods allow one to sample ensembles of maps from the specified distribution. Although these approaches have nice theoretical properties and have successfully detected gerrymandering in legal settings, sampling from commonly-used policy-driven distributions is often computationally difficult. As of yet, there is no algorithm that can be used off-the-shelf for checking maps under generic redistricting criteria. In this work, we mitigate the computational challenges in a Metropolized-sampling technique through a parallel tempering method combined with ReCom[11] and, for the first time, validate that such techniques are effective on these problems at the scale of statewide precinct graphs for more policy informed measures. We develop these improvements through the first case study of district plans in Georgia. Our analysis projects that any election in Georgia will reliably elect 9 Republicans and 5 Democrats under the enacted plan. This result is largely fixed even as public opinion shifts toward either party and the partisan outcome of the enacted plan does not respond to the will of the people. Only 0.12% of the ∼ 160K plans in our ensemble were similarly non-responsive.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "fGhGDInvfX", "year": null, "venue": "EAAMO 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=fGhGDInvfX", "arxiv_id": null, "doi": null }
{ "title": "Preserving Diversity when Partitioning: A Geometric Approach", "authors": [ "Sebastian Perez-Salazar", "Alfredo Torrico", "Victor Verdugo" ], "abstract": "Diversity plays a crucial role in multiple contexts such as team formation, representation of minority groups and generally when allocating resources fairly. Given a community composed by individuals of different types, we study the problem of partitioning this community such that the global diversity is preserved as much as possible in each subgroup. We consider the diversity metric introduced by Simpson in his influential work that, roughly speaking, corresponds to the inverse probability that two individuals are from the same type when taken uniformly at random, with replacement, from the community of interest. We provide a novel perspective by reinterpreting this quantity in geometric terms. We characterize the instances in which the optimal partition exactly preserves the global diversity in each subgroup. When this is not possible, we provide an efficient polynomial-time algorithm that outputs an optimal partition for the problem with two types. Finally, we discuss further challenges and open questions for the problem that considers more than two types.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "1lPK14svRu", "year": null, "venue": "EAAMO 2021", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3465416.3483304", "forum_link": "https://openreview.net/forum?id=1lPK14svRu", "arxiv_id": null, "doi": null }
{ "title": "Project 412Connect: Bridging Students and Communities", "authors": [ "Alex DiChristofano", "Michael L. Hamilton", "Sera Linardi", "Mara F. McCloud" ], "abstract": "In this work, we describe some of the challenges Black-owned businesses face in the United States and specifically in the city of Pittsburgh. Taking into account local dynamics and the communicated desires of Black-owned businesses in the Pittsburgh region, we determine that university students represent an under-utilized market for these businesses. We investigate the root causes for this inefficiency and design and implement a platform, 412Connect (https://www.412connect.org/), to increase online support for Pittsburgh Black-owned businesses from students in the Pittsburgh university community. The site operates by coordinating interactions between student users and participating businesses via targeted recommendations. We describe the project from its conception, paying special attention to our motivation and design choices. These choices are aided by two simple models for badge design and recommendation systems that may be of theoretical interest. Along the way, we highlight challenges and lessons from coordinating a grassroots volunteer project working in conjunction with community partners and the opportunities and pitfalls of engaged scholarship.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "uP4sRcMyPK8", "year": null, "venue": "EAAMO 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=uP4sRcMyPK8", "arxiv_id": null, "doi": null }
{ "title": "Open Data Standard and Analysis Framework: Towards Response Equity in Local Governments", "authors": [ "Joy Hsu", "Ramya Ravichandran", "Edwin Zhang", "Christine Keung" ], "abstract": "There is an increasing need for open data in governments and systems to analyze equity at large scale. Local governments often lack the necessary technical tools to identify and tackle inequities in their communities. Moreover, these tools may not generalize across departments and cities nor be accessible to the public. To this end, we propose a system that facilitates centralized analyses of publicly available government datasets through 1) a US Census-linked API, 2) an equity analysis playbook, and 3) an open data standard to regulate data intake and support equitable policymaking.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xn-VuNHWFnq", "year": null, "venue": "EAAMO 2021", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3465416.3483298", "forum_link": "https://openreview.net/forum?id=xn-VuNHWFnq", "arxiv_id": null, "doi": null }
{ "title": "The Stereotyping Problem in Collaboratively Filtered Recommender Systems", "authors": [ "Wenshuo Guo", "Karl Krauth", "Michael I. Jordan", "Nikhil Garg" ], "abstract": "Recommender systems play a crucial role in mediating our access to online information. We show that such algorithms induce a particular kind of stereotyping: if preferences for a set of items are anti-correlated in the general user population, then those items may not be recommended together to a user, regardless of that user’s preferences and rating history. First, we introduce a notion of joint accessibility, which measures the extent to which a set of items can jointly be accessed by users. We then study joint accessibility under the standard factorization-based collaborative filtering framework, and provide theoretical necessary and sufficient conditions when joint accessibility is violated. Moreover, we show that these conditions can easily be violated when the users are represented by a single feature vector. To improve joint accessibility, we further propose an alternative modelling fix, which is designed to capture the diverse multiple interests of each user using a multi-vector representation. We conduct extensive experiments on real and simulated datasets, demonstrating the stereotyping problem with standard single-vector matrix factorization models.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "on8jLyT7ztl", "year": null, "venue": "EAAMO 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=on8jLyT7ztl", "arxiv_id": null, "doi": null }
{ "title": "Disaggregated Interventions to Reduce Inequality", "authors": [ "Lucius Bynum", "Joshua R. Loftus", "Julia Stoyanovich" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "oe9frJCFsej", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=oe9frJCFsej", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "iXLgwUX2AIt", "year": null, "venue": "e-Science 2005", "pdf_link": "https://ieeexplore.ieee.org/iel5/10501/33262/01572202.pdf", "forum_link": "https://openreview.net/forum?id=iXLgwUX2AIt", "arxiv_id": null, "doi": null }
{ "title": "Putting Semantics into e-Science and Grids", "authors": [ "Carole A. Goble" ], "abstract": "What is the semantic grid? How can e-Science benefit from the technologies of the semantic grid? Can we build a semantic Web for e-Science? Would that differ from a semantic grid? Given our past experiences with scientists, grid developers and semantic Web researchers, what are the prospects, and pitfalls, of putting semantics into e-Science applications and grid infrastructure?", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "COIaQ6I6pGO", "year": null, "venue": "CADE 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=COIaQ6I6pGO", "arxiv_id": null, "doi": null }
{ "title": "System Description: E.T. 0.1", "authors": [ "Cezary Kaliszyk", "Stephan Schulz", "Josef Urban", "Jirí Vyskocil" ], "abstract": "E.T. 0.1 is a meta-system specialized for theorem proving over large first-order theories containing thousands of axioms. Its design is motivated by the recent theorem proving experiments over the Mizar, Flyspeck and Isabelle data-sets. Unlike other approaches, E.T. does not learn from related proofs, but assumes a situation where previous proofs are not available or hard to get. Instead, E.T. uses several layers of complementary methods and tools with different speed and precision that ultimately select small sets of the most promising axioms for a given conjecture. Such filtered problems are then passed to E, running a large number of suitable automatically invented theorem-proving strategies. On the large-theory Mizar problems, E.T. considerably outperforms E, Vampire, and any other prover that does not learn from related proofs. As a general ATP, E.T. improved over the performance of unmodified E in the combined FOF division of CASC 2014 by 6 %.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "JafPR3VQpKf", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=JafPR3VQpKf", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "jTbsklO4eQ", "year": null, "venue": "ECAI 2016", "pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-1818", "forum_link": "https://openreview.net/forum?id=jTbsklO4eQ", "arxiv_id": null, "doi": null }
{ "title": "Planning Tourist Agendas for Different Travel Styles", "authors": [ "Jesús Ibáñez-Ruiz", "Laura Sebastia", "Eva Onaindia" ], "abstract": "This paper describes e-Tourism2.0, a web-based recommendation and planning system for tourism activities that takes into account the preferences that define the travel style of the user. e-Tourism2.0 features a recommender system with access to various web services in order to obtain updated information about locations, monuments, opening hours, or transportation modes. The planning system of e-Tourism2.0 models the taste and travel style preferences of the user and creates a planning problem which is later solved by a planner, returning a personalized plan (agenda) for the tourist. e-Tourism2.0 contributes with a special module that calculates the recommendable duration of a visit for a user and the modeling of preferences into a planning problem.", "keywords": [], "raw_extracted_content": "Planning Tourist Agendas for Different Travel Styles\nJesus Iba˜ nez andLaura Sebastia andEva Onaindia1\nAbstract. This paper describes e-Tourism2.0, a web-based\nrecommendation and planning system for tourism activities\nthat takes into account the preferences that define the travelstyle of the user. e-Tourism2.0 features a recommender sys-\ntem with access to various web services in order to obtainupdated information about locations, monuments, opening\nhours, or transportation modes. The planning system of e-\nTourism2.0 models the taste and travel style preferences of\nthe user and creates a planning problem which is later solved\nby a planner, returning a personalized plan (agenda) for thetourist. e-Tourism2.0 contributes with a special module that\ncalculates the recommendable duration of a visit for a userand the modeling of preferences into a planning problem.\n1 INTRODUCTION\nARecommender System (RS)[13]isapersonalizationtool\naimed to provide the items that best fit the individual tastesof people. A RS infers the user preferences by analyzing theavailable user data, information of other users and of the en-vironment. The target of the extensively popularized TourismRSs (TRSs) is to match the user preferences with the leisureresources and tourist activities of a city [15] by using someinitial data, usually explicitly provided by the user. The rel-\nevance of TRSs relies in their capacity of automatically in-\nferring the user preferences, through an explicit or implicitfeedback of the user, as well as providing the user with apersonal tourist activity agenda. Typically, TRSs use a hy-brid approach of recommendation techniques such as demo-graphic, content-based or collaborative filtering [2] and theyare confined to recommendations within a delimited area orcity since tourism infrastructure is usually developed to pro-mote the tourism demand in particular spots [6, 11].\nThe latest developments in TRSs share a common main-\nstream, that of providing the most user-tailored tourist pro-posal. Hence, some tools like SAMAP [3] elicits a tourist\nplan with recommendations about the transportation mode,restaurants and bars or leisure attractions such as cinemas\nor theaters, all this accompanied with a detailed plan ex-\nplanation. 
Scheduled routes presented in a map along with\na timetable are nowadays a common functionality of many\nTRSs, like e-Tourism [6], which also include context informa-\ntion such as the opening and closing hours of the Points OfInterest (POIs) to visit and the geographical distances be-tween POIs to compute the time to move from one placeto another. Some other tools allow the user to interact withthe plan or develop interfaces specifically designed to be used\n1Universitat Polit` ecnica de Val` encia, Valencia, Spain, Email:\n{jeibrui, lstarin, onaindia }@dsic.upv.esin mobile devices ([12, 16]). Personalization is interpreted inCT-Planner [9] as emphasizing the concept of interactive as-\nsistance between the user and a tour advisor, where the advi-sor offers several plans, learns the tourist preferences, requestsfeedback from the users and customizes the plans accordingly.CT-Planner also accounts for user preferences like the walk-\ning speed or reluctance to walk, in which case the planner willsuggest short walking distances in the plan.\nRecent advances in TRSs go one step ahead towards per-\nsonalization and propose to adapt the duration of the visits\nto the user preferences. For instance, PersTour [10] calcu-\nlates a personalized duration of a visit to POI using the POI\npopularity and the user interest preferences, which are au-tomatically derived from real-life travel sequences based ongeotagged photos. And the work in [14] considers user pref-erences based on the the number of days of the trip and thepace of the tour, that is, whether the user wants to performmany activities in one day or travel at a more relaxed pace.\nIn this paper, we present e-Tourism2.0, a TRS that draws\nupon the recommendation model and planning module of e-\nTourism [6] and significantly enhances the personalization of\nthe recommendations. e-Tourism2.0 improves e-Tourism in\ntwo main aspects:\n•context-aware tool : it establishes a connection to several\nweb services to capture up-to-date context informationsuch as opening hours of POIs to visit, location of POIs,ratings of users, modes of transport in the city, etc.;\n•preference temporal planning : it handles a full range of user\npreferences such as the user interest in visiting a POI, thepace of the tour (relaxed vs busy) and variable durations ofthe visits within a temporal interval; all these preferencesrepresent the user travel style. e-Tourism2.0 uses OPTIC\n[1], a state-of-the-art planner that addresses the full set ofpreferences defined in PDDL3.0 language [7].\nThis paper is organized as follows. Section 2 summarizes\nthe main aspects of e-Tourism. Section 3 explains the pro-\ncedure to calculate the recommended duration of an activityfor a given user and section 4 details the construction of the\nplanning problem and the encoding of the user preferences\nwithin the planning problem. Section 5 shows several cases ofstudy to test whether the defined preferences are taken intoaccount correctly by the planner and last section concludes.\n2e-Tourism2.0 TOOL\ne-Tourism [6] was developed as a web application to generate\nrecommendations about personalized tourist tours in the cityECAI 2016\nG.A. Kaminka et al. (Eds.)\n© 2016 The Authors and IOS Press.\nThis article is published online with Open Access by IOS Press and distributed under the terms\nof the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).\ndoi:10.3233/978-1-61499-672-9-18181818\nFigure 1. e-Tourism2.0 Architecture\nof Valencia (Spain). 
It was intended to be a service for foreign-\nersandlocalstobecomedeeplyfamiliarwiththecityandplanleisure activities. e-Tourism makes recommendations based\non the user’s tastes, her demographic classification, the placesvisitedbytheuserinformertripsand,finally,hercurrentvisitpreferences. One of the main components of e-Tourism is the\nplanning module, aimed at scheduling the recommended ac-tivities. Thus, the output of e-Tourism is a real agenda of\nactivities which not only reflects the user’s tastes but alsoprovides details on when to perform the recommended activ-ities. Specifically, the construction of the agenda takes intoaccount duration of the activities to perform, the openinghours of the places to visit and the geographical distances be-tween places (time to move from one place to another). Allthis information is compiled into a planning problem that canbe formulated as a Constraint Satisfaction Problem or as an\nAutomated Planning Problem [8].\nLetustakethreetourists,Rose,MarkandDavid,interested\ninvisitingValencia.RoseandMarklikevisitingmuseums,butRose likes museums more than Mark. Both decide to visittheNational Museum of Ceramics. Rose wishes to visit the\nmuseum for 2h30min, whereas Mark only wants to be therefor about 1h30min. Moreover, since this is Mark’s first timein Valencia, he would like to include quite a few POIs in hisagenda, namely 5 POIs, and not to have much spare timebetween activities. Rose, however, visited Valencia last yearand she would like to explore in depth two museums thatshe already visited last time. Therefore, she would like heragenda to contain only these two visits over a full day andno much free time between them. In contrast, David has beenin Valencia several times and he would rather include in hisagendatwoorthreequickvisitsandsparetimetowalkaroundand sit in a terrace to have a beer. These three examples showdifferent travel styles around two preferences: the number ofvisits and the time spent in each visit. e-Tourism2.0 handles\ntaste preferences of the user as well as this new type of travel\nstyle preferences.\nThe e-Tourism2.0 architecture is composed of five subsys-\ntems (Figure 1): the control node, responsible of coordinatingthe whole recommendation-planning process, the web appli-cation, the recommender system, the intelligent planner andthe database.\nFigure 2. e-Tourism2.0 system: agenda preferences.\n2.1 Tourist agenda\nWe developed a new web-based interface which can be ac-cessed through different devices such as computers, smart-phones, tablets, etc. The first step in the construction of thetourist agenda is to build the user model . The user regis-\nters in the system and enters her personal details and generalpreferences. With this information the system builds an ini-tial user profile. Besides, each time the user enters the systemfor a new visit she will be requested to introduce her specificpreferences for the current visit, shown in Figure 2: the dateof the visit date, her available time slot (T\ntour\ns,Ttour\ne), the\ntime interval reserved for lunch ( Tlunch\ns,Tlunch\ne), the mode\nof transport she prefers - walking, driving or public trans-port -, her initial location location\ninitialand final destina-\ntionlocation final. 
Moreover, she also indicates her prefer-\nences related to her travel style: pref#visitsindicates if the\nuser prefers to include many or few visits in the tour or hasno preference over it; and pref\noccupation indicates if the user\nprefers to obtain an agenda with a high or a low temporaloccupation or has no preference over it.\nThe second step is to generate a list of activities that\nare likely of interest to the user by means of the Gen-\neralist Recommender System Kernel (GRSK) , which uses a\nmixed hybrid recommendation technique. A detailed descrip-tion of GRSK can be found in [5]. The intelligent planner is in\ncharge of calculating the tourist agenda , scheduling the\nactivitiesrecommendedbytheGRSKaccordingtotherestric-tions of the environment and user preferences with respect tothe configuration of the agenda. Figure 4 shows two agendascomputed for a particular user and a map with the path sheshould follow. When the user logs again in the system, she isasked to rate the activities in the last recommended plan\n(through the option Ratei nt h et o pb a rm e n uo fF i g u r e2 ) .\nThe information obtained from these ratings is further usedto improve the user profile and provide more suitable recom-mendations.J. Ibañez et al. / Planning Tourist Agendas for Different Travel Styles 1819\n2.2 Database\nThe database schema of e-Tourism2.0 i ss h o w ni nF i g u r e\n3. We manage two sets of tables: those used for the recom-\nmendation process and those used for the planning process.Table placesstores information about the POIs to recom-\nmend such as the name or the geographical coordinates. Ta-bleuserscontains personal details of the user, such as the\nname and other demographic data (this is neglected in Fig-ure 3 for the sake of clarity). These two tables are used in bothprocesses. The information used by the GRSK is: (1) tablespreferences, places\npreferences and users preferences,\nwhich store the characteristics of the POIs to recommend andthe user preferences inferred by the GRSK, respectively\n2;( 2 )\ntables history and history data, which store the past in-\nteraction of the user with the system. The planner uses theinformation in table timetables, which stores a list of open-\ning hours for each POI, and movements\ntime, that keeps the\nestimated and actual travelling time between two locationsaccording to the value of travel\nmode(see Figure 3).\n2.3 External data sources\nAs explained above, e-Tourism2.0 accesses various web ser-\nvices in order to obtain some up-to-date information aboutlocation of restaurants and POIs, opening hours, transporta-tion modes, etc. For obtaining this information, we selectedthe Google location and mobility web services, specifically:\n•Google Directions\n3for obtaining a route (path) between\ntwo given coordinates, addresses or name of places. It isalso possible to add some intermediate points in the pathand to select the travel mode (walking, cycling, driving orwith public transport)\n•Google Places\n4for obtaining information about a given\nplace. In e-Tourism2.0 , this service has been used to elicit\nthe opening hours of the places to visit and to find restau-rants close to an specific place.\n•Google Maps\n5for the visualization of the map along with\ntherouteprovidedtotheuserwiththerecommendedplacesto visit.\nInformation like the catalog of POIs or the route between\ntwo places is stored in the database, which allows us to accel-erate the process of calculating the recommendations and theplan. 
However, since information can become obsolete andneeds to be updated, Google web services are periodicallyqueried to update the data (see section 4 for more details).\n3 RECOMMENDATION OF THE VISIT\nDURATION\nThe GRSK of e-Tourism2.0 elicits the list of POIs or activ-\nities to include in the travel agenda of the user according toher preferences. This list is an ordered set of tuples of the\nform:/angbracketleftBig\na,Pr\na/angbracketrightBig\n,w h e r e adenotes the recommended activity\n2A more detailed explanation about the domain ontology can be\nfound in [4].\n3https://developers.google.com/maps/documentation/directions/\n4https://developers.google.com/places/\n5https://developers.google.com/maps/documentation/javascript/\nFigure 3. e-Tourism2.0 system: database.\nandPra∈[0,300] is the estimated degree of interest of the\nuser in activity a.\nF o re a c ha c t i v i t ya, we assign a duration in average, de-\nnoted by μa, which represents the recommendable duration\nofafor a typical tourist. The value of μajoint with σadefine\na normal distribution X(μa,σ2\na). This is used by the GRSK\nto return a time interval that encompasses the minimum and\nmaximumrecommendabledurationof afortheuseraccording\ntoPra. Following the definition of the normal distribution, σa\nis computed as μadivided by α, so that, 68% of tourists spend\n[μa−μa/α,μ a+μa/α] minutes in visiting a, whereas about\n4% of the tourists spend less than μa−2∗μa/αor more than\nμa+2∗μa/αminutes. In our experiments, we set α=5a n d\nwe empirically tested that consistent durations are returned.Our future objective is to estimate this distribution by study-ing the actual behaviour of tourists by means of an analysisof Twitter interactions, similarly to the analysis described in[10].\nOnce the normal distribution X(μ\na,σ2\na) for each activity is\ndefined, the recommended interval (dura\nmin,dura\nmax)i sc o m -\nputed as (X (Pra/300/2),X(Pra/300)). That is, the values of\nprobability that leave an area of the corresponding argumenton the right. For example, let’s assume that the a=National\nMuseum of Ceramics hasμ\na= 180 and, therefore, σa= 36,\nmeaning that a typical tourist would spend 180 minutes vis-iting this museum, and the dispersion for the other touristsis 36 minutes. Then, by the normal distribution, 68% of thetourists spend between [144,216] minutes in this visit and ap-proximately 4% of the tourists spend less than 108 or morethan 252 minutes. If the GRSK determines a degree of inter-est of 100 out of 300 for a given user, the duration intervalwill be [145 ,164], whereas if Pr\nais 260, the duration interval\nwill be [174, 220].\nIn [10], the visit duration is adjusted with the category\nof the activity aand the interest of the user in the category.\nHowever, durations in eTourism2.0 aremoreaccuratebecause\nwe consider the degree of interest of the user in a,n o ti nt h e\ncategory of a. Moreover, since the GRSK returns a tuple of\nthe form/angbracketleftBig\na,Pra,dura\nmin,dura\nmax/angbracketrightBig\nfor eacha, the planner can\nselect the most appropriate duration within the interval theaccording to the travel style preferences of the user.J. Ibañez et al. / Planning Tourist Agendas for Different Travel Styles 1820\n4 PLANNING PROBLEM SOLVING\nThe Control node receives the list of the recommended activ-\nities along with the recommended duration interval from theGRSK and generates the planning problem. 
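As a concrete illustration of the interval computation above, the following minimal sketch (ours, not the GRSK implementation) reproduces the worked example. Note that the numbers reported in the text ([145, 164] for Pr_a = 100 and [174, 220] for Pr_a = 260, with μ_a = 180 and α = 5) are consistent with the quantile (left-tail) reading of X, so the sketch uses norm.ppf; all names are illustrative.

```python
# Minimal sketch of the Section 3 duration-interval computation (not the GRSK code).
# Assumptions: alpha = 5 and the 0-300 interest scale used in the paper; the worked
# example ([145, 164] min for Pr_a = 100 when mu_a = 180) matches the normal
# quantile function (probability mass to the left), hence norm.ppf below.
from scipy.stats import norm

ALPHA = 5        # dispersion factor: sigma_a = mu_a / alpha
PR_MAX = 300     # maximum degree of interest returned by the GRSK

def duration_interval(mu_a, pr_a, alpha=ALPHA):
    """Return (dur_min, dur_max) in minutes for an activity a."""
    sigma_a = mu_a / alpha
    dur_min = norm.ppf(pr_a / PR_MAX / 2, loc=mu_a, scale=sigma_a)
    dur_max = norm.ppf(pr_a / PR_MAX, loc=mu_a, scale=sigma_a)
    return round(dur_min), round(dur_max)

# National Museum of Ceramics example: mu_a = 180 min
print(duration_interval(180, 100))   # -> (145, 164)
print(duration_interval(180, 260))   # -> (174, 220)
```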
Planning a set of recommended activities for a tourist requires some functionalities: (1) temporal planning and management of durative actions (e.g., duration of visits, time spent in transportation, etc.); (2) the ability to reason with temporal constraints (e.g., scheduling the activities within the opening hours of places, planning the tour within the available time slot of the tourist, etc.); and (3) the ability to reason with the tourist preferences (e.g., selecting the preferred activities of the user for planning the tour). Reasoning with time constraints and preferences simultaneously is a big challenge for current temporal planners.

Among the few automated planners capable of handling temporal planning problems with preferences, we opted for OPTIC because it handles version 3.0 of the popular Planning Domain Definition Language (PDDL) [7], including non-fixed durations and soft goals. Soft goals are preferences that we wish to satisfy in order to generate a good plan, but that do not have to be achieved in order for the plan to be correct. We need to identify and describe the preferences in PDDL3.0 as well as state how the satisfaction, or violation, of these constraints affects the quality of a plan. Thus, the violation costs (penalties) associated with the preferences are considered at the time of selecting the best tourist plan, i.e., the plan that satisfies most tourist preferences and thereby minimizes the violation costs. This section describes the automatic generation of the corresponding planning problem in PDDL3.0.

4.1 Initial state

The specific values of the variables of a problem are described in the initial state by means of predicates and functions. The predicates and functions for an activity are:

• The interval duration of an action (activity) a is defined through the functions (min_visit_duration ?a) and (max_visit_duration ?a). They will be assigned the values dur_min^a and dur_max^a returned by the GRSK, respectively.

• An activity a has an opening hour and a closing hour that are specified by timed-initial literals: (at t_open (open a)) and (at t_close (not (open a))), to indicate when the activity is no longer available.

The duration of moving from one location p_j to another location p_k is defined by the function (travelling_time p_j p_k), which returns the time in minutes needed to travel from p_j to p_k using the travel mode indicated by the user. If the duration of this action is not available in the DB from a past user, an estimated duration is calculated with the Haversine formula, used for calculating Earth distances, and the classical uniform linear motion formula, where speed depends on the mode of transport, adding a small correction θ for waiting times:

EstimTime(A, B) = Haversine(A, B) / speed + θ · Haversine(A, B)

The predicate (person_at ?l) is used to represent the location of the user and the function (total_available_time) returns the available time of the user, which is initially set to T_finish = T_e^tour − T_s^tour.

We must note that web services are queried to obtain the initial data of the planning problem and that most of these data (timetables, distances between monuments) are stored in the database in order to keep the number of queries as low as possible and quickly retrieve the data during planning. In case a particular distance is not found in the database during the construction of a plan, we estimate the distance with the Haversine formula explained above, thus avoiding access to web services at planning time.
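The estimation fallback can be sketched as follows (illustrative only; the speed table, the units of θ and the example coordinates are assumptions, not values from e-Tourism2.0):

```python
# Minimal sketch of the travel-time fallback described above:
# EstimTime(A, B) = Haversine(A, B) / speed + theta * Haversine(A, B).
# The speed values and the waiting-time correction theta are hypothetical.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
SPEED_KMH = {"walking": 4.5, "driving": 30.0, "public": 20.0}  # assumed speeds
THETA = 0.5  # assumed correction, in minutes per km, for waiting times

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def estim_time_min(a, b, travel_mode="walking"):
    """Estimated travelling time in minutes when no stored duration is available."""
    dist = haversine_km(a, b)
    return dist / SPEED_KMH[travel_mode] * 60 + THETA * dist

# Approximate coordinates: Valencia City Hall to the City of Arts and Sciences
print(round(estim_time_min((39.4699, -0.3763), (39.4561, -0.3545), "walking")))
```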
Estimated times will then be updated after the planning process with the actual values by querying the corresponding web services.

4.2 Goal and preferences

We handle two types of goals: hard goals, which represent the realization of an activity that the user has specified as mandatory, e.g., the final destination at which the user wants to finish the tour: (person_at id_hotelastoria); and soft goals or preferences, which represent the realization of a desirable but non-compulsory activity, e.g., visiting the National Museum of Ceramics: (preference p1 (visit_location id_museumceramics)).

The objective is to find a plan that achieves all the hard goals while minimizing a plan metric so as to maximize preference satisfaction. This is expressed in the form of penalties, so that when a preference is not fulfilled, a penalty is added to the metric. Specifically, we define three types of penalties: for non-visited POIs, for travelling times, and for the non-fulfillment of other configuration parameters of the agenda.

The penalty for non-visited places, aimed to help the planner select the activities with a higher priority for the user, is calculated as the ratio between the priority of the activities not included in the plan Π and the priority of the whole set of recommended activities RA:

P_non_visited = ( Σ_{a ∈ RA−Π} Pr_a ) / ( Σ_{a ∈ RA} Pr_a )

The penalty for movements forces the planner to reduce the time spent in travelling from one location to another, so that closer activities are visited consecutively. This penalty is calculated as the total duration of the move actions of Π, denoted Π_m:

P_move = Σ_{a ∈ Π_m} dur(a)

Initially, the user defines her travel style preferences (see section 2): pref_#visits represents the preference for the number of visits and pref_occupation denotes the user preference for the time to be spent in the visits or, conversely, for the free time between activities. The idea of combining both preferences is to accommodate the different travel styles described in section 2. For example, Rose would set pref_#visits to "few" and pref_occupation to "high". In order to take these preferences into account, two penalties are included.

P_#visits is the penalty that considers the user preference for the number of visits. It takes into account the number of visits in the plan, Π_v, with respect to the number of recommended activities:

P_#visits = ((|RA| − |Π_v|) / |RA|) · T_finish   if pref_#visits = many
P_#visits = (|Π_v| / |RA|) · T_finish            if pref_#visits = few
P_#visits = 0                                    if pref_#visits = indifferent

P_occupation is the penalty that considers the user preference for the temporal occupation. Similarly to P_#visits, P_occupation takes into account the time that remains available in the plan with respect to the total time of the user:

P_occupation = T_finish − Σ_{a ∈ Π} dur(a)   if pref_occupation = high
P_occupation = Σ_{a ∈ Π} dur(a)              if pref_occupation = low
P_occupation = 0                             if pref_occupation = indifferent

Both penalties return a value in the interval [0, T_finish]. The combination of all these penalties defines the plan metric, or optimization function, to be minimized by the planner:

P_total = P_non_visited + P_move + P_#visits + P_occupation

4.3 Actions

Three different types of actions are defined in this tourism domain. Due to space restrictions, we will only focus on the visit action.
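Before turning to the visit action, the way the four penalties combine into the plan metric can be sketched in a few lines (a simplified illustration rather than the PDDL3.0 encoding actually handed to OPTIC; the field names are assumed, and lunch is ignored in the occupation term):

```python
# Simplified illustration of the plan metric P_total defined above.
# Activities are assumed to carry an interest score "pr" (0-300) and a chosen
# duration "dur" in minutes; the real system expresses these penalties as
# PDDL3.0 preferences evaluated by the planner.

def plan_metric(plan_visits, plan_moves, recommended, t_finish,
                pref_visits="indifferent", pref_occupation="indifferent"):
    p_non_visited = (sum(a["pr"] for a in recommended if a not in plan_visits)
                     / sum(a["pr"] for a in recommended))
    p_move = sum(m["dur"] for m in plan_moves)

    ratio_visited = len(plan_visits) / len(recommended)
    if pref_visits == "many":
        p_visits = (1 - ratio_visited) * t_finish
    elif pref_visits == "few":
        p_visits = ratio_visited * t_finish
    else:
        p_visits = 0.0

    # Occupied time approximated as visit plus move durations (lunch omitted here).
    busy = sum(a["dur"] for a in plan_visits) + p_move
    if pref_occupation == "high":
        p_occupation = t_finish - busy
    elif pref_occupation == "low":
        p_occupation = busy
    else:
        p_occupation = 0.0

    return p_non_visited + p_move + p_visits + p_occupation

# Toy usage with made-up activities and an 8-hour slot (480 min).
rec = [{"name": "ceramics", "pr": 260, "dur": 174},
       {"name": "lonja",    "pr": 120, "dur": 45},
       {"name": "cac",      "pr": 200, "dur": 90}]
print(plan_metric(rec[:2], [{"dur": 12}, {"dur": 8}], rec, t_finish=480,
                  pref_visits="many", pref_occupation="high"))
```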
The input parameters of this action are the activity to perform ?a and the user ?y. The duration of the action is defined within the interval (min_visit_time ?a) and (max_visit_time ?a). Moreover, this duration must be smaller than the remaining available time (total_available_time). The planner will choose the actual duration of the action according to these constraints. The conditions for this action to be applicable are: (1) the user must be located in ?a during the whole execution of the action; (2) the POI ?a is open during the whole execution of the action; and (3) the activity ?a has not been performed yet. The effects of the action assert that (1) the activity is done, (2) the number of visited locations is increased and (3) the user's available time is updated according to the activity duration. The action to perform the activity of having lunch is defined similarly to the visit action. The action of moving between locations essentially modifies the current location of the user, the available time of the user and the time spent in travelling from one location to another according to the duration stored in the database.

Regarding the periodical update of the information, only the location of restaurants and the distances between restaurants and monuments are not retrieved beforehand, because the list of restaurants is rather changeable. The planner deals with a 'dummy' restaurant, which is instantiated to a real restaurant that matches the user's tastes after planning.

5 CASES OF STUDY

In this section, we show some cases of study and we analyze whether the resulting plans of the OPTIC planner are compliant with the user preferences. We use two metrics to measure the plan quality:

O_Π = ( Σ_{a ∈ Π} dur(a) ) / T_finish        U_Π = ( Σ_{a ∈ Π_v} Pr_a · dur(a) ) / ( Σ_{a ∈ Π_v} dur(a) )

O_Π is the occupation rate of the plan, i.e., the total time during which the user is performing some action (visiting, moving or having lunch). U_Π is the utility of the plan, defined as the ratio between the priority of the activities performed in a given interval and the total duration of such activities. U_Π returns a value in [0, 300].

Figure 4. Plan generated for case studies C1 and C2

First, we performed a comparison to see how the selection of the mode of transport affects the final plan. Figure 4 shows the paths obtained for two cases: C1 and C2. C1 represents a basic case, where the user only specifies he would rather walk. In this case, the focus of the planner is on finding the best route taking into account the degree of interest of the user in the POIs, the opening hours and the reduction of the walking time. The resulting plan is an agenda with O_Π = 90.625% and U_Π = 217.22. The case C2 differs from C1 in that the user can either walk or use public transport when the distance between two consecutive places is greater than a threshold. In this case, the system generates routes that include POIs in which the user is highly interested but which are far away from each other, returning an agenda with a higher utility. For example, the user is advised to use public transport to visit Museo Principe Felipe, given that this POI is not within walking distance of the previously visited POI in the plan.
In this case, O_Π = 64.57% and U_Π = 251.62.

In the next experiment, we selected fixed initial and final locations, the available time slot, the time reserved for lunch and the transportation means, and we generated a set of cases with all the possible combinations of pref_#visits and pref_occupation. The results are shown in Table 1. Columns #visits and occupation indicate the value of the preferences pref_#visits and pref_occupation, respectively. Column #POIs shows the number of POIs included in the agenda, whereas columns move and visit indicate the percentage of the time devoted to move and visit actions, respectively. Finally, columns O_Π and U_Π indicate the occupation rate and the utility of the plan. The results show that the preferences indicated by the user are effectively reflected in the agenda.

#visits | occupation | #POIs | move  | visit | O_Π   | U_Π
Indiff  | Indiff     | 3     | 7.2   | 28.7  | 52.59 | 220.96
Indiff  | High       | 4     | 21.6  | 61.64 | 99.97 | 242.35
Indiff  | Low        | 2     | 9.81  | 15.37 | 42.22 | 216.74
Many    | Indiff     | 4     | 10.37 | 41.11 | 68.14 | 246.74
Many    | High       | 4     | 21.66 | 61.64 | 99.97 | 242.35
Many    | Low        | 4     | 10.37 | 37.59 | 64.62 | 217.68
Few     | Indiff     | 2     | 6.6   | 19.44 | 42.77 | 236.66
Few     | High       | 3     | 21.66 | 60.9  | 99.23 | 242.01
Few     | Low        | 2     | 9.81  | 15.37 | 42.22 | 216.74

Table 1. Cases of study with different travel styles

We can observe that when only one preference is set, the other preference clearly influences the final result of the agenda. For example, when pref_#visits is set to 'Indiff', the difference in O_Π is more than 57%. This also happens when pref_occupation is set to 'Indiff', where the number of visited POIs goes from 4 to 2, depending on the value of pref_#visits.

When pref_#visits is set to 'Many', the number of POIs is the highest (4), but we can observe a clear difference in O_Π depending on the value of pref_occupation: if it is set to 'High', O_Π almost reaches 100%; and if it is set to 'Low', then the value of O_Π is lower than the value obtained when pref_occupation is 'Indiff'. We can find a similar situation when the number of visits is 'Few', where the only difference is that the number of POIs included in the agenda increases by 1 when pref_occupation is set to 'High'.

In the resulting plans, which we do not show due to space restrictions, we have observed that when pref_occupation is 'Low', irrespective of the number of visits, the duration of each activity is usually set to the minimum value of the duration interval returned by the GRSK. This is reflected in the fact that the visit time when pref_occupation is 'Low' is always lower than the visit times when pref_occupation is 'High' or 'Indiff'. Obviously, U_Π is also the lowest in these cases, and the highest utility is always obtained when O_Π is also the highest.

The percentage of time devoted to travelling actions is usually around 10%, except in the cases where pref_occupation is 'High'. This is because, in this particular case of study, the user must travel to a distant POI to obtain a high value of occupation.

The tourist-tailored plans obtained in the cases of study are the result of the planner's performance and of a faithful and consistent modeling of the user preferences and the corresponding penalties.

6 CONCLUSIONS

This paper describes e-Tourism2.0, an enhanced recommendation and planning system for tourist activities in the city of Valencia (Spain). e-Tourism2.0 offers a personalized recommendation of the duration of the visits suited to the interest of the user in the place to visit.
It also handles user prefer-ences related with the configuration of the agenda, particu-larly travel style preferences in terms of the number of placesto visit and the desired temporal occupation of the tour.\nWe tested the adaptiveness of the plans to the user prefer-\nences through some cases of study. From the results we canconclude that an accurate modeling of the user preferences isvery relevant to obtain plans that effectively reflect the tastesand travel style preferences of the tourist.\nACKNOWLEDGEMENTS\nThis work has been partly supported by the SpanishMINECO under the project TIN2014-55637-C2-2-R, and theValencian project PROMETEOII/2013/019.\nREFERENCES\n[1] J. Benton, Amanda Jane Coles, and Andrew Coles, ‘Tempo-\nral planning with preferences and time-dependent continuous\ncosts’, in Proc. Int. Conference on Automated Planning and\nScheduling, (2012).\n[2] Joan Borr` as, Antonio Moreno, and A¨ ıda Valls, ‘Intelligent\ntourism recommender systems: A survey’, Expert Syst. Appl.,\n41(16), 7370–7389, (2014).\n[3] Luis A. Castillo, Eva Armengol, Eva Onaindia, Laura Se-\nbastia, Jes´ us Gonz´ alez-Boticario, Antonio Rodr´ ıguez, Susana\nFern´andez, Juan D. Arias, and Daniel Borrajo, ‘samap: An\nuser-oriented adaptive system for planning tourist visits’, Ex-\npert Syst. Appl., 34(2), 1318–1332, (2008).\n[4] Inma Garcia, Sergio Pajares, Laura Sebastia, and Eva Onain-\ndia, ‘Preference elicitation techniques for group recommender\nsystems’, Information Sciences, 189, 155–175, (2012).\n[5] Inma Garcia, Laura Sebastia, and Eva Onaindia, ‘On the\ndesign of individual and group recommender systems for\ntourism’, Expert Systems with Applications, 38(6), 7683–\n7692, (2011).\n[6] Inma Garcia, Laura Sebastia, Eva Onaindia, and Cesar Guz-\nman, ‘e-Tourism: a tourist recommendation and planningapplication’, International Journal on Artificial Intelligence\nTools (WSPC-IJAIT) ,18(5), 717–738, (2009).\n[7] Alfonso Gerevini, Patrik Haslum, Derek Long, Alessandro\nSaetti, and Yannis Dimopoulos, ‘Deterministic planning inthe 5th International Planning Competition: PDDL3 and ex-perimental evaluation of the planners’, Artificial Intelligence,\n173(5-6), 619–668, (2009).\n[8] Ghallab M., Nau D., Traverso P., Automated Planning. The-\nory and Practice., Morgan Kaufmann, 2004.\n[9] Yohei Kurata and Tatsunori Hara, ‘Ct-planner4: Toward a\nmore user-friendly interactive day-tour planner’, in Informa-\ntion and Communication Technologies in Tourism 2014 -Proceedings of the International Conference in Dublin, Ire-land, January 2014, pp. 73–86, (2014).\n[10] Kwan Hui Lim, Jeffrey Chan, Christopher Leckie, and\nShanika Karunasekera, ‘Personalized tour recommendationbased on user interests and points of interest visit durations’,in24th International Joint Conference on Artificial Intelli-\ngence, IJCAI, pp. 1778–1784, (2015).\n[11] Antonio Moreno, Aida Valls, David Isern, Lucas Marin, and\nJoan Borr` as, ‘Sigtur/e-destination: ontology-based personal-ized recommendation of tourism and leisure activities’, En-\ngineering Applications of Artificial Intelligence, 26(1), 633–\n651, (2013).\n[12] Arturo Montejo R´ aez, Jos´ e M. Perea-Ortega, Miguel An-\ngel Garc´ ıa Cumbreras, and Fernando Mart´ ınez Santiago,‘Oti˘ um: A web based planner for tourism and leisure’, Ex-\npert Syst. Appl., 38(8), 10085–10093, (2011).\n[13] Paul Resnick and Hal R. 
Varian, ‘Recommender systems’,\nCommunications of the ACM ,40(3), 56–58, (1997).\n[14] Beatriz Rodr´ ıguez, Juli´ anMolina, F´ atima P´ erez, and Rafael\nCaballero, ‘Interactive design of personalised tourism routes’,\nTourism Management ,33(4), 926–940, (2012).\n[15] Steffen Staab, Hannes Werthner, Francesco Ricci, Alexander\nZipf, Ulrike Gretzel, Daniel R. Fesenmaier, C´ ecile Paris, and\nCraig A. Knoblock, ‘Intelligent systems for tourism’, IEEE\nIntelligent Systems, 17(6), 53–64, (2002).\n[16] Pieter Vansteenwegen, Wouter Souffriau, Greet Vanden\nBerghe, and Dirk Van Oudheusden, ‘The city trip planner: Anexpert system for tourists’, Expert Syst. Appl. ,38(6), 6540–\n6546, (2011).J.Ibañez etal./Planning Tourist Agendas forDifferentTravel Styles 1823", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "AT3Sd1Uk4lo", "year": null, "venue": "EC2011", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=AT3Sd1Uk4lo", "arxiv_id": null, "doi": null }
{ "title": "Hiring a secretary from a poset.", "authors": [ "Ravi Kumar", "Silvio Lattanzi", "Sergei Vassilvitskii", "Andrea Vattani" ], "abstract": "The secretary problem lies at the core of mechanism design for online auctions. In this work we study the generalization of the classical secretary problem in a setting where there is only a partial order between the elements and the goal of the algorithm is to return one of the maximal elements of the poset. This is equivalent to the auction setting where the seller has a multidimensional objective function with only a partial order among the outcomes. We obtain an algorithm that succeeds with probability at least k-k/(k-1)((1 + log k1/(k-1))k - 1), where k is the number of maximal elements in the poset and is the only information about the poset that is known to the algorithm; the success probability approaches the classical bound of 1/e as k -> 1. On the other hand, we prove an almost matching upper bound of k-1/(k-1) on the success probability of any algorithm for this problem; this upper bound holds even if the algorithm knows the complete structure of the poset.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "eCLu7dJpGQ", "year": null, "venue": "EC 2019", "pdf_link": "https://dl.acm.org/doi/pdf/10.1145/3328526.3329650", "forum_link": "https://openreview.net/forum?id=eCLu7dJpGQ", "arxiv_id": null, "doi": null }
{ "title": "Influence Maximization on Undirected Graphs: Towards Closing the (1-1/e) Gap", "authors": [ "Grant Schoenebeck", "Biaoshuai Tao" ], "abstract": "We study the influence maximization problem in undirected networks, specifically focusing on the independent cascade and linear threshold models. We prove APX-hardness (NP-hardness of approximation within factor (1-τ) for some constant τ>0$) for both models, which improves the previous NP-hardness lower bound for the linear threshold model. No previous hardness result was known for the independent cascade model. As part of the hardness proof, we show some natural properties of these cascades on undirected graphs. For example, we show that the expected number of infections of a seed set S is upper-bounded by the size of the edge cut of S in the linear threshold model and a special case of the independent cascade model called the weighted independent cascade model. Motivated by our upper bounds, we present a suite of highly scalable local greedy heuristics for the influence maximization problem on both the linear threshold model and the weighted independent cascade model on undirected graphs that, in practice, find seed sets which on average obtain 97.52% of the performance of the much slower greedy algorithm for the linear threshold model, and 97.39% of the performance of the greedy algorithm for the weighted independent cascade model. Our heuristics also outperform other popular local heuristics, such as the degree discount heuristic by Chen et al.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "7cqpV6MKf_B", "year": null, "venue": "EAMT 2011", "pdf_link": "https://aclanthology.org/2011.eamt-1.18.pdf", "forum_link": "https://openreview.net/forum?id=7cqpV6MKf_B", "arxiv_id": null, "doi": null }
{ "title": "Deriving translation units using small additional corpora", "authors": [ "Carlos A. Henríquez Q.", "José B. Mariño", "Rafael E. Banchs" ], "abstract": null, "keywords": [], "raw_extracted_content": "Deriving translation units using small additional corpora.\nCarlos A. Henr ´ıquez Q.\nTALP Research Centre\nBarcelona, Spain\[email protected]´e B. Mari ˜no\nTALP Research Centre\nBarcelona, Spain\[email protected] E. Banchs\nInstitute for Infocomm Research\nSingapore\[email protected]\nAbstract\nWe present a novel strategy to derive new\ntranslation units using an additional bilin-\ngual corpus and a previously trained SMT\nsystem. The units were used to adapt the\nSMT system. The derivation process can\nbe applied when the additional corpus is\nvery small compared with the original train\ncorpus and it does not require to compute\nnew word alignments using all corpora.\nThe strategy is based in the Levenshtein\nDistance and its resulting path. We re-\nported a statistically significant improve-\nment, with a confidence level of 99%,\nwhen adapting an Ngram-based Catalan-\nSpanish system using an additional corpus\nthat represents less than 0.5%of the orig-\ninal train corpus. The additional transla-\ntion units were able to solve morphologi-\ncal and lexical errors and added previously\nunknown words to the vocabulary.\n1 Introduction.\nStatistical Machine Translation (SMT) systems are\ntrained using parallel corpora. Therefore, once the\nsystem is trained and tuned, it is tightly coupled to\nthe specific domain the train corpus belongs to. If\nlater on we want to use additional bilingual corpora\nto improve or adapt our system, we could build\nadditional translation models and interpolate them\nwith the original one or we could join all the addi-\ntional data with the original corpus and train a new\nsystem from scratch. However, those strategies of-\nten involve computing new word alignments con-\nsidering all corpora together, which is a computa-\ntional expensive task.\nc/circlecopyrt2011 European Association for Machine Translation.This study focuses on the use of additional bilin-\ngual corpora to adapt a previously trained SMT\nsystem, without the need to recompute word align-\nments. The proposed method utilizes the SMT sys-\ntem to translate the source side of the new corpus\nand then compares the translation output with its\ntarget side. 
This comparison allows the method to\ndetect errors made during decoding and provide it\nat the same time with a possible solution, which is\nfinally used to build additional translation units.\nWe have experimented with a Ngram-based\nSMT system (Mari ˜no et al., 2006), translating\nfrom Catalan into Spanish and we have obtained\na significant improvement in translation quality,\nadapting a state-of-the-art system trained with a\ncorpus of more than four million sentences with an\nadditional corpus of only 1.6thousand sentences.\nThis document is organized as follows: Sec-\ntion 2 introduces us to the concept of Statistical\nMachine Translation, with an emphasis in Ngram-\nbased SMT; Section 3 presents a description of\nthe possible scenarios where the proposed strategy\ncould be used, domain adaptation (subsection 3.1)\nand user feedback (subsection 3.2); Section 4 de-\nscribes the experimental set-up, it details the base-\nline system in subsection 4.1 and the additional\ncorpus in subsection 4.2, it also explains the main\nalgorithm to derive, filter and interpolate the addi-\ntional translation units with the baseline translation\nmodel (subsections 4.3, 4.4 and 4.5); finally, Sec-\ntion 5 presents and analyzes the results obtained\nwith the new translation system while Section 6\nsummarize our findings.\n2 Ngram-based Machine Translation.\nThe idea of Statistical Machine Translation (SMT)\nrelies on the translation of a source language sen-\ntencef(usually referred as “French”) into a tar-Mik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 121\u0015128\nLeuv en, Belgium, Ma y 2011\nget language sentence ˆe(usually referred as “En-\nglish”). Among all possible target language sen-\ntencesewe choose the one with the highest score,\nas show in equation (1):\nˆe= arg max\ne/bracketleftBiggM/summationdisplay\nm=1λmhm(f,e)/bracketrightBigg\n(1)\nThis equation, called the log-linear model, is a\nvariation of the source-channel approach to SMT\n(Brown et al., 1990). It was proposed by Och and\nNey (2002) and allows using more than two mod-\nels and to weight them independently.\nFrequently used paradigms of SMT based on the\nlog-linear model are Phrase-based SMT (Koehn\net al., 2003), Hierarchical-based SMT (Chiang,\n2007) and Ngram-based SMT (Mari ˜no et al.,\n2006). In our experiments we used the Ngram-\nbased approach.\nThe Ngram-based approach relies on the con-\ncept of tuple. A tuple is a bilingual unit with\nconsecutive words both on the source and target\nside that is consistent with the word alignment.\nThey must provide a unique monotonic segmenta-\ntion of the sentence pair and they cannot be inside\nother tuple. This unique segmentation allows us\nto see the translation model as a language model,\nwhere the language is composed of tuples instead\nof words. That way, the context used in the trans-\nlation model is bilingual and implicitly works as a\nlanguage model with bilingual context as well. 
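To make equation (1) concrete, here is a toy illustration (not the MARIE decoder; the two feature functions and their weights are made up) of how candidate hypotheses are ranked by a weighted sum of feature scores:

```python
# Toy sketch of the log-linear decision rule in equation (1); not the MARIE decoder.
# The feature functions h_m and weights lambda_m below are placeholders.

def score(f, e, features, weights):
    """Log-linear score of target hypothesis e for source sentence f."""
    return sum(lam * h(f, e) for lam, h in zip(weights, features))

def best_hypothesis(f, candidates, features, weights):
    return max(candidates, key=lambda e: score(f, e, features, weights))

# Two invented features: a word-count match and a small brevity/word-bonus term.
features = [
    lambda f, e: -abs(len(f.split()) - len(e.split())),
    lambda f, e: -0.1 * len(e.split()),
]
weights = [1.0, 0.5]
print(best_hypothesis("el gat menja", ["el gato come", "gato come"],
                      features, weights))   # -> "el gato come"
```

In the actual system, the feature set includes the tuple-based translation model introduced above together with the additional models listed below.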
In\nfact, while a language model is required in phrase-\nbased and hierarchical phrase-based systems, in\nNgram-based systems it is considered just an ad-\nditional feature.\nThis alternative approach to a translation model\ndefines the probability as:\nP(f,e) =N/productdisplay\nn=1P/parenleftbig\n(f,e)n|(f,e)n−1,..., (f,e)1/parenrightbig\n(2)\nwhere (f,e)nis the n-th tuple of hypothesis efor\nthe source sentence f.\nAs additional features, we used:\n•A Part-Of-Speech (POS) language model for\nthe target side.\n•A target word bonus model.\nWe used the open source decoder MARIE\n(Crego et al., 2005) to build the different Ngram-\nbased systems.\nFigure 1: Revised corpus composition\n3 Problem Statement.\nSuppose we have a previously trained SMT sys-\ntem built with a large bilingual corpus (millions\nof sentences) that have an acceptable performance\nin the area it was designed for. We would like\nto adapt that system to scenarios it has not seen,\nfor instance it was trained to translate Parliament\nsessions and we would like to translate news arti-\ncles or tourist dialogues; another examples could\nbe that we would like to renew its vocabulary cov-\nerage and writing style or that we would like to\ncorrect errors we have seen during translation.\nIn order to adapt our system we have available a\nsmall bilingual corpus (a few thousands sentences)\nspecific to the problem we plan to solve. The idea\nis to use that corpus to generate translation units\nthat will be added to our trained system without an\nalignment process that would involve the use of all\nthe original parallel corpus.\nBesides the additional bilingual corpus we also\nhave the translation output of its source side com-\nputed with the system we want to adapt. Therefore\nour new data, named revised corpus , has actually\nthree parts: The source side of the bilingual cor-\npus, the target output (computed with the trained\nsystem) and the target correction (the target side of\nthe bilingual corpus).\nWe present now two different cases that illus-\ntrates the scenario described before. Additionally,\nwe can see a graphical description of the general\ncase in Figure 1.\n3.1 Domain adaptation.\nBecause SMT systems are tightly coupled to their\ncorpus domain, they are prone to commit errors\nwhen they translate sentences that belong to a dif-\nferent domain. For instance, a SMT system trained\nwith the Europarl Corpus (Koehn, 2005) may not\ntranslate movie reviews as expected.122\nText corpora can be different in vocabulary,\nstyle or grammar and a method to adapt to dif-\nferent domains is preferred than building a whole\nnew system for each domain we face. Moreover,\nit might be the only plausible solution if we have\na big out-of-domain parallel corpus but a small in-\ndomain corpus which, if used alone, would per-\nform poorly.\nDifferent methods have been studied to perform\nsuch adaptation, and they all require a small in-\ndomain corpus whether it is bilingual or not, for the\nsystem to adapt. 
Some of them include: concate-\nnate corpora and model interpolation (Koehn and\nSchroeder, 2007), using mono-lingual and cross-\nlingual information retrieval (Hildebrand et al.,\n2005; Xu et al., 2007; Snover et al., 2008), lan-\nguage model adaptation for difficult to translate\nphrases (Mohit et al., 2009), generating a synthetic\ncorpus (Ueffing et al., 2007; Schwenk and Senel-\nlart, 2009) and finally post-editing approaches (Is-\nabelle et al., 2007) combined with incremental\ntraining (Hardt and Elming, 2010).\nThe strategy proposed here assumes we have an\nout-of-domain system and a revised corpus that is\ncomposed of a small bilingual in-domain corpus\nand the translation of its source side computed with\nthe system we like to adapt.\n3.2 User feedback.\nSimilar to domain adaptation, user feedback is also\na valid scenario for the proposed strategy. In this\ncase, a previously trained SMT system is used\nto translate sentences provided by different users.\nThen, if the users consider it convenient, they can\nsuggest a better translation than the one the sys-\ntem obtained. If we saved all those suggestions\ntogether with the input sentence and the transla-\ntion output, eventually we would have an addi-\ntional bilingual corpus with its corresponding sys-\ntem translation, which fits the definition of a re-\nvised corpus.\nIn spite of the frequent use of online machine\ntranslators, the users do not tend to send feed-\nback to improve them and even when they do, it\nis hardly useful. Usually, the system offers the\nfunctionality of sending feedback without restrain.\nTherefore, the collect algorithms must confront\nwith vicious feedbacks, orthographic errors, text\nunrelated with the original query, etc.\nFor that reason, to exploit user feedback we have\nto deal with two different problems: how to filterBaseline Train Catalan Spanish\nNumber of sentences 4.6MM\nRunning words 96.94MM 96.86MM\nV ocabulary size 1.28MM 1.23MM\nTuning Catalan Spanish\nNumber of sentences 1,966\nRunning words 46.76K 44.66K\nV ocabulary size 9.1K 9.4K\nTable 1: Baseline and tuning corpora.\nuser feedback so we keep only the valuable data;\nand how to use the selected data to adapt the ma-\nchine translation system. This research addresses\nthe second task.\n4 Experimental set-up.\nWithout loss of generality we present the deriva-\ntion process in the ambit of domain adaptation.\nThe objective is to adapt an already tuned SMT\nsystem, trained with a corpus collected from old\nnews, with a small additional corpus collected\nfrom more recent news. We do not plan to change\nthe news domain but adapt it to modern times,\nadding new vocabulary and adapting the writing\nstyle.\n4.1 Baseline system and corpus.\nWe started with the UPC’s Catalan-Spanish sys-\ntem (named N-II), an Ngram-based SMT system\nwhich uses syntactic and morphological knowl-\nedge to improve its translations. A complete de-\nscription of it and its translation quality can be\nfound in Farr ´us (2009). It was built with news ar-\nticles collected from the bilingual newspaper “El\nPeridico” during the period 2,000-2,007. Table 1\nshows the statistics of this corpus.\nIt also includes a Part-Of-Speech (POS) 5-gram\ntarget language model, built with the POS version\nof the training corpus and a target word bonus\nmodel. 
The syntactic analysis was performed\nwith Freeling (Padr ´o et al., 2010), the translation\nmodel and the POS language model were built\nwith SRILM (Stolcke, 2002).\nN-II has an online version1available, which\nalso includes an spell-checker, where the users can\nask for translations and provide suggestions for\nbetter translation.\n1http://www.n-ii.org123\nCatalan Spanish\nNumber of sentences 155K\nRunning words 3,43M 3,43M\nV ocabulary size 187K 184K\nTable 2: Statistics of “El Peridico” corpus from\n2,008.\nCorrection Catalan Spanish\nNumber of sentences 1,608\nRunning words 34.67K 35.17K\nV ocabulary size 11.02K 11.17K\nTest Catalan Spanish\nNumber of sentences 2,048\nRunning words 46.03K 46.00K\nV ocabulary size 13.28K 13.39K\nTable 3: Experimental corpora.\n4.2 Revised corpus construction.\nAn additional corpus was also collected from the\nnewspaper “El Peridico” but only with news from\n2,008. It has a total of 155Ksentences. A sum-\nmary of its statistics can be seen in Table 2. We\nused N-II to obtain the translation output. For the\npurpose of these experiments, we only used two\nsmall subsets that were built taking samples with-\nout replacement, for training and testing. The first\nsubset is called “Correction Corpus”, it has 1.6K\nsentences and it was used to build the revised cor-\npus, translating the Catalan side into Spanish with\nN-II. The second subset is the test corpus, it has\n2,048 sentences and it was used to measure the\ntranslation quality of the different systems. Table\n3 outlines the statistics of both corpora.\n4.3 Derivation process.\nThe derivation process is based on the comparison\nbetween the target output and the target correction\nof the revised corpus. This comparison is com-\nputed using the Levenshtein Distance.\nThe Levenshtein Distance is defined as the min-\nimum number of edits needed to transform one\nstring into the other; being insertion, deletion and\nsubstitution of a single character the allowed edit\noperations. The concept can be applied to full\nsentences as well, considering the sentence as the\ncomplete string and each word as a single charac-\nter.\nFirst we compute the Levenshtein distance be-tween a target output sentence and its target cor-\nrection. While we compute the distance we also\nkeep track of its path in order to recover the min-\nimum sequential steps to change the target output\ninto its correction. Let pbe a Levenshtein path\nfrom target output t1to correction t2ands,e,d,a\nthe possible values of each step within the path,\nwhich stand for “replace a word”, “do nothing”,\n“delete a word” and “add a word” respectively.\nOnce we obtain p, we identify the longest sub-\nstringsskthat match one of the following regular\nexpression (they are checked in order):\n[sda]∗s[sda]∗ (3)\ne[da] + (4)\n/hatwide[da] +e (5)\nwhich represent a change zone.\nThen, for each change zone skofpwe extract\nthe related words from t1andt2to build an output-\nto-correction translation unit. If the number of re-\nlated words either in t1ort2is greater than 10, the\nchange zone is not valid for the next step.\nFinally, in order to obtain the unique mono-\ntonic segmentation of the source-correction sen-\ntence pair, we start from left to right using the\noriginal units; whenever we find a unit whose tar-\nget words are involved in a valid sk, we replace\nthose target words for their corresponding correc-\ntion words, according to the output-to-correction\ntranslation unit built previously. 
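The path computation and the change-zone detection just described can be sketched as follows (our reading, not the authors' code; the tie-breaking of the edit path and the discarding of overlapping zone matches are assumptions, and the start-anchored pattern renders expression (5)):

```python
# Rough sketch of the Levenshtein-path and change-zone step. Step symbols:
# s (substitute), e (equal), d (delete from output), a (add from correction).
# Zones follow regexes (3)-(5); overlap handling is one reading of the paper.
import re

def lev_path(output, correction):
    """Word-level edit path from output to correction as a string over s/e/d/a."""
    n, m = len(output), len(correction)
    dp = [[None] * (m + 1) for _ in range(n + 1)]   # dp[i][j] = (cost, path)
    dp[0][0] = (0, "")
    for i in range(1, n + 1):
        dp[i][0] = (i, dp[i - 1][0][1] + "d")
    for j in range(1, m + 1):
        dp[0][j] = (j, dp[0][j - 1][1] + "a")
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            same = output[i - 1] == correction[j - 1]
            dp[i][j] = min(
                (dp[i - 1][j - 1][0] + (0 if same else 1),
                 dp[i - 1][j - 1][1] + ("e" if same else "s")),
                (dp[i - 1][j][0] + 1, dp[i - 1][j][1] + "d"),
                (dp[i][j - 1][0] + 1, dp[i][j - 1][1] + "a"),
            )
    return dp[n][m][1]

def change_zones(path):
    """Spans matching regexes (3)-(5), checked in order; later overlaps dropped."""
    kept = []
    for pat in (r"[sda]*s[sda]*", r"e[da]+", r"^[da]+e"):
        for m in re.finditer(pat, path):
            if all(m.end() <= s or m.start() >= e for s, e in kept):
                kept.append(m.span())
    return [(s, e, path[s:e]) for s, e in sorted(kept)]

out = "con un largo fin de semana ya puede haber lo suficiente".split()
cor = "un largo fin de semana ya puede bastar".split()
path = lev_path(out, cor)
print(path)                 # "deeeeeeedds" (edit distance 4, cf. Figure 2)
print(change_zones(path))   # [(0, 2, 'de'), (8, 11, 'dds')]
```

On the example of Figure 2 this yields the zones 'de' and 'dds', i.e., the output-to-correction units ("con un", "un") and ("haber lo suficiente", "bastar").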
In case we find\nconsecutive units whose target words are involved\nin a validsk, first we join the consecutive units to\nform a larger single unit and then we perform the\nreplacement as explained before.\nWe can see a graphic example of the whole pro-\ncess in Figure 2. There, we have an output sen-\ntence provided by the system, “con un largo fin de\nsemana ya puede haber lo suficiente” (a weekend\nmay be enough), and a correction, “ un largo fin de\nsemana ya puede bastar ”. With this sentence pair\nwe computed the Levenshtein distance, which is 4,\nand the Levenshtein path p=deeeeeeeddse . The\npath indicates that the first, ninth and tenth out-\nput words must be deleted and the eleventh word\nmust be replaced. According to the regular expres-\nsions (3), (4) and (5), this example gives us two\ndifferent change zones, s1=deat the beginng\nands2=dds at the end (section (i) in the fig-\nure); froms1we have the output-to-correction unit\n(“con un”, “un”) and froms2we have (“haber lo124\nFigure 2: Example of the deriving process. (i) First we compute the Levenshtein path between output and\ncorrection (change zones and related words are in bold), (ii) then we segment the pair input-correction\nconsidering the original tuples c,d,e,f,g,h,i and change zones 0and8. (iii) Shows the final monotonic\nsegmentation with the new tuples added.\nsuficiente”,“bastar”) . Finally, to segment the sen-\ntence pair we started from left to right, joining the\nfirst two consecutive units, because their target out-\nput words were involved in s1, and replacing their\ntarget output words for their corresponding correc-\ntion words; then we used the next seven original\nunits without change and we joined the last two\noriginal units, because of s2, and replaced their tar-\nget output words as well. The final monotonic seg-\nmentation can be seen in section (iii) in the figure.\n4.4 Filter process.\nOnce we have the sentence segmentation, we apply\na lexical filter which takes into account the lexical\ncosts of the tuple (ignoring unknown words during\ncomputation), and set a threshold in the average of\nthe lexical costs to remove all expensive units from\nthe tuple vocabulary.\nThis filter is important because it deals with\n“new tuples” whose source and target side are not\ntranslations of each other, like (“s d’ un any i mig”\n, “son dos vasos comunicantes”) .\nTable 4 shows the vocabulary size of the differ-\nent set of tuples: how many we had in the base-\nline system, how many were extracted from the re-\nvised corpus with and without the filter described\nabove and finally, how many of those extracted tu-\nples were not seen in the baseline system vocabu-\nlary.\n4.5 Interpolation process.\nWith all problematic units removed from the vo-\ncabulary, we built the enhanced translation model\nfollowing these steps:\n1. We pruned the extracted tuples set, removingSystem Tuples\nBaseline 1.16MM\nFilter Extracted Unseen\nNone 8,511 1,360\nLexical 8,307 1,097\nTable 4: Extracted and unseen tuples.\nall units that had more than 10words either\nin the source or the target side and leaving the\n20 most frequent translation options for each\ntuple’s source side.\n2. We added the new remaining tuples to the vo-\ncabulary of baseline units.\n3. We built a 3-gram translation model with in-\nterpolate estimates and modified Kneser-Ney\ndiscounting, considering the vocabulary de-\nfined in the previous step and using only the\nrecently segmented corpus.\n4. 
We interpolated the resulting translation\nmodel with the baseline translation model.125\nα 0.75 0.80 0.85 0.90 0.95\nDev 84.01 84.05 84.12 84.07 83.78\nTable 5:α-values used to interpolate the transla-\ntion models and the corresponding BLEU scores\nafter tuning.\nSystem Corr. Test Conf.\nBaseline 76.87 77.19 -\n+Rev.tuples 85.96 77.23 87.24%\n+Rev.tuples+lex.fil. 83.85 77.33 99.20%\nTable 6: Systems tested and their BLEU scores.\nThe linear interpolation process followed the\nformula:\nTM(n) =αTM Base(n) + (1−α)TMAdd(n)\n(6)\nwhereTM(n)is the resulting translation model\nscore for the n-gram n,TMBase is the baseline\ntranslation model and TMAddis the new transla-\ntion model computed in the third step.\nTo determine the value of αwe tuned the sys-\ntem considering five different values and kept the\none that obtained the highest BLEU score. Table\n5 shows the different BLEU scores obtained with\nthe development set and α= 0.85as the best can-\ndidate.\n5 Results and discussion.\nWe built two different systems and used α= 0.85\nfor the interpolation. The results obtained over the\ncorrection and test corpora can be seen in Table\n6. The second and third column correspond to the\nBLEU (Papineni et al., 2001) scores obtained by\nthe different systems in the Correction and Test\ncorpora, using only one reference. The fourth col-\numn gives the confidence level for test BLEU be-\ning higher than the baseline test BLEU.\nNotice that all revised system performed better\nin the correction corpus, which is obvious because\nit is part of the revised corpus. Also, they all im-\nproved the baseline test score. What is interesting\nis that once we added the lexical filter, the correc-\ntion BLEU decreased and the test BLEU increased.\nIt means that the filter is helping the system gener-\nalize its learning.\nMoreover, even though the revised system with-\nout filter is not significantly better than the base-\nline, we found that with the lexical filter weSystem Higher Lower Same\nBaseline 122 154 1,772\n+Rev.tuples+lex.fil. 154 122 1,772\nTable 7: Sentence by sentence comparison of\nBLEU score\nachieved a better performance, with a confidence\nlevel of 99%. The confidence levels were obtained\nusing the “Pair Bootstrap Resampling” method de-\nscribed in Koehn (2004).\nBesides the automatic test described before, we\nalso compute the BLEU scores over the test set,\nsentence by sentence, with the baseline and the fi-\nnal system; then, we compared them to determine\nwhich sentences were better (or worse) in the fi-\nnal system and why. Table 7 shows these results.\nWe can see that the final system was better in 154\nsentences, the baseline system was better in 122\nand that they got the same score in the remaining\n1,772. We took a closer look at those 122 sen-\ntences and found that most of the final system out-\nputs had used synonyms and paraphrases and that\nthey were indeed valid although they were not used\nin the reference. On the other hand, we found some\nsemantic, lexical and morphological errors solved\namong the 154 sentences where the final system\nhad a better score.\nFigure 3 shows a sample of both subsets, dis-\nplaying first the baseline output and then the fi-\nnal system output. The first case corrected a\nword-by-word translation; it means “besides”. 
The\nsecond and sixth pair are example of sentences\nwith unknown words, “cirllic” and “Govern”, that\nare solved with their correct translation by the\nfinal system, “cirlico” and “Gobierno Catal ´an”;\ntheir English translation are “Cyrillic” and “Cata-\nlan Government”. The third one was the cata-\nlan word “drets” that has two meanings and the\nwrong one, “de pie”, was chosen by the baseline;\n“de pie” stands for “stood (up)” while “derecho”\nmeans “right”, like “human right”. The fourth fi-\nnal system output corrected a morphological error,\nchoosing the verb with the proper person and num-\nber. Pair number five presents two synonyms. The\nseventh pair adds a preposition that could also be\nomited, as the baseline output did. Finally, the last\npair presents two different ways of saying the same\n(“on the other hand”) and they are equally valid\neven though the BLEU score is lower in the final\nsystem because the reference matches the baseline.126\nFigure 3: Output samples from the baseline and final systems. Every pair presents first the baseline\noutput (labeled “B”) and then the final system output (labeled “F”). The first four pairs are examples of\na higher final BLEU, the last four pairs had a higher baseline BLEU.\n6 Conclusions and further work.\nWe have presented a strategy to enhance a transla-\ntion model with new and reinforced units using a\nrevised corpus. A revised corpus was defined as a\nbilingual corpus together with the automatic trans-\nlation of its source side. Therefore it is composed\nof a source side (coming from the bilingual cor-\npus), a target output (coming from the translation\nsystem) and a target correction (coming from the\nbilingual corpus).\nThe strategy produces an adapted translation\nmodel with additional translation units and vocab-\nulary, without the need of computing expensive\nword alignments or using the baseline corpus. In-\nstead, it is based in the structure of the original\ntranslation units and the alignment provided by the\ncomparison between the target output and the tar-\nget correction.\nThis strategy consists in computing a sentence-\nby-sentence Levenshtein path, using the target out-\nput and the target correction. The Levenshtein path\nallows us to correct local errors found during de-\ncoding and to combine them with the source side\nto add additional tuples in the original translation\nmodel. At the same time, the method reinforces\nthe original tuples that were correctly used during\ndecoding in a specific context. We also defined a\nlexical filter that must be used to remove problem-\natic units found during the extraction phase.Results showed a statistical improvement with\na confidence level of 99% in a state-of-the-\nart Catalan-to-Spanish Ngram-based SMT system.\nThis was achieved using a bilingual corpus of\n1.6Ksentences, which represents less than 0.5%\nof the original corpus.\nWe plan to continue this line of research test-\ning different language pairs and SMT paradigms.\nFirst, we will try an Spanish-to-English Ngram-\nbased SMT system and then we will change it to\na Phrase-based SMT systems. 
The Spanish-to-\nEnglish experiments will explore the strategy for\ndomain adaptation, using a big out-of-domain cor-\npus to train the baseline translation model and a\nsmaller in-domain bilingual corpus to derive the\nunits from.\nAcknowledgment.\nThe research leading to these results has received\nfunding from the European Community’s Sev-\nenth Framework Programme (FP7/2007-2013) un-\nder grant agreement number 247762 (FAUST) and\nfrom the Spanish Ministry of Science and Inno-\nvation through the Buceador project (TEC2009-\n14094-C04-01).127\nReferences\nBrown, Peter F., John Cocke, Stephen A. Della Pietra,\nVincent J. Della Pietra, Frederick Jelinek, John D.\nLafferty, Robert L. Mercer, and Paul S. Rossin.\n1990. A Statistical Approach to Machine Transla-\ntion. Computational Linguistics , 16:79–85.\nChiang, David. 2007. Hierarchical Phrase-Based\nTranslation. Computational Linguistics , 33(2):201–\n228.\nCrego, Josep M., Adri `a de Gispert, and Jos ´e B. Mari ˜no.\n2005. An Ngram-based Statistical Machine Transla-\ntion Decoder. In Proceedings of 9th European Con-\nference on Speech Communication and Technology\n(Interspeech) .\nFarr´us, Mireia, Marta R. Costa-juss `a, Marc Poch,\nAdolfo Hern ´andez, and Jos ´e B. Mari ˜no. 2009. Im-\nproving a catalan-spanish statistical translation sys-\ntem using morphosyntactic knowledge. In Proceed-\nings of European Association for Machine Transla-\ntion 2009 .\nHardt, Daniel and Jakob Elming. 2010. Incremental\nre-training for post-editing SMT. In AMTA 2010:\nthe Ninth conference of the Association for Machine\nTranslation in the Americas .\nHildebrand, Almut Silja, Matthias Eck, Stephan V ogel,\nand Alex Waibel. 2005. Adaptation of the Transla-\ntion Model for Statistical Machine Translation based\non Information Retrieval. In EAMT 2005 Confer-\nence Proceedings .\nIsabelle, Pierre, Cyril Goutte, and Simard Michel.\n2007. Domain Adaptation of MT Systems Through\nAutomatic Post Editing. In Machine Translation\nSummit XI .\nKoehn, Philipp and Josh Schroeder. 2007. Experi-\nments in Domain Adaptation for Statistical Machine\nTranslation. In Proceedings of the Second Workshop\non Statistical Machine Translation .\nKoehn, Philipp, Franz Josef Och, and Daniel Marcu.\n2003. Statistical phrase-based translation. In HLT-\nNAACL , pages 48–54.\nKoehn, Philipp. 2004. Statistical significance tests for\nmachine translation evaluation. In Proceedings of\nEMNLP , volume 4, pages 388–395.\nKoehn, Philipp. 2005. Europarl: A Parallel Corpus for\nStatistical Machine Translation. In Machine Trans-\nlation Summit .\nMari ˜no, Jos ´e B., Rafael E. Banchs, Josep M. Crego,\nAdri `a de Gispert, Patrik Lambert, Jos ´e A. R. Fonol-\nlosa, and Marta R. Costa-juss `a. 2006. Ngram-based\nMachine Translation. Computational Linguistics ,\n32(4):527–549.\nMohit, Behrang, Frank Liberato, and Rebecca Hwa.\n2009. Language Model Adaptation for Difficult to\nTranslate Phrases. In Proceedings of the 13th An-\nnual Conference of the EAMT .Och, Franz Josef and Hermann Ney. 2002. Discrim-\ninative Training and Maximum Entropy Models for\nStatistical Machine Translation. In Proceedings of\nthe 40th Annual Meeting of the Association for Com-\nputational Linguistics (ACL) .\nPadr ´o, Llu ´ıs, Miquel Collado, Samuel Reese, Marina\nLloberes, and Irene Castell ´on. 2010. FreeLing 2.1:\nFive Years of Open-Source Language Processing\nTools. 
In Proceedings of 7th Language Resources\nand Evaluation Conference (LREC 2010) , La Val-\nleta, Malta, May.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2001. Bleu: a method for automatic evalu-\nation of machine translation. IBM Research Report,\nRC22176, September.\nSchwenk, Holger and Jean Senellart. 2009. Translation\nmodel adaptation for an Arabic/French news transla-\ntion system by lightly-supervised training. In MT\nSummit .\nSnover, Matthew, Bonnie Dorr, and Richard Schwartz.\n2008. Language and Translation Model Adaptation\nusing Comparable Corpora. In Proceedings of the\n2008 Conference on Empirical Methods in Natural\nLanguage Processing .\nStolcke, A. 2002. Srilm - an extensible language mod-\neling toolkit. In International Conference on Spoken\nLanguage Processing .\nUeffing, Nicola, Gholamreza Haffari, and Anoop\nSarkar. 2007. Semi-supervised Model Adaptation\nfor Statistical Machine Translation. Machine Trans-\nlation , 21:77–94.\nXu, J., Y . Deng, Y . Gao, and Hermann Ney. 2007.\nDomain Dependent Statistical Machine Translation.\nInMachine Translation Summit , Copenhagen, Den-\nmark, September.128", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lkWvTvA27SP", "year": null, "venue": "EAMT 2008", "pdf_link": "https://aclanthology.org/2008.eamt-1.15.pdf", "forum_link": "https://openreview.net/forum?id=lkWvTvA27SP", "arxiv_id": null, "doi": null }
{ "title": "Word association models and search strategies for discriminative word alignment", "authors": [ "Patrik Lambert", "Rafael E. Banchs" ], "abstract": null, "keywords": [], "raw_extracted_content": "Word Association Models and Search Strategies\nfor Discriminative Word Alignment\nPatrik Lambert1and Rafael E. Banchs2\n1TALP Research Center, Jordi Girona Salgado 1 3, 08034 Barcelona, Spain?\[email protected]\n2Barcelona Media Innovation Centre, Ocata 1, Barcelona 08003, Spain.\[email protected]\nAbstract. This paper deals with core aspects of discriminative word\nalignment systems, namely basic word association models as well as\nsearch strategies. We compare various low-computational-cost word as-\nsociation models: \u001f2score, log-likelihood ratio and IBM model 1. We\nalso compare three beam-search strategies. We show that it is more \rex-\nible and accurate to let links to the same word compete together, than\nintroducing them sequentially in the alignment hypotheses, which is the\nstrategy followed in several systems.\n1 Introduction\nIn this paper, we study core aspects of discriminative alignment systems [1, 2].\nIn these systems, the best alignment hypothesis is the one that maximises a\nlinear combination of features. In Sect. 2 we propose some improvements of the\nbeam-search algorithm implemented by Moore [1]. Then we present experimen-\ntal results for di\u000berent low-computational-cost word association score features\n(Sect. 3.1) and for the proposed search strategies (Sect. 3.2). Finally, we give\nsome conclusions.\n2 Search Strategies\nSearch aims at \fnding the alignment ( i.e.the set of links between source and\ntarget words) which maximises the sum of each feature cost, weighted by its\nrespective weight. In order to limit the search space, a set of promising links\nis \frst selected. Then alignment hypotheses are created by introducing some of\nthese promising links, and the cost of each feature function for these alignment\nhypotheses is calculated.\nFigure 1 shows the list of promising links considered (referred to as the list\nofpossible links ). This list is obtained by pruning the word association feature\ntable3with a threshold N. Only the best Ntarget words for each source word,\nandthe bestNsource words for each target word are considered. Possible links\nare arranged in a certain number of stacks of links to be expanded during search.\n?This work has been partially funded by the Spanish Government under grant\nTEC2006-13964-C03 (AVIVAVOZ project).\n3It contains the word association score for each word pair seen in the training corpus.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n97\nSentence pair:\n(0)the (1)member (2)state (3).\n(0)los (1)pais (2)miembr (3).Possible links (word association cost order):\nLink Cost Corresponding words\n1-2 0.1736 member-miembr\n3-3 0.6758 .-.\n0-0 1.3865 the-los\n2-2 1.8285 state-miembr\n2-1 2.4027 state-pais\nFig. 1. Possible links list example. Word position is indicated in parentheses. Cor-\nresponding words are actually stemmed forms. Here N= 1. Notice that \\state\" is\ninvolved in two links because it is the best source word for both \\miembr\" and \\pais\".\nThe best alignment here would be f0-0,1-2,2-1,3-3g. The cost is \u0000log\u001f2(see Sect. 3.1).\n2.1 Baseline Search\nWith Moore's search strategy, which will be referred to as baseline search, links\nof the example of Fig. 1 are arranged as depicted in Fig. 2 (left \fgure). 
Thus\nin the baseline search the possible links, sorted in function of their cost, are\narranged one link per stack, together with the \\empty\" link set ;. Baseline search\nalways begins with the empty alignment (alignment stack 0).4This hypothesis is\nexpanded with each link of link stack 1 forming two new hypotheses (the empty\nalignment and the alignment containing the link 1-2) which are copied into\nalignment stack 1. Each hypothesis of alignment stack iis expanded with each\nlink of link stack i+ 1. Histogram and/or threshold pruning are applied to the\nalignment hypothesis stack to reduce complexity. The dashed line in alignment\nstack 2 illustrates the histogram pruning threshold for a beam size of 3.\nIn our view, the main drawback of the baseline search strategy is that the \fnal\nalignment depends on the order in which links are introduced. To understand this\nbetter, consider a very simple system with a word association feature, a distortion\nfeature and an unlinked word penalty feature. Distortion costs are caused by\ncrossings between links. Each time some unlinked word becomes linked, the\nunlinked word penalty decreases. When a hypothesis is expanded with a new\nlink, if the word association cost for this link plus a possible distortion cost is\nsmaller than a possible decrease in the unlinked word penalty, the hypothesis\nwith the new link is better than the previous one. In the example of Fig. 2 (left\n\fgure), suppose that this was the case successively for links 1-2, 3-3 and 0-0, so\nthat the best alignment hypothesis is f1-2, 3-3, 0-0g. Now if this hypothesis is\nexpanded with link 2-2, the association cost is compensated by the decrease of\nthe unlinked feature cost for \\state\", and the new best hypothesis will include\nlink 2-2. Expanding now this last hypothesis with link 2-1, the unlinked feature\ngain for \\pais\" cannot compensate for the distortion feature cost (due to crossing\n4Melamed [3] also starts with the empty alignment and links are added from most to\nleast probable.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n98\nFig. 2. Left: Baseline search: link-by-link search following word association score or-\nder [1]. Right: \\source-word-score\" search strategy.\nwith \\member-miembr\") plus the association cost. Thus link 2-1 is not included\nin the \fnal hypothesis. On the contrary, if we would expand the hypotheses with\nlink 2-1 \frst, the double unlinked feature gain (for \\pais\" and \\state\") would\ncompensate for the other costs, and link 2-1 would appear in the \fnal hypothesis.\nThus in the previous case, a probable but incorrect link (2-2) introduced\n\frst prevented the correct link (2-1) from being in the \fnal alignment, because\nof the unlinked feature. In other situations, this may occur with the distortion\nfeature, the presence of the incorrect link causing a crossing with the correct\none. Actually in many cases, when introducing link 2-1, both the new hypothesis\n(with link 2-2) and the former one (without it) will be in the stack. However,\nwhen introducing a link, it can happen that all hypotheses which do not contain\na previously introduced link have been pruned out. In this case all hypotheses\nwould contain the link 2-2 when expanding hypotheses with link 2-1, and the\nproblem described above would happen.\n2.2 Proposed Improvements\nTo help overcome this problem, we perform successive iterations of the alignment\nalgorithm. 
In the second one, we start from the \fnal alignment of the \frst\niteration instead of the empty alignment. Expanding a hypothesis with some\nlink still means introducing this link in the alignment hypothesis if it is not\npresent yet, but also means removing it if it is already present. Thus alignment\nhypotheses now always contain a reasonable set of links for this sentence pair:\nthe \frst iteration's \fnal links at the start, which are then updated link by link\nduring search. When a hypothesis is expanded with an incorrect link, this link\nis typically situated (considering the alignment matrix) apart from the rest of\nlinks in the hypothesis, causing a distortion cost. If a hypothesis containing no\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n99\nlink would be expanded with this incorrect link, it would not be penalised by\nany distortion cost.\nAnother idea to alleviate the problem is to let links to the same word compete\non a fair basis, considering them at the same time instead of successively in the\nalignment hypotheses. In this scheme, possible links are organised in one stack\nfor each source (or target) word,5as in Fig. 2 (right \fgure). This is a one-stack-\nper-word strategy, whereas the baseline search is a one-stack-per-link strategy.\nThe links of each stack are used to expand the same hypotheses. Thus, in our\nexample, expanding hypothesis f1-2, 3-3, 0-0g, 2-1 would have been preferred\nover 2-2.\nIn Fig. 2 (right \fgure), link stacks are sorted according to the cost of the best\nlink in the stack. We will refer to this strategy as \\source-word-score\" (SWS)\nsearch. We could also sort the link stacks according to the source word position,\nwhich will be referred to as \\source-word-position\" (SWP) search.\nThe total number of alignment hypotheses created during search is the same\nwith both baseline and one-stack-per-word strategies, since the number of \\possi-\nble links\" is equal. However, the one-stack-per-word search, as depicted in Fig. 2,\nonly allows many-to-one links since each hypothesis can only be expanded with\none of the various possible links to the same source word. To allow many-to-many\nlinks, the stacks of possible links associated with a given word must also contain\ncombinations of these links. Each combination represents an additional align-\nment hypothesis to create compared to the baseline search. However, the one-\nstack-per-word strategies also o\u000ber more \rexibility to control complexity than\nthe baseline strategy. The link stacks can be sorted and pruned by histogram\nand/or threshold. We can also limit the number of links in the combinations, or\nallow only combinations with consecutive target positions. One-stack-per-word\nstrategies also make it easy to \frst expand words with a higher con\fdence or less\nambiguity. This gives a context of links which helps aligning the other words.\nNote that an adequate solution to the problem raised in Sect. 2.1 would\nbe to estimate exactly the remaining cost of each hypothesis, but this would\nbe too expensive computationally. In one-stack-per-word strategies, the future\nword association cost (considering the most probable path) is not useful because\nit would be the same in each stack, since the same words have been covered. 
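A minimal sketch of the one-stack-per-word "source-word-score" (SWS) search described above follows. The `score` callback stands in for the weighted combination of word association, distortion and unlinked-word features, and the link combinations needed for many-to-many alignments are omitted, so this is a simplified illustration rather than the authors' implementation.

```python
from collections import defaultdict

def sws_search(links, score, beam=3):
    """links: (src_pos, tgt_pos, cost) triples; score: alignment (a frozenset
    of (src_pos, tgt_pos) links) -> total feature cost, lower is better."""
    stacks = defaultdict(list)
    for i, j, c in links:
        stacks[i].append((c, (i, j)))      # one stack per source word
    # "source-word-score" order: stacks sorted by the cost of their best link
    ordered = sorted(stacks.values(), key=lambda st: min(st)[0])
    hyps = [frozenset()]                   # start from the empty alignment
    for stack in ordered:
        expanded = set(hyps)               # leaving the word unlinked stays possible
        expanded.update(h | {link} for h in hyps for _, link in stack)
        hyps = sorted(expanded, key=score)[:beam]   # histogram pruning
    return min(hyps, key=score)
```

Because every hypothesis in the beam is expanded with all links of the current stack in the same step, alternative links to the same source word are ranked against each other directly, which is the point of the one-stack-per-word strategies.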
We\nestimated a relative distortion cost of each link with respect to the best links\n(in terms of word association score) for surrounding words remaining to cover.\nHowever, this estimation was too inaccurate and did not improve our results.\n3 Experiments\nWe used freely available6alignment test data [4]. These data are a subset of the\ntraining corpus: the TC-STAR OpenLabSpanish-English EPPS parallel corpus,\nwhich contains proceedings of the European Parliament. The training corpus\n5This is much more e\u000ecient than Liu et al.'s search [2], which considers allpossible\nlinks before selecting each link.\n6http://gps-tsc.upc.es/veu/LR\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n100\ncontains 1.28 million sentence pairs of respectively 27.2 and 28.5 words length\nin average for English and Spanish. English and Spanish vocabulary size are\nrespectively 106 and 153 thousand words. We divided randomly the alignment\nreference corpus in a 246-sentence development set and a 245-sentence test set.\nEvaluation was done with precision, recall and alignment error rate (AER) [5].\n3.1 Basic Word Association Models\nOur aim in this section is to compare very simple word association measures\nreported in the literature and which can be very useful for some applications.\nCherry and Lin [6] and Lambert et al. [7] use\u001f2scores [8]. However, Dun-\nning [9] showed that the log-likelihood ratio (LLR) was a better method of\naccounting for rare events occurring in large samples. \u001f2score indeed overes-\ntimates their signi\fcance. For example, the association between two singletons\ncooccurring in the same sentence pair gets the best possible \u001f2score, and this\nassociation is 4 orders of magnitude less than the best score according to the\nLLR statistics. The LLR score was used by Melamed [3] for automatically con-\nstructing translation lexicons and by Moore [1] as a word association feature.\nWe compared these association measures to IBM model 1 probabilities [10].\nTable 1 shows the alignment results for a basic system composed of the\nfollowing features: word association, link bonus, unlinked word penalty and two\ndistortion features (counting the number and amplitude of crossing links). The\nvalue of the word association feature was calculated as the sum of the word\nassociation costs of the links present in the alignment. This cost was simply\nobtained by taking (minus) the logarithm of respectively the \u001f2score, IBM model\n1 probabilities or the LLR score normalised to 1. For IBM model 1 probabilities,\nwe had two features, one for each direction (source-target and target-source).\nThe substitution of the \u001f2score by the more accurate LLR yielded a 11\npoints drop in precision.7IBM model 1 probabilities are better than association\nscores and yield a 3.5 points improvement over \u001f2word association scores. Of\ncourse, state-of-the-art models like IBM model 4 are expected to perform better.\nIn lines 1 to 3 of Table 1, the unlinked penalty feature is uniform. In the\n\\IBM1+UM\" system, this feature was substituted by a penalty proportional to\nmodel 1 NULL link probability, yielding a gain of 2 points in precision and 1\npoint in recall.\n7This result may be surprising at \frst sight. In fact, it makes sense. To take the same\nexample as Moore [11], in our corpus, singletons appearing in each side of the same\nsentence pair constitute a very signi\fcant event. 
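(For reference, the two association scores compared in this section can be computed from a 2x2 contingency table as sketched below; the cell naming is ours, not the papers' notation.)

```python
import math

def chi2(a, b, c, d):
    """a = sentence pairs containing both s and t, b = s without t,
    c = t without s, d = neither; N = a + b + c + d."""
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

def llr(a, b, c, d):
    """Dunning's log-likelihood ratio G^2 = 2 * sum(O * ln(O / E)),
    with expected counts E taken from the row and column margins."""
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    g = 0.0
    for obs, r, co in ((a, 0, 0), (b, 0, 1), (c, 1, 0), (d, 1, 1)):
        if obs:
            g += obs * math.log(obs * n / (rows[r] * cols[co]))
    return 2.0 * g
```

The word association feature cost used during search is then minus the logarithm of the (normalised) score, as with the -log chi-square costs shown in Figure 1.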
The IBM model 1 probability in\nthis case is actually equal to 1, and the \u001f2score is also the best possible. Although\nno word can have a higher LLR score with a singleton than another singleton, the\nLLR score between more frequent words can be much higher. This makes a di\u000berence\nbecause the alignment hypotheses are expanded with the most probable links \frst.\nThus compared to \u001f2, the LLR score gives a relatively higher importance to links\ninvolving frequent words, which may be stop words, and a relatively lower importance\nto links involving less frequent words, which often are content words. Both e\u000bects\nproduce noisier alignments.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n101\nTable 1. Recall (Rs), Precision (Pp) and AER for various types of association scores\n(for stems) and search strategies. The values shown are the average and standard error\n(in parentheses) of three feature weights optimisations (from di\u000berent starting points).\nLine Rs Pp AER\nScore used as association feature (baseline search, one iteration)\n1 \u001f262.4 (0.8) 86.7 (1.5) 27.1 (0.1)\n2 LLR 59.4 (0.1) 75.7 (0.5) 33.2 (0.3)\n3 IBM1 65.9 (0.7) 90.3 (1.4) 23.5 (0.3)\n4 IBM1+UM 67.1 (0.3) 92.5 (0.4) 21.9 (0.3)\nSource-word-score (SWS) and source-word-position (SWP) searches\n5 IBM1+UM, SWS 67.1 (0.2) 93.5 (0.5) 21.6 (0.0)\n6 IBM1+UM, SWP 66.3 (0.5) 91.5 (0.4) 22.8 (0.1)\n7 IBM1+UM, SWP 2 it. 66.7 (0.5) 93.2 (0.6) 21.9 (0.1)\n8 IBM1+UM, SWP 3 it. 67.3 (0.4) 93.2 (0.4) 21.5 (0.1)\n3.2 Search\nThe three beam-search strategies described in Sect. 2 were implemented with\ndynamic programming and are compared in Table 1 (lines 4, 5 and 6). In\nthe \\source-word-position\" (SWP) strategy, since alignment hypotheses are ex-\npanded at consecutive words, it makes sense to recombine the alignment hy-\npotheses with equal recent history. Although hypothesis recombination helps,\nthis strategy gives the worst results because the \frst links introduced are not\nthe best ones. The best strategy is \\source-word-score\" (SWS), in which links to\nthe same words are compared fairly, but keeping the idea of introducing the best\nlinks \frst. This strategy allows to gain 1 point in precision over the baseline,\nwithout loss in recall.\nIn lines 1 to 6, only one iteration of the alignment algorithm was run. Lines 7\nand 8 show the e\u000bect of running two and three iterations for the SWP search. The\ninitial alignment is the best alignment obtained in the previous iteration. After\nthree iterations, the SWP search achieves comparable performance as SWS after\none iteration. SWS and baseline search AER results are actually only improved\nby 0.2 after the second iteration, and not improved by a third iteration.\n4 Conclusions\nOur results suggest that the log-likelihood ratio is not an adequate word as-\nsociation measure to be used in a discriminative word alignment system. We\nalso observed that even the simplest IBM model probabilities allow a signi\fcant\nimprovement of alignment quality with respect to word association measures. Fi-\nnally, we compared three beam-search strategies. We showed that starting from\nthe empty alignment is not the best choice, and that it is more \rexible and ac-\ncurate to let links to the same word compete together, than to introduce them\nsequentially in the alignment hypotheses.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n102\nReferences\n1. Moore, R.C.: A discriminative framework for bilingual word alignment. 
In: Proc.\nof Human Language Technology Conference. (2005) 81{88\n2. Liu, Y., Liu, Q., Lin, S.: Log-linear models for word alignment. In: Proc. of the\n43rd Annual Meeting of the Assoc. for Computational Linguistics. (2005) 459{466\n3. Melamed, I.D.: Models of translational equivalence among words. Computational\nLinguistics 26(2) (2000) 221{249\n4. Lambert, P., de Gispert, A., Banchs, R.E., Mari~ no, J.B.: Guidelines for word\nalignment evaluation and manual alignment. Language Resources and Evaluation\n39(4) (2005) 267{285\n5. Och, F.J., Ney, H.: A systematic comparison of various statistical alignment mod-\nels. Computational Linguistics 29(1) (March 2003) 19{51\n6. Cherry, C., Lin, D.: A probability model to improve word alignment. In: Proc. of\n41th Annual Meeting of the Assoc. for Computational Linguistics. (2003) 88{95\n7. Lambert, P., Banchs, R.E., Crego, J.M.: Discriminative alignment training with-\nout annotated data for machine translation. In: Proc. of the Human Language\nTechnology Conference of the NAACL. (2007) 85{88\n8. Gale, W.A., Church, K.W.: Identifying word correspondences in parallel texts. In:\nDARPA Speech and Natural Language Workshop. (1991)\n9. Dunning, T.: Accurate methods for the statistics of surprise and coincidence.\nComputational Linguistics 19(1) (1993) 61{74\n10. Brown, P.F., Della Pietra, S.A., Della Pietra, V.J., Mercer, R.L.: The mathe-\nmatics of statistical machine translation: Parameter estimation. Computational\nLinguistics 19(2) (1993) 263{311\n11. Moore, R.C.: On log-likelihood-ratios and the signi\fcance of rare events. In: Proc.\nof Conf. on Empirical Methods in Natural Language Processing. (2004) 333{340\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n103", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "hZzDZQrY4MN", "year": null, "venue": "EAMT 2011", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=hZzDZQrY4MN", "arxiv_id": null, "doi": null }
{ "title": "Deriving translation units using small additional corpora", "authors": [ "Carlos A. Henríquez Q.", "José B. Mariño", "Rafael E. Banchs" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "uQzMEGaKy0q", "year": null, "venue": "EC2016", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=uQzMEGaKy0q", "arxiv_id": null, "doi": null }
{ "title": "Where to Sell: Simulating Auctions From Learning Algorithms.", "authors": [ "Hamid Nazerzadeh", "Renato Paes Leme", "Afshin Rostamizadeh", "Umar Syed" ], "abstract": "Ad exchange platforms connect online publishers and advertisers and facilitate the sale of billions of impressions every day. We study these environments from the perspective of a publisher who wants to find the profit-maximizing exchange in which to sell his inventory. Ideally, the publisher would run an auction among exchanges. However, this is not usually possible due to practical business considerations. Instead, the publisher must send each impression to only one of the exchanges, along with an asking price. We model the problem as a variation of the multi-armed bandits problem in which exchanges (arms) can behave strategically in order to maximizes their own profit. We propose e mechanisms that find the best exchange with sub-linear regret and have desirable incentive properties.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "X00Gr5qqGQm", "year": null, "venue": "ITP 2018", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=X00Gr5qqGQm", "arxiv_id": null, "doi": null }
{ "title": "ProofWatch: Watchlist Guidance for Large Theories in E", "authors": [ "Zarathustra Goertzel", "Jan Jakubuv", "Stephan Schulz", "Josef Urban" ], "abstract": "Watchlist (also hint list) is a mechanism that allows related proofs to guide a proof search for a new conjecture. This mechanism has been used with the Otter and Prover9 theorem provers, both for interactive formalizations and for human-assisted proving of open conjectures in small theories. In this work we explore the use of watchlists in large theories coming from first-order translations of large ITP libraries, aiming at improving hammer-style automation by smarter internal guidance of the ATP systems. In particular, we (i) design watchlist-based clause evaluation heuristics inside the E ATP system, and (ii) develop new proof guiding algorithms that load many previous proofs inside the ATP and focus the proof search using a dynamically updated notion of proof matching. The methods are evaluated on a large set of problems coming from the Mizar library, showing significant improvement of E’s standard portfolio of strategies, and also of the previous best set of strategies invented for Mizar by evolutionary methods.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "5S_3RwSMh-l", "year": null, "venue": "CoRR 2018", "pdf_link": "http://arxiv.org/pdf/1802.04007v2", "forum_link": "https://openreview.net/forum?id=5S_3RwSMh-l", "arxiv_id": null, "doi": null }
{ "title": "ProofWatch: Watchlist Guidance for Large Theories in E", "authors": [ "Zarathustra Goertzel", "Jan Jakubuv", "Stephan Schulz", "Josef Urban" ], "abstract": "Watchlist (also hint list) is a mechanism that allows related proofs to guide a proof search for a new conjecture. This mechanism has been used with the Otter and Prover9 theorem provers, both for interactive formalizations and for human-assisted proving of open conjectures in small theories. In this work we explore the use of watchlists in large theories coming from first-order translations of large ITP libraries, aiming at improving hammer-style automation by smarter internal guidance of the ATP systems. In particular, we (i) design watchlist-based clause evaluation heuristics inside the E ATP system, and (ii) develop new proof guiding algorithms that load many previous proofs inside the ATP and focus the proof search using a dynamically updated notion of proof matching. The methods are evaluated on a large set of problems coming from the Mizar library, showing significant improvement of E's standard portfolio of strategies, and also of the previous best set of strategies invented for Mizar by evolutionary methods.", "keywords": [], "raw_extracted_content": "arXiv:1802.04007v2 [cs.AI] 19 May 2018ProofWatch:\nWatchlist Guidance for Large Theories in E\nZarathustra Goertzel1, Jan Jakub˚ uv1, Stephan Schulz2, and Josef Urban1⋆\n1Czech Technical University in Prague\n2DHBW Stuttgart\nAbstract. Watchlist (also hint list) is a mechanism that allows relate d\nproofs to guide a proof search for a new conjecture. This mech anism\nhas been used with the Otter and Prover9 theorem provers, bot h for\ninteractive formalizations and for human-assisted provin g of open con-\njectures in small theories. In this work we explore the use of watchlists in\nlarge theories coming from first-order translations of larg e ITP libraries,\naiming at improving hammer-style automation by smarter int ernal guid-\nance of the ATP systems. In particular, we (i) design watchli st-based\nclause evaluation heuristics inside the E ATP system, and (i i) develop\nnew proof guiding algorithms that load many previous proofs inside the\nATP and focus the proof search using a dynamically updated no tion\nof proof matching. The methods are evaluated on a large set of prob-\nlems coming from the Mizar library, showing significant impr ovement of\nE’s standard portfolio of strategies, and also of the previo us best set of\nstrategies invented for Mizar by evolutionary methods.\n1 Introduction: Hammers, Learning and Watchlists\nHammer -style automation tools connecting interactive theorem provers ( ITPs)\nwith automated theorem provers(ATPs) haverecently led to a sign ificant speed-\nup for formalization tasks [5]. An important component of such tools ispremise\nselection [1]: choosing a small number of the most relevant facts that are give n\nto the ATPs. Premise selection methods based on machine learning fr om many\nproofs available in the ITP libraries typically outperform manually spec ified\nheuristics [1,17,19,7,4,2]. Given the performance of such ATP-external guidance\nmethods, learning-based internal proof search guidance methods have started to\nbe explored, both for ATPs [36,18,15,23,8] and also in the context of tactical\nITPs [10,12].\nIn this work we develop learning-based internal proof guidance met hods for\nthe E [30] ATP system and evaluate them on the large Mizar Mathemat ical\nLibrary [11]. 
The methods are based on the watchlist (alsohint list) technique\ndeveloped by Veroff [37], focusing proofsearch towardslemmas ( hints) that were\n⋆Supported by the AI4REASON ERC Consolidator grant number 649043, and by the\nCzech project AI&Reasoning CZ.02.1.01/0.0/0.0/15 003/0000466 and the European\nRegional Development Fund.\nuseful in related proofs. Watchlists have proved essential in the A IM project [21]\ndone with Prover9 [25] for obtaining very long and advanced proofs of open\nconjectures. Problems in large ITP libraries however differ from one another\nmuch morethan the AIM problems,makingit morelikelyforunrelated w atchlist\nlemmas tomislead the proofsearch.Also, Prover9lacksa number of large-theory\nmechanisms and strategies developed recently for E [16,13,15].\nTherefore, we first design watchlist-based clause evaluation heur istics for E\nthat can be combined with other E strategies. Second, we compleme nt the inter-\nnalwatchlistguidancebyusingexternalstatisticalmachinelearnin gtopre-select\nsmaller numbers of watchlist clauses relevant for the current prob lem. Finally,\nwe use the watchlist mechanism to develop new proof guiding algorithm s that\nload many previous proofs inside the ATP and focus the search using adynam-\nicallyupdated heuristic representation of proof search state based on matching\nthe previous proofs.\nThe rest of the paper is structured as follows. Section 2 briefly sum marizes\nthe work of saturation-style ATPs such as E. Section 3 discusses h euristic repre-\nsentation of search state and its importance for learning-based p roof guidance.\nWe propose an abstract vectorial representation expressing sim ilarity to other\nproofs as a suitable evolving characterization of saturation proof searches. We\nalso proposea concreteimplementation based on proof completion ratios tracked\nby the watchlist mechanism. Section 4 describes the standard ( static) watchlist\nmechanism implemented in E and Section 5 introduces the new dynamic watch-\nlist mechanisms and its use for guiding the proof search. Section 6 ev aluates\nthe static and dynamic watchlist guidance combined with learning-bas ed pre-\nselection on the Mizar library. Section 7 shows several examples of n ontrivial\nproofs obtained by the new methods, and Section 8 discusses relat ed work and\npossible extensions.\n2 Proof Search in Saturating First-Order Provers\nThe state of the art in first-order theorem proving is a saturating prover based\non a combination of resolution/paramodulation and rewriting, usually imple-\nmenting a variant of the superposition calculus [3]. In this model, the proof state\nis represented as a set of first-order clauses (created from the axioms and the\nnegated conjecture), and the system systematically adds logical consequences to\nthe state, trying to derive the empty clause and hence an explicit co ntradiction.\nAll current saturating first-order provers are based on variant s of thegiven-\nclause algorithm . In this algorithm, the proof state is split into two subsets of\nclauses, the processed clauses P(initially empty) and the unprocessed clauses\nU. On each iteration of the algorithm, the prover picks one unproces sed clause\ng(the so-called given clause ), performs all inferences which are possible with g\nand all clauses in Pas premises, and then moves gintoP. The newly generated\nconsequencesareaddedto U.Thismaintainsthecoreinvariantthatallinferences\nbetween clauses in Phave been performed. 
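A schematic rendering of this given-clause loop is shown below. It is a simplification for illustration only, not E's actual implementation: the callbacks are placeholders, and back-simplification of the processed set by the given clause is omitted.

```python
def saturate(axioms, pick_given, generate, simplify, is_empty):
    """Schematic given-clause loop: processed = P, unprocessed = U."""
    processed, unprocessed = [], list(axioms)
    while unprocessed:
        given = pick_given(unprocessed)       # the heuristic choice point
        unprocessed.remove(given)
        given = simplify(given, processed)    # simplify g with clauses in P
        if given is None:                     # redundant, e.g. subsumed or tautology
            continue
        if is_empty(given):
            return "proof found"
        new = generate(given, processed)      # all inferences of g with P
        processed.append(given)
        unprocessed.extend(new)
    return "saturated without finding a proof"
```

The `pick_given` callback is the choice point that the watchlist mechanisms discussed below are designed to influence.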
Provers differ in how they integrate\nsimplification and redundancy into the system, but all enforce the v ariant that\nPis maximally simplified (by first simplifying gwith clauses in P, then back-\nsimplifying Pwithg) and that Pcontains neither tautologies nor subsumed\nclauses.\nThecorechoicepointofthe given-clausealgorithmis the selectionof the next\nclause to process. If theoretical completeness is desired, this ha s to befair, in\nthe sense that no clause is delayed forever. In practice, clauses a re ranked using\none or more heuristic evaluation functions, and are picked in order o f increasing\nevaluation (i.e. small values aregood). The most frequent heuristic s arebased on\nsymbol counting, i.e., the evaluation is the number of symbol occurr ences in the\nclause, possibly weighted for different symbols or symbols types. Mo st provers\nalso support interleaving a symbol-counting heuristic with a first-in- first-out\n(FIFO) heuristic. E supports the dynamic specification of an arbitr ary number\nofdifferentlyparameterizedpriorityqueuesthatareprocessedin weightedround-\nrobbin fashion via a small domain-specific language for heuristics.\nPrevious work [28,31] has both shown that the choice of given clause s is\ncritical for the success rate of a prover, but also that existing he uristics are still\nquite bad - i.e. they select a large majority of clauses not useful for a given proof.\nPositively formulated, there still is a huge potential for improvemen t.\n3 Proof Search State in Learning Based Guidance\nA good representation of the current stateis crucial for learning-based guidance.\nThis is quite clear in theorem proving and famously so in Go and Chess [3 2,33].\nFor example, in the TacticToe system [10] proofs are composed fr om pre-pro-\ngrammed HOL4 [34] tactics that are chosen by statistical learning b ased on sim-\nilarity of the evolving goal state to the goal states from related proofs. Similarly,\nin the learning versions of leanCoP [26] – (FE)MaLeCoP [36,18] – the ta bleau\nextension steps are guided by a trained learner using similarity of the evolving\ntableau(the ATP proof search state ) tomanyothertableauxfromrelatedproofs.\nSuch intuitive and compact notion of proof search state is however hard to\nget when working with today’s high-performance saturation-style ATPs such as\nE [30] and Vampire [22]. The above definition of saturation-style proo f state\n(Section 2) as either one or two (processed/unprocessed) large sets of clauses is\nveryunfocused.Existinglearning-basedguidingmethodsforE[15,2 3]practically\nignore this. Instead, they use only the original conjecture and its features for\nselecting the relevant given clauses throughout the whole proof se arch.\nThis is obviously unsatisfactory, both when compared to the evolvin g search\nstate in the case of tableau and tactical proving, and also when com pared to the\nway humans select the next steps when they search for proofs. T he proof search\nstate in our mind is certainly an evolving concept based on the search done so\nfar, not a fixed set of features extracted just from the conjec ture.\n3.1 Proof Search State Representation for Guiding Saturati on\nOne of the motivations for the work presented here is to produce a n intuitive,\ncompactandevolvingheuristicrepresentationofproofsearchst ateinthecontext\nof learning-guided saturation proving. As usual, it should be a vecto r of (real-\nvalued) features that are either manually designed or learned. 
In a high-level\nway, our proposed representation is a vector expressing an abstract similarity\nof the search state to (possibly many) previous related proo fs. This can be im-\nplemented in different ways, using both statistical and symbolic meth ods and\ntheir combinations. An example and motivation comes again from the w ork of\nVeroff, where a search is considered promising when the given clause s frequently\nmatch hints. The gaps between the hint matchings may correspond to the more\nbrute-force bridges between the different proof ideas expresse d by the hints.\nOur first practical implementation introduced in Section 5 is to load up on\nthe search initialization Nrelated proofs Pi, and for each Pikeep track of the\nratio of the clauses from Pithat have already been subsumed during the search.\nThe subsumption checking is using E’s watchlist mechanism (Section 4) . The\nN-long vector pof suchproof completion ratios is our heuristic representation\nof the proof search state, which is both compact and typically evolv ing, making\nit suitable for both hard-coded and learned clause selection heurist ics.\nIn this work we start with fast hard-coded watchlist-style heurist ics for fo-\ncusing inferences on clauses that progress the more finished proo fs (Section 5).\nHowevertraininge.g.astatisticalENIGMA-style[15] clauseevaluat ionmodelby\naddingpto the currently used ENIGMA features is a straightforwardexte nsion.\n4 Static Watchlist Guidance and its Implementation in E\nE originally implemented a watchlist mechanism as a means to force direc t,\nconstructiveproofsinfirstorderlogic.Forthisapplication,thewa tchlistcontains\na number of goal clauses (corresponding to the hypotheses to be proven), and all\nnewlygeneratedandprocessedclausesarecheckedagainstthew atchlist.Ifoneof\nthewatchlistclausesissubsumedbyanewclause,theformerisremo vedfromthe\nwatchlist. The proof search is complete, once all clauses from the w atchlist have\nbeen removed. In contrast to the normal proof by contradiction , this mechanism\nis not complete. However, it is surprisingly effective in practice, and it produces\na proof by forward reasoning.\nIt was quickly noted that the basic mechanism of the watchlist can als o be\nused to implement a mechanism similar to the hintssuccessfully used to guide\nOtter [24] (and its successor Prover9 [25]) in a semi-interactive ma nner [37].\nHints in this sense are intermediate results or lemmas expected to be useful in a\nproof. However,they arenot providedas part ofthe logicalprem ises, but haveto\nbe derived during the proofsearch.While the hints are specified whe n the prover\nis started, they are only used to guide the proof search - if a clause matches a\nhint, it is prioritized for processing. If all clauses needed for a proo f are provided\nas hints, in theory the prover can be guided to prove a theorem with out any\nsearch, i.e. it can replaya previous proof. A more general idea, explored in this\npaper, is to fill the watchlist with a large number of clauses useful in p roofs of\nsimilar problems.\nIn E, the watchlist is loaded on start-up, and is stored in a feature v ector\nindex [29] that allowsforefficient retrievalofsubsumed (and subsu ming) clauses.\nBy default, watchlist clauses are simplified in the same way as process ed clauses,\ni.e. they are kept in normal form with respect to clauses in P. This increases the\nchance that a new clause (which is always simplified) can match a similar w atch-\nlist clause. 
If used to control the proof search, subsumed clause s can optionally\nremain on the watchlist.\nWe haveextended E’sdomain-specificlanguagefor searchheuristic swith two\npriority functions to access information about the relationship of c lauses to the\nwatchlist - the function PreferWatchlist gives higher rank to clauses that sub-\nsume at least one watchlist clause, and the dual function DeferWatchlist ranks\nthem lower. Using the first, we have also defined four built-in heurist ics that\npreferably process watchlist clauses. These include a pure watchlis t heuristic,\na simple interleaved watch list function (picking 10 out of every eleven clauses\nfrom the watchlist, the last using FIFO), and a modification of a stro ng heuristic\nobtained from a genetic algorithm [27] that interleaves several diffe rent evalu-\nation schemes and was modified to prefer watchlist clauses in two of it s four\nsub-evaluation functions.\n5 Dynamic Watchlist Guidance\nIn addition to the above mentioned static watchlist guidance , we propose and ex-\nperiment with an alternative: dynamic watchlist guidance . With dynamic watch-\nlist guidance, several watchlists, as opposed to a single watchlist, a re loaded on\nstart-up.Separatewatchlists aresupposedto groupclauseswh ich aremore likely\nto appear together in a single proof. The easiest way to produce wa tchlists with\nthis property is to collect previously proved problems and use their p roofs as\nwatchlists. This is our current implementation, i.e., each watchlist cor responds\nto a previous proof. During a proof search, we maintain for each wa tchlist its\ncompletion status , i.e. the number of clauses that were already encountered. The\nmain idea behind our dynamic watchlist guidance is to prefer clauses wh ich ap-\npear on watchlists that are closer to completion. Since watchlists no w exactly\ncorrespond to previous refutational proofs, completion of any w atchlist implies\nthat the current proof search is finished.\n5.1 Watchlist Proof Progress\nLet watchlists W1,...,Wnbe given for a proof search. For each watchlist Wiwe\nkeep awatchlist progress counter , denoted progress(Wi), which is initially set to\n0. Whenever a clause Cis generated during the proof search, we have to check\nwhether Csubsumes some clause from some watchlist Wi. WhenCsubsumes\na clause from Wiwe increase progress(Wi) by 1. The subsumed clause from\nWiis then marked as encountered, and it is not considered in future wa tchlist\nsubsumption checks.3Note that a single generated clause Ccan subsume several\nclauses from one or more watchlists, hence several progress cou nters might be\nincreased multiple times as a result of generating C.\n5.2 Standard Dynamic Watchlist Relevance\nThe easiest way to use progress counters to guide given clause sele ction is to as-\nsign the(standard) dynamic watchlist relevance to each generated clause C, de-\nnotedrelevance 0(C), as follows. Whenever Cis generated, we check it againstall\nthe watchlists for subsumption and we update watchlist progress c ounters. Any\nclauseCwhichdoesnotsubsumeanywatchlistclauseisgiven relevance 0(C) = 0.\nWhenCsubsumes some watchlist clause, its relevance is the maximum watchlis t\ncompletion ratio over all the matched watchlists. 
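Ahead of the formal definition that follows, a sketch of this bookkeeping is given below. The list-based watchlist representation and the `subsumes` callback are assumptions made for illustration; in E the check goes through the feature-vector index mentioned in Section 4.

```python
def relevance0_update(c, watchlists, progress, subsumes):
    """Update the progress counters for a newly generated clause c and return
    relevance_0(c): the highest completion ratio progress(W)/|W| over the
    watchlists that c matched (0.0 if it matched none)."""
    best = 0.0
    for i, wl in enumerate(watchlists):
        original_size = progress[i] + len(wl)      # encountered + remaining clauses
        matched = [d for d in wl if subsumes(c, d)]
        if matched:
            for d in matched:
                wl.remove(d)                       # encountered: not matched again
            progress[i] += len(matched)
            best = max(best, progress[i] / original_size)
    return best
```

This sketches the default behaviour of removing matched clauses; under the "no-remove" option mentioned later, matched clauses would instead stay on the watchlist, with only their first match counted towards the progress counter.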
Formally, let us writ eC⊑Wi\nwhen clause Csubsumessomeclausefromwatchlist Wi.Fora clause Cmatching\nat least one watchlist, its relevance is computed as follows.\nrelevance 0(C) = max\nW∈{Wi:C⊑Wi}/parenleftBigprogress(W)\n|W|/parenrightBig\nThe assumption is that a watchlist Wthat is matched more is more relevant\nto the current proof search. In our current implementation, the relevance is\ncomputed at the time of generation of Cand it is not updated afterwards. As\nfuture work, we propose to also update the relevance of all gener ated but not yet\nprocessed clauses from time to time in order to reflect updates of t he watchlist\nprogresscounters.Notethat thisisexpensive,asthe numberof generatedclauses\nis typically high. Suitable indexing could be used to lower this cost or eve n to\ndo the update immediately just for the affected clauses.\nTo use the watchlist relevance in E, we extend E’s domain-specific lang uage\nfor search heuristics with two priority functions PreferWatchlistRelevant and\nDeferWatchlistRelevant . The first priority function ranks higher the clauses\nwith higherwatchlistrelevance4, andthe otherfunction doesthe opposite.These\npriority functions can be used to build E’s heuristics just like in the cas e of the\nstatic watchlist guidance. As a results, we can instruct E to proces s watchlist-\nrelevant clauses in advance.\n5.3 Inherited Dynamic Watchlist Relevance\nThe previous standard watchlist relevance prioritizes only clauses s ubsuming\nwatchlist clauses but it behaves indifferently with respect to other c lauses. In\n3Alternatively, the subsumed watchlist clause D∈Wican be considered for future\nsubsumption checks but the watchlist progress counter progress (Wi) should not be\nincreased when Dis subsumed again. This is because we want the progress count er\nto represent the number of different clauses from Wiencountered so far.\n4Technically, E’s priority function returns an integer prio rity, and clauses with smaller\nvalues are preferred. Hence we compute the priority as 1000 ∗(1−relevance 0(C)).\nordertoprovidesomeguidanceevenforclauseswhichdonotsubsu meanywatch-\nlist clause, we can examine the watchlist relevance of the parents of each gener-\natedclause,andprioritizeclauseswithwatchlist-relevantparents .Letparents(C)\ndenote the set of previously processed clauses from which Chave been derived.\nInherited dynamic watchlist relevance , denoted relevance 1, is a combination of\nthe standarddynamicrelevancewiththe averageofparentsrelev ancesmultiplied\nby adecayfactorδ <1.\nrelevance 1(C) =relevance 0(C)+δ∗avg\nD∈parents (C)/parenleftbig\nrelevance 1(D)/parenrightbig\nClearly, the inherited relevance equals to the standard relevance f or the initial\nclauses with no parents. The decay factor ( δ) determines the importance of par-\nents watchlist relevances.5Note that the inherited relevances of parents(C) are\nalready precomputed at the time of generating C, hence no recursive computa-\ntion is necessary.\nWith the above relevance 1we compute the average of parents inherited rel-\nevances, hence the inherited watchlist relevance accumulates rele vance of all the\nancestors. As a result, relevance 1(C) is greater than 0 if and only if Chas some\nancestor which subsumed a watchlist clause at some point. 
This might have an\nundesirable effect that clauses unrelated to the watchlist are comp letely ignored\nduring the proofsearch.In practice, however,it seems importan t to consideralso\nwatchlist-unrelated clauses with some degree in order to prove new conjectures\nwhich do not appear on the input watchlist. Hence we introduce two threshold\nparameters αandβwhich resets the relevance to 0 as follows. Let length(C)\ndenote the length of clause C, counting occurrences of symbols in C.\nrelevance 2(C) =/braceleftBigg\n0 iff relevance 1(C)< αandrelevance 1(C)\nlength(C)< β\nrelevance 1(C) otherwise\nParameter αis a threshold on the watchlist inherited relevance while βcombines\nthe relevance with the clause length.6As a result, shorter watchlist-unrelated\nclauses are preferred to longer (distantly) watchlist-related clau ses.\n5In our experiments, we use δ= 0.1\n6In our experiments, we use α= 0.03 andβ= 0.009. These values have been found\nuseful by a small grid search over a random sample of 500 probl ems.\n6 Experiments with Watchlist Guidance\nFor our experiments we construct watchlists from the proofs fou nd by E on\na benchmark of 57897 Mizar40 [19] problems in the MPTP dataset [35].7 8.\nThese initial proofs were found by an evolutionarily optimized [14] ens emble\nof 32 E strategies each run for 5 s. These are our baseline strategies. Due to\nlimited computational resources, we do most of the experiments wit h the top 5\nstrategies that (greedily) cover most solutions ( top 5 greedy cover ). These are\nstrategies number 2, 8, 9, 26 and 28, henceforth called A,B,C,D,E. In 5 s\n(in parallel) they together solve 21122 problems. We also evaluate th ese five\nstrategies in 10 seconds, jointly solving 21670 problems. The 21122 proofs yield\nover 100000 unique proof clauses that can be used for watchlist-b ased guidance\nin our experiments. We also use smaller datasets randomly sampled fr om the\nfull set of 57897 problems to be able to explore more methods. All pr oblems are\nrun on the same hardware9and with the same memory limits.\nEach E strategy is specified as a frequency-weighted combination o f parame-\nterizedclause evaluation functions (CEF) combined with a selection of inference\nrules. Below we show a simplified example strategy specifying the term order-\ningKBO, and combining (with weights 2 and 4) two CEFs made up of weight\nfunctions Clauseweight andFIFOWeight and priority functions DeferSOS and\nPreferWatchlist .\n-tKBO -H(2*Clauseweight(DeferSoS,20,9999,4),4*FIFOWe ight(PreferWatchlist))\n6.1 Watchlist Selection Methods\nWe have experimented with several methods for creation of static and dynamic\nwatchlists. Typically we use only the proofs found by a particular bas eline strat-\negy to construct the watchlists used for testing the guided versio n of that strat-\negy. Using all 100000+ proof clauses as a watchlist slows E down to 6 g iven\nclauses per second. This is comparable to the speed of Prover9 with similarly\nlarge watchlists, but there are indexing methods that could speed t his up. We\nhave run several smaller tests, but do not include this method in the evalua-\ntion due to limited computational resources. Instead, we select a s maller set of\nclauses. The methods are as follows:\n(art)Use all proof clauses from theorems in the problem’s Mizar article10. Such\nwatchlist sizes range from 0 to 4000, which does not cause any signifi cant\nslowdown of E.\n7Precisely, we have used the small ( bushy , re-proving) ver-\nsions, but without ATP minimization. 
They can be found at\nhttp://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/MPTP2/problems_small_consis t.tar.gz\n8Experimental results and code can be found at\nhttps://github.com/ai4reason/eprover-data/tree/mast er/ITP-18 .\n9Intel(R) Xeon(R) CPU E5-2698 v3 @ 2.30GHz with 256G RAM.\n10Excluding the current theorem.\n(freq)Use high-frequency proof clauses for static watchlists, i.e., clause s that ap-\npear in many proofs.\n(kNN-st) Usek-nearest neighbor ( k-NN) learning to suggest useful static watchlists\nfor each problem, based on symbol and term-based features [20] of the con-\njecture. This is very similar to the standard use of k-NN and other learners\nfor premise selection. In more detail, we use symbols, walks of length 2 on\nformula trees and common subterms (with variables and skolem symb ols\nunified). Each proof is turned into a multi-label training example, whe re the\nlabels are the (serially numbered) clauses used in the proof, and the features\nare extracted from the conjecture.\n(kNN-dyn) Usek-NN in a similar way to suggest the most related proofs for dynamic\nwatchlists. This is done in two iterations.\n(i)In the firstiteration,only the conjecture-basedsimilarityisused t oselect\nrelated problems and their proofs.\n(ii)The seconditerationthen uses datamined from the proofs obtaine dwith\ndynamicguidanceinthe firstiteration.Fromeachsuchproof Pwecreate\na training example associating P’s conjecture features with the names of\nthe proofs that matched (i.e., guided the inference of) the clauses needed\ninP. On this dataset we again train a k-NN learner, which recommends\nthe most useful related proofs for guiding a particular conjectur e.\n6.2 Using Watchlists in E Strategies\nAs described in Section 4, watchlist subsumption defines the PreferWatchlist\npriority function that prioritizes clauses that subsume at least one watchlist\nclause.Belowwedescribeseveralwaystousethis priorityfunction andthe newly\ndefined dynamic PreferWatchlistRelevant priority function and its relevance-\ninheriting modifications. Each of them can additionally take the “no-r emove”\noption, to keep subsumed watchlist clauses in the watchlist, allowing r epeated\nmatching by different clauses. Preliminary testing has shown that ju st adding a\nsingle watchlist-based clause evaluation function ( CEF) to the baseline CEFs11\nis not as good as the methods defined below. In the rest of the pape r we provide\nshort names for the methods, such as prefA(baseline strategy Amodified by the\nprefmethod described below).\n1.evo: the default heuristic strategy (Section 4) evolved (genetically [2 7]) for\nstatic watchlist use.\n2.pref:replaceallpriorityfunctionsinabaselinestrategywiththe PreferWatch-\nlistpriority function. The resulting strategies look as follows:\n-H(2*Clauseweight(PreferWatchlist,20,9999,4),\n4*FIFOWeight(PreferWatchlist))\n11Specifically we tried adding Defaultweight(PreferWatchli st) and ConjectureRela-\ntiveSymbolWeight(PreferWatchlist) with frequencies 1 ,2,5,10,20 times that of the\nrest of the CEFs in the strategy.\n3.const: replace all priority functions in a baseline strategy with ConstPrio ,\nwhich assignsthe same priorityto allclauses,soall rankingis doneby weight\nfunctions alone.\n4.uwl: always prefer clauses that match the watchlist, but use the base line\nstrategy’s priority function otherwise12.\n5.ska: modify watchlist subsumption in E to treat all skolem symbols of the\nsame arity as equal, thus widening the watchlist guidance. This can be used\nwith any strategy. 
In this paper it is used with pref.\n6.dyn:replaceallpriorityfunctionsinabaselinestrategywith PreferWatchlist-\nRelevant , which dynamically weights watchlist clauses (Section 5.2).\n7.dyndec: add the relevance inheritance mechanisms to dyn(Section 5.3).\n6.3 Evaluation\nFirst we measure the slowdown caused by larger static watchlists on the best\nbaseline strategyand a random sample of10000problems.The resu lts are shown\nin Table 1. We see that the speed significantly degrades with watchlist s of size\n10000, while 500-big watchlists incur only a small performance penalt y.\nSize 10 100 256 512 1000 10000\nproved 3275 3275 3287 3283 3248 2912\nPPS 8935 9528 8661 7288 4807 575\nTable 1. Tests of the watchlist size influence (ordered by frequency) on a random\nsample of 10000 problems using the ”no-remove” option and on e static watchlist with\nstrategy prefA . PPS is average processed clauses per second, a measure of E’ s speed.\nTable 2 shows the 10 s evaluation of several static and dynamic meth ods on\na random sample of 5000 problems using article-based watchlists (me thodart\nin Section 6.1). For comparison, E’s autostrategy proves 1350 of the problems\nin 10 s and its auto-schedule proves 1629. Given 50 seconds the auto-schedule\nproves 1744 problems compared to our top 5 cover’s 1964.\nThe first surprisingresult is that constsignificantly outperforms the baseline.\nThis indicates that the old-style simple E priority functions may do mor e harm\nthan good if they are allowed to override the more recent and sophis ticated\nweight functions. The skastrategy performs best here and a variety of strategies\nprovide better coverage. It’s interesting to note that skaandprefoverlap only\non 1893 problems. The original evostrategy performs well, but lacks diversity.\nTable3brieflyevaluates k-NNselectionofwatchlistclauses(method kNN-st\nin Section 6.1) on a single strategy prefA. Next we use k-NN to suggest watchlist\nproofs13(method kNN-dyn.i ) forprefanddyn. Table 4 evaluates the influence\nof the number of related proofs loaded for the dynamic strategies . Interestingly,\n12uwlis implemented in E’s source code as an option.\n13All clauses in suggested proofs are used.\nStrategy baseline const pref ska dyn evo uwl\nA 1238 1493 1503 1510 1500 1303 1247\nB 1255 1296 1315 1330 1316 1300 1277\nC 1075 1166 1205 1183 1201 1068 1097\nD 1102 1133 1176 1190 11751330 1132\nE 11381141 1141 1153 1139 1070 1139\ntotal 1853 1910 1931 1933 1922 1659 1868\nTable 2. Article-based watchlist benchmark. A top 5 greedy cover pro ves 1964 prob-\nlems (in bold).\nWatchlist size 16 64 256 1024 2048\nProved 1518 1531 1528 1532 1520\nTable 3. Evaluation of kNN-st on prefA\nprefoutperforms dynalmost everywhere but dyn’s ensemble of strategies A-E\ngenerally performs best and the top 5 cover is better. We conclude thatdyn’s\ndynamic relevance weighting allows the strategies to diversify more.\nTable 5 evaluates the top 5 greedy cover from Table 4 on the full Miza r\ndataset,alreadyshowingsignificantimprovementoverthe21670p roofsproduced\nby the 5 baseline strategies. Based on proof data from a full-run of the top-5\ngreedy cover in Table 5, new k-NN proof suggestions were made (me thodkNN-\ndyn.ii) anddyn’s grid search re-run, see Table 6 and Table 7 for k-NN round 2\nresults.\nWe also test the relevance inheriting dynamic watchlist feature ( dyndec),\nprimarily to determine if different proofs can be found. The results a re shown\nin Table 8. 
This version adds 8 problems to the top 5 greedy cover of a ll the\nstrategiesrunonthe 5000problemdataset,makingituseful in asc heduledespite\nlowerperformancealone.Table9showsthisgreedycover,andthe nits evaluation\non the full dataset. The 23192 problems proved by our new greedy cover is a 7%\nimprovement over the top 5 baseline strategies.\n7 Examples\nThe Mizar theorem YELLOW5:3614states De Morgan’s laws for Boolean lattices:\ntheorem Th36: :: YELLOW_5:36\nfor L being non empty Boolean RelStr for a, b being Element of L\nholds ( ’not’ (a \" ∨\" b) = (’not’ a) \" ∧\" (’not’ b)\n& ’not’ (a \" ∧\" b) = (’not’ a) \" ∨\" (’not’ b) )\nUsing 32 related proofs results in 2220 clauses placed on the watchlis ts. The\ndynamically guided proof search takes 5218 (nontrivial) given clause loops done\nin 2 s and the resulting ATP proof is 436 inferences long. There are 19 4 given\nclauses that match the watchlist during the proof search and 120 ( 61.8%) of\n14http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/yellow_5#T36\nsize dynA dynB dynC dynD dynE total\n4 1531 1352 1235 1194 1165 1957\n8 1543 1366 1253 1188 1170 1956\n16 1529 1357 1224 1218 1185 1951\n321546 1373 1240 1218 1188 1962\n64 15351376 1216 1215 1166 1935\n128 1506 1351 1195 1214 1147 1907\n1024 1108 963 710 943 765 1404\nsize prefA prefB prefC prefD prefE total\n4 1539 1369 1210 1220 1159 1944\n8 1554 1385 1219 1240 1168 1941\n161572 1405 1225 1254 1180 1952\n32 1568 1412 1231 1271 1190 1958\n64 1567 1402 1228 1262 1172 1952\n1281552 1388 1210 1248 1160 1934\n1024 1195 1061 791 991 806 1501\nTable 4. k-NN proof recommendation watchlists ( kNN-dyn.i ) fordyn pref . Size is\nnumber of proofs, averaging 40 clauses per proof. A top 5 gree dy cover of dynproves\n1972 and pref proves 1959 (in bold).\ndynA 32 dynC 8 dynD 16 dynE 4 dynB 64\nadded 17964 2531 1024 760 282\ntotal 17964 14014 14294 13449 16175\nTable 5. K-NN round 1 greedy cover on full dataset and proofs added by e ach suc-\ncessive strategy for a total of 22579. dynA 32 means strategy dynA using 32 proof\nwatchlists.\nsize dyn2A dyn2B dyn2C dyn2D dyn2E total round 1 total\n4 15391368 1235 1209 1179 1961 1957\n8 1554 1376 1253 1217 1183 1971 1956\n161565 13821256 1221 1181 1972 1951\n32 1557 1383 1252 1227 1182 1968 1962\n64 1545 1385 1244 1222 1171 1963 1935\n128 1531 1374 1221 1227 1171 1941 1907\nTable 6. Problems proved by round 2 k-NN proof suggestions ( kNN-dyn.ii ). The\ntop 5 greedy cover proves 1981 problems (in bold). dyn2A meansdynA run on the 2nd\niteration of k-NN suggestions.\ndyn2A 16 dyn2C 16 dyn2D 32 dyn2E 4 dyn2B 4\ntotal 18583 14486 14720 13532 16244\nadded 18583 2553 1007 599 254\nTable 7. K-NN round 2 greedy cover on full dataset and proofs added by e ach succes-\nsive strategy for a total of 22996\nsize dyndec2A dyndec2B dyndec2C dyndec2D dyndec2E total\n4 1432 1354 1184 1203 1152 1885\n16 1384 1316 1176 1221 1140 1846\n32 1381 1309 1157 1209 1133 1820\n128 1326 1295 1127 1172 1082 1769\nTable 8. Problems proved by round 2 k-NN proof suggestions with dyndec . The top 5\ngreedy cover proves 1898 problems (in bold).\ntotal dyn2A 16 dyn2C 16 dyndec2D 16 dyn2E 4 dyndec2A 128\n2007 1565 230 97 68 47\n23192 18583 2553 1050 584 422\n23192 18583 14486 14514 13532 15916\nTable 9. Top: Cumulative sum of the 5000 test set greedy cover. The k-N N based\ndynamic watchlist methods dominate, improving by 2 .1% over the baseline and article-\nbased watchlist strategy greedy cover of 1964 (Table 2). 
Bot tom: Greedy cover run on\nthe full dataset, cumulative and total proved.\nthem end up being part of the proof. I.e., 27.5% of the proof consist s of steps\nguided by the watchlist mechanism. The proof search using the same settings,\nbut without the watchlist takes 6550 nontrivial given clause loops (2 5.5% more).\nThe proof of the theorem WAYBEL1:8515is considerably used for this guidance:\ntheorem :: WAYBEL_1:85\nfor H being non empty lower-bounded RelStr st H is Heyting hol ds\nfor a, b being Element of H holds ’not’ (a \" ∧\" b) >= (’not’ a) \" ∨\" (’not’ b)\nNote that this proof is done under the weaker assumptions of H bein g lower\nbounded and Heyting, rather than being Boolean. Yet, 62 (80.5%) o f the 77\nclauses from the proof of WAYBEL1:85are eventually matched during the proof\nsearch. 38 (49.4%) of these 77 clauses are used in the proof of YELLOW5:36. In\nTable 10 we show the final state of proof progress for the 32 loade d proofs after\nthe last non empty clause matched the watchlist. For each we show b oth the\ncomputed ratio and the number of matched and all clauses.\nAn example of a theorem that can be proved in 1.2 s with guidance but\ncannot be proved in 10 s with any unguided method is the following theo rem\nBOOLEALG:6216about the symmetric difference in Boolean lattices:\nfor L being B_Lattice\nfor X, Y being Element of L holds (X \\+\\ Y) \\+\\ (X \" ∧\" Y) = X \" ∨\" Y\nUsing 32 related proofs results in 2768 clauses placed on the watchlis ts. The\nproof search then takes 4748 (nontrivial) given clause loops and th e watchlist-\nguided ATP proof is 633 inferences long. There are 613 given clauses that match\nthe watchlist during the proof search and 266 (43.4%) of them end u p being\npart of the proof. I.e., 42% of the proof consists of steps guided b y the watchlist\n15http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/waybel_1#T85\n16http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/boolealg#T62\n0 0.438 42/96 1 0.727 56/77 2 0.865 45/52 3 0.360 9/25\n4 0.750 51/68 5 0.259 7/27 6 0.805 62/77 7 0.302 73/242\n8 0.652 15/23 9 0.286 8/28 10 0.259 7/27 11 0.338 24/71\n12 0.680 17/25 13 0.509 27/53 14 0.357 10/28 15 0.568 25/44\n16 0.703 52/74 17 0.029 8/272 18 0.379 33/87 19 0.424 14/33\n20 0.471 16/34 21 0.323 20/62 22 0.333 7/21 23 0.520 26/50\n24 0.524 22/42 25 0.523 45/86 26 0.462 6/13 27 0.370 20/54\n28 0.411 30/73 29 0.364 20/55 30 0.571 16/28 31 0.357 10/28\nTable 10. Final state of the proof progress for the (serially numbered ) 32 proofs loaded\nto guide the proof of YELLOW5:36. We show the computed ratio and the number of\nmatched and all clauses.\nmechanism. Among the theorems whose proofs are most useful fo r the guidance\nare the following theorems LATTICES:2317,BOOLEALG:3318andBOOLEALG:5419\non Boolean lattices:\ntheorem Th23: :: LATTICES:23\nfor L being B_Lattice\nfor a, b being Element of L holds (a \" ∧\" b)‘ = a‘ \" ∨\" b‘\ntheorem Th33: :: BOOLEALG:33\nfor L being B_Lattice for X, Y being Element of L holds X \\ (X \" ∧\" Y) = X \\ Y\ntheorem :: BOOLEALG:54\nfor L being B_Lattice for X, Y being Element of L\nst X‘ \" ∨\" Y‘ = X \" ∨\" Y & X misses X‘ & Y misses Y‘\nholds X = Y‘ & Y = X‘\nFinally, we show several theorems20–23with nontrivial Mizar proofs and\nrelatively long ATP proofs obtained with significant guidance. 
These t heorems\ncannot be proved by any other method used in this work.\ntheorem :: BOOLEALG:68\nfor L being B_Lattice for X, Y being Element of L\nholds (X \\+\\ Y)‘ = (X \" ∧\" Y) \"∨\" ((X‘) \" ∧\" (Y‘))\ntheorem :: CLOSURE1:21\nfor I being set for M being ManySortedSet of I\nfor P, R being MSSetOp of M st P is monotonic & R is monotonic\nholds P ** R is monotonic\ntheorem :: BCIALG_4:44\nfor X being commutative BCK-Algebra_with_Condition(S)\nfor a, b, c being Element of X st Condition_S (a,b) c= Initial_ section c holds\nfor x being Element of Condition_S (a,b) holds x <= c \\ ((c \\ a) \\ b)\ntheorem :: XXREAL_3:67\nfor f, g being ext-real number holds (f * g)\"=(f\") * (g\")\n17http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/lattices#T23\n18http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/boolealg#T33\n19http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/boolealg#T54\n20http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/boolealg#T68\n21http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/closure1#T21\n22http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/bcialg_4#T44\n23http://grid01.ciirc.cvut.cz/ ~mptp/7.13.01_4.181.1147/html/xxreal_3#T67\n8 Related Work and Possible Extensions\nThe closest related work is the hint guidance in Otter and Prover9. O ur focus is\nhoweveronlargeITP-styletheorieswithlargesignaturesandhete rogeneousfacts\nand proofsspanning variousareasofmathematics. This motivates usingmachine\nlearning for reducing the size of the static watchlists and the impleme ntation of\nthe dynamic watchlist mechanisms. Several implementations of inter nal proof\nsearch guidance using statistical learning have been mentioned in Se ctions 1 and\n3. In both the tableau-based systems and the tactical ITP syste ms the statistical\nlearning guidance benefits from a compact and directly usable notion of proof\nstate, which is not immediately available in saturation-style ATP.\nBy delegating the notion of similarity to subsumption we are relying on f ast,\ncrisp and well-known symbolic ATP mechanisms. This has advantages a s well as\ndisadvantages.Comparedto the ENIGMA[15] andneural[23] sta tisticalguiding\nmethods, the subsumption-based notion of clause similarity is not fe ature-based\nor learned. This similarity relation is crisp and sparser compared to th e similar-\nity relations induced by the statistical methods. The proof guidanc e is limited\nwhen no derived clauses subsume any of the loaded proof clauses. T his can be\ncountered by loading a high number of proofs and widening (or softe ning) the\nsimilarity relation in various approximate ways. On the other hand, su bsump-\ntion is fast compared to the deep neural methods (see [23]) and en joys clear\nguarantees of the underlying symbolic calculus. For example, when a ll the (non\nempty) clauses from a loaded related proof have been subsumed in t he current\nproof search, it is clear that the current proof search is success fully finished.\nA clear novelty is the focusing of the proof search towards the (po ssibly im-\nplausible) inferencesneeded forcompletingthe loadedproofs.Exis tingstatistical\nguiding methods will fail to notice such opportunities, and the static watchlist\nguidance has no way of distinguishing the watchlist matchers that lea d faster to\nproof completion. 
In a way this mechanism resembles the feedback o btained by\nMonte Carlo exploration, where a seemingly statistically unlikely decisio n can\nbe made, based on many rollouts and averaging of their results. Ins tead, we rely\nhere on a database of previous proofs, similar to previously played a nd finished\ngames. The newly introduced heuristic proof search (proof progr ess) representa-\ntion may however enable further experiments with Monte Carlo guida nce.\n8.1 Possible Extensions\nSeveral extensions have been already discussed above. We list the most obvious.\nMore sophisticated progress metrics :The current proof-progress criterion\nmay be too crude. Subsuming all the initialclauses of a related proof is unlikely\nuntil the empty clause is derived. In general, a large part of a relate d proof may\nnot be needed once the right clauses in the “middle of the proof” are subsumed\nby the current proof search. A better proof-progress metric w ould compute the\nsmallest number of proof clauses that are still needed to entail the contradiction.\nThis is achievable, however more technically involved, also due to issue s such as\nrewriting of the watchlist clauses during the current proof search .\nClause re-evaluation based on the evolving proof relevance :As more\nand more watchlist clauses are matched, the proof relevance of th e clauses gen-\nerated earlier should be updated to mirror the current state. This is in general\nexpensive, so it could be done after each Ngiven clause loops or after a sig-\nnificant number of watchlist matchings. An alternative is to add corr esponding\nindexing mechanisms to the set of generated clauses, which will immed iately\nreorder them in the evaluation queues based on the proof relevanc e updates.\nMore abstract/approximate matching :Instead of the strict notion of sub-\nsumption, more abstract or heuristic matching methods could be us ed. An inter-\nesting symbolic method to consider is matching modulo symbol alignmen ts [9].\nA number of approximate methods are already used by the above me ntioned\nstatistical guiding methods.\nAdding statistical methods for clause guidance :Instead of using only\nhard-coded watchlist-style heuristics for focusing inferences, a statistical (e.g.\nENIGMA-style) clause evaluation model could be trained by adding th e vector\nof proof completion ratios to the currently used ENIGMA features .\n9 Conclusion\nTheportfolioofnewproofguidingmethodsdevelopedheresignifican tlyimproves\nE’s standard portfolio of strategies, and also the previous best se t of strategies\ninvented for Mizar by evolutionary methods. The best combination o f five new\nstrategiesrunin parallelfor10seconds(areasonablehammeringt ime) will prove\nover 7% more Mizar problems than the previous best combination of fi ve non-\nwatchlist strategies. Improvement over E’s standard portfolio is m uch higher.\nEven though we focus on developing the strongest portfolio rathe r than a single\nbest method, it is clear that the best guided versions also significant ly improve\nover their non-guided counterparts. This improvement for the be st new strategy\n(dyn2Aused with 16 most relevant proofs) is 26.5% (= 18583 /14693). These are\nrelatively high improvements in automated theorem proving.\nWe have shown that the new dynamic methods based on the idea of pr oof\ncompletionratiosimproveoverthestaticwatchlistguidance.We hav ealsoshown\nthat as usual with learning-based guidance, iterating the methods to produce\nmore proofs leads to stronger methods in the next iteration. 
The fi rst experi-\nments with widening the watchlist-based guidance by relatively simple in heri-\ntance mechanisms seem quite promising, contributing many new proo fs. A num-\nber of extensions and experiments with guiding saturation-style pr oving have\nbeen opened for future research. We believe that various extens ions of the com-\npact and evolving heuristic representation of saturation-style pr oof search as\nintroduced here will turn out to be of great importance for furthe r development\nof learning-based saturation provers.\n10 Acknowledgments\nWe thank Bob Veroff for many enlightening explanations and discussio ns of\nthe watchlist mechanisms in Otter and Prover9. His “industry-grad e” projects\nthat prove open and interesting mathematical conjectures with h ints and proof\nsketches have been a great sort of inspiration for this work.\nReferences\n1. J. Alama, T. Heskes, D. K¨ uhlwein, E. Tsivtsivadze, and J. Urban. Premise selection\nfor mathematics by corpus analysis and kernel methods. J. Autom. Reasoning ,\n52(2):191–213, 2014.\n2. A. A. Alemi, F. Chollet, N. E´ en, G. Irving, C. Szegedy, and J. Urban. DeepMath\n- deep sequence models for premise selection. In D. D. Lee, M. Sugiyama, U. V.\nLuxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Pro-\ncessing Systems 29: Annual Conference on Neural Informatio n Processing Systems\n2016, December 5-10, 2016, Barcelona, Spain , pages 2235–2243, 2016.\n3. L. Bachmair and H. Ganzinger. Rewrite-Based Equational T heorem Proving with\nSelection and Simplification. Journal of Logic and Computation , 3(4):217–247,\n1994.\n4. J. C. Blanchette, D. Greenaway, C. Kaliszyk, D. K¨ uhlwein , and J. Urban. A\nlearning-based fact selector for Isabelle/HOL. J. Autom. Reasoning , 57(3):219–\n244, 2016.\n5. J. C. Blanchette, C. Kaliszyk, L. C. Paulson, and J. Urban. Hammering towards\nQED.J. Formalized Reasoning , 9(1):101–148, 2016.\n6. T. Eiter and D. Sands, editors. LPAR-21, 21st International Conference on Logic\nfor Programming, Artificial Intelligence and Reasoning, Ma un, Botswana, May 7-\n12, 2017 , volume 46 of EPiC Series in Computing . EasyChair, 2017.\n7. M. F¨ arber and C. Kaliszyk. Random forests for premise sel ection. In C. Lutz and\nS. Ranise, editors, Frontiers of Combining Systems - 10th International Sympo-\nsium, FroCoS 2015, Wroclaw, Poland, September 21-24, 2015. Proceedings , volume\n9322 ofLecture Notes in Computer Science , pages 325–340. Springer, 2015.\n8. M. F¨ arber, C. Kaliszyk, and J. Urban. Monte Carlo tableau proof search. In\nL. de Moura, editor, Automated Deduction - CADE 26 - 26th International Con-\nference on Automated Deduction, Gothenburg, Sweden, Augus t 6-11, 2017, Proceed-\nings, volume 10395 of Lecture Notes in Computer Science , pages 563–579. Springer,\n2017.\n9. T. Gauthier and C. Kaliszyk. Matching concepts across HOL libraries. In S. M.\nWatt, J. H. Davenport, A. P. Sexton, P. Sojka, and J. Urban, ed itors,CICM’15 ,\nvolume 8543 of LNCS , pages 267–281. Springer, 2014.\n10. T. Gauthier, C. Kaliszyk, and J. Urban. TacticToe: Learn ing to reason with HOL4\ntactics. In Eiter and Sands [6], pages 125–143.\n11. A. Grabowski, A. Korni/suppress lowicz, and A. Naumowicz. Mizar i n a nutshell. J. For-\nmalized Reasoning , 3(2):153–245, 2010.\n12. T. Gransden, N. Walkinshaw, and R. Raman. SEPIA: search f or proofs using\ninferred automata. 
In Automated Deduction - CADE-25 - 25th International Con-\nference on Automated Deduction, Berlin, Germany, August 1- 7, 2015, Proceedings ,\npages 246–255, 2015.\n13. J. Jakubuv and J. Urban. Extending E prover with similari ty based clause selection\nstrategies. In M. Kohlhase, M. Johansson, B. R. Miller, L. de Moura, and F. W.\nTompa, editors, Intelligent Computer Mathematics - 9th International Conf erence,\nCICM 2016, Bialystok, Poland, July 25-29, 2016, Proceeding s, volume 9791 of\nLecture Notes in Computer Science , pages 151–156. Springer, 2016.\n14. J. Jakubuv and J. Urban. BliStrTune: hierarchical inven tion of theorem proving\nstrategies. In Y. Bertot and V. Vafeiadis, editors, Proceedings of the 6th ACM\nSIGPLAN Conference on Certified Programs and Proofs, CPP 201 7, Paris, France,\nJanuary 16-17, 2017 , pages 43–52. ACM, 2017.\n15. J. Jakubuv and J. Urban. ENIGMA: efficient learning-based inference guiding\nmachine. In H. Geuvers, M. England, O. Hasan, F. Rabe, and O. T eschke, editors,\nIntelligent Computer Mathematics - 10th International Con ference, CICM 2017,\nEdinburgh, UK, July 17-21, 2017, Proceedings , volume 10383 of Lecture Notes in\nComputer Science , pages 292–302. Springer, 2017.\n16. C. Kaliszyk, S. Schulz, J. Urban, and J. Vyskocil. System description: E.T. 0.1.\nIn A. P. Felty and A. Middeldorp, editors, Automated Deduction - CADE-25 -\n25th International Conference on Automated Deduction, Ber lin, Germany, August\n1-7, 2015, Proceedings , volume 9195 of Lecture Notes in Computer Science , pages\n389–398. Springer, 2015.\n17. C. Kaliszyk and J. Urban. Learning-assisted automated r easoning with Flyspeck.\nJ. Autom. Reasoning , 53(2):173–213, 2014.\n18. C. Kaliszyk and J. Urban. FEMaLeCoP: Fairly efficient mach ine learning connec-\ntion prover. In M. Davis, A. Fehnker, A. McIver, and A. Voronk ov, editors, Logic\nfor Programming, Artificial Intelligence, and Reasoning - 2 0th International Con-\nference, LPAR-20 2015, Suva, Fiji, November 24-28, 2015, Pr oceedings , volume\n9450 ofLecture Notes in Computer Science , pages 88–96. Springer, 2015.\n19. C. Kaliszyk and J. Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning , 55(3):245–\n256, 2015.\n20. C. Kaliszyk, J. Urban, and J. Vyskoˇ cil. Efficient semanti c features for automated\nreasoning over large theories. In Q. Yang and M. Wooldridge, editors,IJCAI’15 ,\npages 3084–3090. AAAI Press, 2015.\n21. M. K. Kinyon, R. Veroff, and P. Vojtechovsk´ y. Loops with a belian inner mapping\ngroups: An application of automated deduction. In M. P. Bona cina and M. E.\nStickel, editors, Automated Reasoning and Mathematics - Essays in Memory of\nWilliam W. McCune , volume 7788 of LNCS , pages 151–164. Springer, 2013.\n22. L. Kov´ acs and A. Voronkov. First-order theorem proving and Vampire. In\nN. Sharygina and H. Veith, editors, CAV , volume 8044 of LNCS , pages 1–35.\nSpringer, 2013.\n23. S. M. Loos, G. Irving, C. Szegedy, and C. Kaliszyk. Deep ne twork guided proof\nsearch. In Eiter and Sands [6], pages 85–105.\n24. W. McCune and L. Wos. Otter: The CADE-13 Competition Inca rnations. Journal\nof Automated Reasoning , 18(2):211–220, 1997. Special Issue on the CADE 13 ATP\nSystem Competition.\n25. W. W. McCune. Prover9 and Mace4. http://www.cs.unm.edu/ ~mccune/prover9/ ,\n2005–2010. (acccessed 2016-03-29).\n26. J. Otten and W. Bibel. leanCoP: lean connection-based th eorem proving. J. Symb.\nComput. , 36(1-2):139–161, 2003.\n27. S. Sch¨ afer and S. Schulz. Breeding theorem proving heur istics with genetic algo-\nrithms. 
In G. Gottlob, G. Sutcliffe, and A. Voronkov, editors ,Global Conference\non Artificial Intelligence, GCAI 2015, Tbilisi, Georgia, Oc tober 16-19, 2015 , vol-\nume 36 of EPiC Series in Computing , pages 263–274. EasyChair, 2015.\n28. S. Schulz. Learning Search Control Knowledge for Equati onal Theorem Proving. In\nF. Baader, G. Brewka, and T. Eiter, editors, Proc. of the Joint German/Austrian\nConference on Artificial Intelligence (KI-2001) , volume 2174 of LNAI , pages 320–\n334. Springer, 2001.\n29. S. Schulz. Simple and Efficient Clause Subsumption with Fe ature Vector Indexing.\nIn M. P. Bonacina and M. E. Stickel, editors, Automated Reasoning and Mathe-\nmatics: Essays in Memory of William W. McCune , volume 7788 of LNAI , pages\n45–67. Springer, 2013.\n30. S. Schulz. System description: E 1.8. In K. L. McMillan, A . Middeldorp, and\nA. Voronkov, editors, LPAR , volume 8312 of LNCS , pages 735–743. Springer, 2013.\n31. S. Schulz and M. M¨ ohrmann. Performance of clause select ion heuristics for\nsaturation-based theorem proving. In N. Olivetti and A. Tiw ari, editors, Proc.\nof the 8th IJCAR, Coimbra , volume 9706 of LNAI , pages 330–345. Springer, 2016.\n32. D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. v an den Driessche,\nJ. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lan ctot, S. Dieleman,\nD. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. P. Lilli crap, M. Leach,\nK. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the g ame of go with\ndeep neural networks and tree search. Nature , 529(7587):484–489, 2016.\n33. D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M . Lai, A. Guez, M. Lanctot,\nL. Sifre, D. Kumaran, T. Graepel, T. P. Lillicrap, K. Simonya n, and D. Hassabis.\nMastering chess and shogi by self-play with a general reinfo rcement learning algo-\nrithm.CoRR , abs/1712.01815, 2017.\n34. K. Slind and M. Norrish. A brief overview of HOL4. In O. A. M ohamed, C. A.\nMu˜ noz, and S. Tahar, editors, Theorem Proving in Higher Order Logics, 21st In-\nternational Conference, TPHOLs 2008, Montreal, Canada, Au gust 18-21, 2008.\nProceedings , volume 5170 of LNCS , pages 28–32. Springer, 2008.\n35. J. Urban. MPTP 0.2: Design, implementation, and initial experiments. J. Autom.\nReasoning , 37(1-2):21–43, 2006.\n36. J. Urban, J. Vyskoˇ cil, and P. ˇStˇ ep´ anek. MaLeCoP: Machine learning connection\nprover. In K. Br¨ unnler and G. Metcalfe, editors, TABLEAUX , volume 6793 of\nLNCS , pages 263–277. Springer, 2011.\n37. R. Veroff. Using hints to increase the effectiveness of an a utomated reasoning\nprogram: Case studies. Journal of Automated Reasoning , 16(3):223–239, 1996.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "0fpw5bOU_82", "year": null, "venue": "EC 2018", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=0fpw5bOU_82", "arxiv_id": null, "doi": null }
{ "title": "Prophet Secretary: Surpassing the 1-1/e Barrier", "authors": [ "Yossi Azar", "Ashish Chiplunkar", "Haim Kaplan" ], "abstract": "In the Prophet Secretary problem, samples from a known set of probability distributions arrive one by one in a uniformly random order, and an algorithm must irrevocably pick one of the samples as soon as it arrives. The goal is to maximize the expected value of the sample picked relative to the expected maximum of the distributions. This is one of the most simple and fundamental problems in online decision making that models the process selling one item to a sequence of costumers. For a closely related problem called the Prophet Inequality where the order of the random variables is adversarial, it is known that one can achieve in expectation 1/2 of the expected maximum, and no better ratio is possible. For the Prophet Secretary problem, that is, when the variables arrive in a random order, Esfandiari et al. (2015) showed that one can actually get 1-1/e of the maximum. The 1-1/e bound was recently extended to more general settings by Ehsani et al. (2018). Given these results, one might be tempted to believe that 1-1/e is the correct bound. We show that this is not the case by providing an algorithm for the Prophet Secretary problem that beats the 1-1/e bound and achieves 1-1/e+1/400 times the expected maximum. We also prove a hardness result on the performance of algorithms under a natural restriction which we call deterministic distribution-insensitivity.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lGKwu4FWyC", "year": null, "venue": "IEEE Trans. Pattern Anal. Mach. Intell. 2023", "pdf_link": "https://ieeexplore.ieee.org/iel7/34/10036240/09816125.pdf", "forum_link": "https://openreview.net/forum?id=lGKwu4FWyC", "arxiv_id": null, "doi": null }
{ "title": "E$^{3}$3Outlier: a Self-Supervised Framework for Unsupervised Deep Outlier Detection", "authors": [ "Siqi Wang", "Yijie Zeng", "Guang Yu", "Zhen Cheng", "Xinwang Liu", "Sihang Zhou", "En Zhu", "Marius Kloft", "Jianping Yin", "Qing Liao" ], "abstract": "Existing unsupervised outlier detection (OD) solutions face a grave challenge with surging visual data like images. Although deep neural networks (DNNs) prove successful for visual data, deep OD remains difficult due to OD's unsupervised nature. This paper proposes a novel framework named <italic xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">E<inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math><alternatives><mml:math><mml:msup><mml:mrow/><mml:mn>3</mml:mn></mml:msup></mml:math><inline-graphic xlink:href=\"wang-ieq2-3188763.gif\"/></alternatives></inline-formula>Outlier</i> that can perform <bold xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">e</b> ffective and <bold xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">e</b> nd-to- <bold xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">e</b> nd deep outlier removal. Its core idea is to introduce <italic xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">self-supervision</i> into deep OD. Specifically, our major solution is to adopt a discriminative learning paradigm that creates multiple pseudo classes from given unlabeled data by various data operations, which enables us to apply prevalent discriminative DNNs (e.g., ResNet) to the unsupervised OD problem. Then, with theoretical and empirical demonstration, we argue that inlier priority, a property that encourages DNN to prioritize inliers during self-supervised learning, makes it possible to perform end-to-end OD. Meanwhile, unlike frequently-used outlierness measures (e.g., density, proximity) in previous OD methods, we explore network uncertainty and validate it as a highly effective outlierness measure, while two practical score refinement strategies are also designed to improve OD performance. Finally, in addition to the discriminative learning paradigm above, we also explore the solutions that exploit other learning paradigms (i.e., generative learning and contrastive learning) to introduce self-supervision for <italic xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">E<inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math><alternatives><mml:math><mml:msup><mml:mrow/><mml:mn>3</mml:mn></mml:msup></mml:math><inline-graphic xlink:href=\"wang-ieq3-3188763.gif\"/></alternatives></inline-formula>Outlier</i> . Such extendibility not only brings further performance gain on relatively difficult datasets, but also enables <italic xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">E<inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math><alternatives><mml:math><mml:msup><mml:mrow/><mml:mn>3</mml:mn></mml:msup></mml:math><inline-graphic xlink:href=\"wang-ieq4-3188763.gif\"/></alternatives></inline-formula>Outlier</i> to be applied to other OD applications like video abnormal event detection. 
Extensive experiments demonstrate that <italic xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">E<inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math><alternatives><mml:math><mml:msup><mml:mrow/><mml:mn>3</mml:mn></mml:msup></mml:math><inline-graphic xlink:href=\"wang-ieq5-3188763.gif\"/></alternatives></inline-formula>Outlier</i> can considerably outperform state-of-the-art counterparts by 10%-30% AUROC. Demo codes are available at <uri xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">https://github.com/demonzyj56/E3Outlier</uri> .", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "iNQUeB_cFq6", "year": null, "venue": "Bulletin of the EATCS2013", "pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/206/200", "forum_link": "https://openreview.net/forum?id=iNQUeB_cFq6", "arxiv_id": null, "doi": null }
{ "title": "When is A=B?", "authors": [ "Anja Gruenheid", "Donald Kossmann", "Besmira Nushi" ], "abstract": "Most database operations such as sorting, grouping and computing joins are based on comparisons between two values. Traditional algorithms assume that machines do not make mistakes. This assumption holds in traditional computing environments; however, it does not hold in several new emerging computing environments. In this write-up, we argue the need for new resilient algorithms that take into account that the result of a comparison might be wrong. The goal is to design algorithms that have low cost (make few comparisons) yet produce high-quality results in the presence of errors.", "keywords": [], "raw_extracted_content": "TheLogic in Computer Science Column\nby\nYuriGurevich\nMicrosoft Research\nOne Microsoft Way, Redmond WA 98052, USA\[email protected]\nWhen is A=B?\u0003\nAnja Gruünheid\nETH Zürich\[email protected] Kossmann\nETH Zürich\[email protected] Nushi\nETH Zürich\[email protected]\nAbstract\nMost database operations such as sorting, grouping and computing joins\nare based on comparisons between two values. Traditional algorithms as-\nsume that machines do not make mistakes. This assumption holds in tra-\nditional computing environments; however, it does not hold in several new\nemerging computing environments. In this write-up, we argue the need for\nnew resilient algorithms that take into account that the result of a compar-\nison might be wrong. The goal is to design algorithms that have low cost\n(make few comparisons) yet produce high-quality results in the presence of\nerrors.\n\u0003This write-up is based on a talk given at the University of Washington and Mi-\ncrosoft Research, Redmond, in August 2013. The slides of that talk are online at\nhttp://systems.ethz.ch/talks .\n1 Introduction\nWhen we think about computers, we typically assume that they are dumb and\nmake no mistakes. Our software methodology, complexity theory, and algorithmic\ndesign are based on these two assumptions. What happens if we drop one of these\nassumptions? What happens if computers start making mistakes occasionally;\neven simple mistakes such as getting a comparison between two integers wrong?\nWill we need new algorithms or are the existing algorithms good enough?\nThere are a number of research trends that make it worthwhile to think about\nerror-prone computers. The first trend is the emergence of crowdsourcing and\nthe development of hybrid systems that involve machines and humans to compute\ntasks that neither machines nor humans are capable of computing alone. [4] gives\nan overview of such systems. Since these systems rely on human input, some of\nthe computations carried out by these systems may be error-prone and algorithms\nthat are designed for these systems need to take this fact into account.\nA second trend is the development of new, low-energy processors that trade\npower for accuracy. That is, these processors might occasionally get an opera-\ntion wrong in exchange for much lower power consumption. Examples for such\ndesigns are [1].\nThird, with the advent of Big Data technologies, we are automating an in-\ncreasing number of tasks based on previous experience. Recommendation sys-\ntems such as those deployed by Amazon as part of their online shop improve with\nan increasing amount of data. 
Putting it di \u000berently, these systems might make\npoor recommendations if only little data is available.\nBased on these observations, we believe that it is worthwhile to revisit exist-\ning algorithms and start thinking about how to design algorithms for computer\nsystems that occasionally do make errors. It turns out that an algorithm that is\noptimal in the traditional (error-free) computational model may perform poorly in\nthe presence of error. As an example, this paper reports on some simple obser-\nvations that we made when studying QuickSort. Furthermore, this paper reports\non some observations on how to group objects in a robust way if the machine oc-\ncasionally misclassifies two objects. These two examples indicate that we might\nhave to rethink complexity theory and algorithm design. As of now, the results of\nthis paper are anecdotal, and we have not been yet able to develop a new theory.\nThe main purpose of this paper is to raise the issue.\nThe remainder of this paper is organized as follows: Section 2 studies sort-\ning. Section 3 gives an example of how errors impact algorithms for grouping or\nclustering objects. Section 4 contains conclusions and related work.\n2 Example 1: QuickSort\nTo show how the presence of errors may impact algorithm design, let us start\nwith a discussion of QuickSort. [7] gives a more general discussion of sorting\nalgorithms in the presence of errors. Here and in the remainder of this paper,\nwe assume a computational model in which the computer system might make an\nerror when executing a comparison; however, the logic of the algorithm is exe-\ncuted correctly. Furthermore, we assume that comparisons are the most expensive\noperation. This computational model matches nicely hybrid systems in which\ncomparisons are crowdsourced (e.g., [10]).\nQuickSort is generally perceived as one of the best algorithms for sorting.\nHowever, what makes QuickSort great for traditional, error-free computing sce-\nnarios, hurts QuickSort in the presence of mistakes. The following example shows\nwhy. The task is to sort the following sequence of numbers:\n7;24;2;13;51\nLet us assume that QuickSort chooses 7 as the pivot element of the first partition-\ning phase and let us furthermore assume that the machine gets all comparisons\nright in the first partitioning, except for the comparison 7 <51. As a consequence,\nthe result of the first iteration of QuickSort are the following two partitions:\n2;51\n24;13\nEven if the machine is perfect and makes no further mistakes, the best possible\noutcome to sort the sequence of numbers is:\n2;51;7;13;24\nThe key observation is that one wrong comparison (misclassifying 7 <51) re-\nsulted in three errors in the final result (misclassifying 13 <51 and 24 <51 in\naddition to 7 <51). The reason is that the QuickSort algorithm aggressively ex-\nploits the transitivity of the <relation so that errors propagate. There are many\ndi\u000berent notions of error and the most appropriate definition depends on the utility\nfunction of the application. We use the number of misclassified comparisons in\nthe final result here and in [7] because it is easy to formalize and it is a metric that\nis highly relevant for many applications that involve sorting or ranking data.\nIt turns out that it is di \u000ecult to fix QuickSort. The most natural way to improve\nthe quality of the result is to avoid misclassifications by repeating the computa-\ntion. 
That is, recomputing 7 <51 several times and then do a majority vote or\naccept based on a theshold. That will increase the number of comparisons by a\nTransactionId Customer Purchase\n1 Jane $ 1000\n2 Bob $ 500\n3 Jane $ 100\n4 Jane $ 50\nTable 1: Example Transactions\nconstant factor (i.e., the number of times each comparison is made) so that Quick-\nSort continues to be in the O(n\u0003log(n)) complexity class. The problem is that even\nwith a high number of attempts, the probability of a misclassification is not zero.\nSo, we can never expect perfection with QuickSort. Also, the impact of a wrong\ncomparison grows with the size of the sequence in our particular error model that\ncounts the misclassifications in the final result: In the worst case, it is n=2 with\nnthe length of the sequence. The question then is how to best invest additional\ncomparisons and whether new algorithms are more appropriate than traditional\nalgorithms to achieve high quality for lower cost. [7], for instance, shows that\niteratively running BubbleSort might be a better a way to invest additional com-\nputation for better quality. That is, do an intial sorting with QuickSort and then\nrun BubbleSort once or several times on the result to improve the quality of the\nresult, thereby exploiting that BubbleSort has O(n) complexity if the data is sorted\nalready and the a \u000bects of wrong comparisons are always local in BubbleSort.\n3 Example 2: Grouping\n3.1 Vote Graphs\nAs a second example of how the cost /quality trade-o \u000bof error-prone computer\nsystems impacts algorithm design, consider the list of transactions of Table 1.\nThe task is to compute the total purchase of each customer; i.e., $ 1150 for Jane\nand $ 500 for Bob. With SQL, this task can be specified using a simple GROUP\nBY clause. Depending on the number of customers, the number of transactions,\nand the skew in the distribution of transactions to customers, modern database sys-\ntems choose one of three alternative ways to compute this grouping of transactions\nby customer: sorting, hashing, or nested-loops. For the purpose of this example,\nwe will use the nested-loop variant and discuss alternative ways to compare the\ncustomer fields of two transactions in order to decide whether they belong to the\nsame customer. Note that hashing and sorting are often more e \u000ecient variants, but\nthey su \u000ber from the same kind of error propagation as the QuickSort algorithm in\nthe previous section.\n(a) No Conflict (b) With Conflict\nFigure 1: Example V ote Graphs for Table 1\nSimilarly to Section 2, we assume that the comparison of two customer names\nis the only error-prone and costly operation. Thus, the goal is to minimize the\nnumber of such comparisons and minimize the impact of mistakes made when\ncomputing these comparisons. Figure 1 illustrates one possible approach to do\nthat. It depicts two example Vote Graphs . Such V ote Graphs capture the results of\nall comparisons carried out between the customer names of the four transactions.\nThe nodes of a V ote Graph are transactions. Edges of a V ote Graph represent\nthe results of comparing the customer names of two transactions. 
The weight of\nan edge indicates how often the comparison returned that results; the sign of the\nweights of an edge indicates the result of the comparison (true or false).\nThe V ote Graph of Figure 1a, for instance, indicates that we compared three\ntimes the customer names of Transactions 1 and 2 (i.e., “Jane =Bob”) and all\nthree times the answer was “false” (which happens to be the correct answer in this\nexample). Furthermore, it shows that all seven comparisons between the customer\nnames of Transactions 1 and 3 were positive (which happens to be correct, too, in\nthis example).\n3.2 Decision Functions\nIf minimizing comparisons between two customer names is our main objective\n(e.g., because they need to be crowdsourced or need to be executed repeatedly on\nan error-prone machine), then it makes sense to exploit the transitivity of the =\nrelation. So, if the grouping algorithm asks whether Transactions 1 and 4 belong\nto the same customer in Figure 1a, the answer is true and can be inferred from\nFigure 1a without actually looking at the customer names of these transactions.\nTransitivity and anti-transitivity can be applied in a straightforward way in the\nexample of Figure 1a. The situation becomes trickier in Figure 1b because in that\nV ote Graph there is a conflicting edge: The negative edge between Transactions 1\nand 4 conflicts with the positive edges, “1-3” and “1-4”.\nIn the presence of error-prone computations, conflicts in the V ote Graph are\ninevitable. Therefore, it is important to tolerate these errors and make decisions\neven in conflict situations. In the example of Figure 1b, it is evident that the\nsystem should conclude that the same customer carried out Transactions 1 and 4\nbecause the weight of the edges “1-3” and “3-4” is much higher than the weight of\nthe negative edge “1-4”. In general, we propose the use of a decision function that\ngiven a V ote Graph, determines whether two nodes are the same, not the same, or\nif additional comparisons are needed in order to make the decision.\nThere are many decisions functions conceivable and [8] contains a more de-\ntailed discussion of which properties a decision function should have. For in-\nstance, a decision function that always says that two nodes are the same is obvi-\nously not good because it will result in poor quality . Likewise, a decision function\nthat always says “I do not know” is not good because it will result in high costas it\nwould induce additional comparisons. For the discussion in this paper, let us con-\nsider a decision function that is inspired by work on combining scoring functions\n[5] and that we call the MinMax function.\nThe MinMax function considers all positive and negative paths between two\nnodes. A positive path is a path that involves only edges with weight greater than\n0. A negative path is a path that has exactly one negative edge. Paths with more\nthan one negative edge are ignored because neither equality nor inequality can be\ninferred from them. For each path, the MinMax function computes a score: For\na positive path, the score is the minimum of the weights of the edges of the path.\nFor a negative path, the score is defined as the minimum of the absolute weights\nof the edges (i.e., the weight of the only negative edge is multiplied by -1 for this\npurpose). The intuition behind this scoring function is that a path is as strong as\nits weakest link. 
Another way to interpret the minimum is that it implements a conjunction (i.e., ∧) along the path, thereby interpreting each edge as a predicate. Continuing the example of Figure 1b, the score of the positive path '1-3-4' is 5, while the score for the negative path '1-4' is 1.

After computing the scores for all positive and negative paths, the MinMax decision function aggregates these scores into a single positive score, pScore, and a single negative score, nScore. pScore is the maximum of the scores of all positive paths; if there is no positive path, then pScore = 0. Analogously, nScore is the maximum of the scores of all negative paths; if there is no negative path, then nScore = 0. These values represent the maximum impact that a positive respectively negative path can have within an entity.

Finally, the MinMax function uses a threshold q in order to form a final decision based on the positive and negative scores; e.g., q = 3. That is, if the positive score is 3 or more higher than the negative score, then the MinMax function decides that the two nodes are the same. More formally, the decision part of MinMax is defined as follows:

\[
f(r_1, r_2) =
\begin{cases}
\text{Yes} & \text{if } pScore(r_1, r_2) - nScore(r_1, r_2) \geq q,\\
\text{No} & \text{if } nScore(r_1, r_2) - pScore(r_1, r_2) \geq q,\\
\text{Do-not-know} & \text{otherwise.}
\end{cases}
\]

Figure 2: Interesting MinMax Example

3.3 Observations

[8] contains a full discussion of this grouping/clustering use case under uncertainty, with a series of experiments. The important observation and conclusion of [8] is that maintaining a Vote Graph and doing inference with the MinMax function is much better than doing pairwise comparisons, in terms of both quality and cost, for computing any database operation that is based on equality (e.g., joins, grouping, or clustering). In terms of cost, it is better because of its inference capability; in terms of quality, it is better because it detects inconsistencies and tries to keep the whole graph consistent. The designers of traditional database systems would never consider keeping such a Vote Graph because in traditional computing environments it is always cheaper (and as reliable) to recompute a comparison than to infer its result from a Vote Graph.

[8] discusses some of the properties of the MinMax decision function. It turns out that it is not transitive; an example can be seen in Figure 2 with a threshold of 3. In that example, MinMax rules that "X = Y" (pScore = 3, nScore = 0) and "Y = Z" (pScore = 5, nScore = 2), but it rules that "X = Z" is unknown (pScore = 3, nScore = 2). There are many conceivable decision functions, many of which indeed are transitive. For instance, it would be possible to define a decision function by applying the MinCuts algorithm on every instance of the Vote Graph (i.e., after computing every comparison). This decision function would indeed be transitive, but its implementation would have high computational cost. [8] proposes the MinMax function because it can be implemented in a highly efficient way.

For the purpose of designing good and robust algorithms for error-prone computer systems, however, we would like to make another important, somewhat surprising observation. Going back to Figure 2 and using the MinMax function, the best way to conclude that "X = Z" is not by comparing "X = Z" directly. Doing so would require, in the best case, five calls to the comparison function; a compact sketch of the whole decision procedure is given below.
Instead,\ninvesting into the “Y =Z” edge is more promising: In the best case, two compar-\nisons that confirm that indeed “Y =Z” are su \u000ecient to finally conclude with the\nMinMax function that “X =Z”.\n4 Conclusion and Related Work\nThe two examples showed some phenomena that may occur if computer systems\nmake mistakes. The examples show that an optimal algorithm for the traditonal\n(error-free) computing model might result in poor quality when run on error-prone\ncomputer systems. It is an open question of what the optimal algorithms to sort\na sequence of numbers and to group /cluster objects in the presence of errors are.\nThe main message that we would like to illustrate with these examples is that error\nshould be part of the equation. That is, we need to do two things:\n\u000fWe need to design algorithms that scale (with the problem size) and tolerate\nerrors. (Traditional algorithms were designed only to scale.)\n\u000fWe need to optimize for both costandquality . (Traditional algorithms were\ndesigned to minimize cost only.)\nIn other words, algorithm designers face two kinds of optimizations:\n\u000fGiven a problem (e.g., sorting), a problem instance (e.g., 1000 integers), an\nerror model (e.g., 1% of the comparisons are wrong uniformly) and a budget\n(e.g., 1 million comparisons), maximize the quality of the result.\n\u000fGiven a problem, a problem instance, an error model, and quality require-\nments, minimize the cost.\nAt the moment, we do not even have good abstractions to characterize computa-\ntional error and result quality.\nThe examples used in this paper were derived from typical database operators\n(i.e., sorting, joins, and grouping). Recently, there have a number of papers in the\ndatabase community that studied how to enhance database with crowdsourcing, a\nspecial form of uncertain computation; e.g., [9, 11, 6, 3] to name just a few. It turns\nout that the topic of error-prone computing has been studied in other communi-\nties as well and not only in the context of crowdsourcing. For instance, Busse and\nBuhmann studied the information gain of a comparison in alternative sorting algo-\nrithms [2]. Schulze developed a method to carry out elections, called the Schulze\nmethod, which is similar to the MinMax decision function [12]. Furthermore, de-\nsigners of distributed systems have been developing fault-tolerant algorithms for\ndecades. The fact that several communities are looking into fault-tolerant com-\nputation makes it even more important to develop a theory that incorporates error\nand result quality in algorithm design and complexity.\nReferences\n[1] L. Avinash, K. K. Muntimadugu, C. C. Enz, R. M. Karp, K. V . Palem, and C. Piguet.\nAlgorithmic methodologies for ultra-e \u000ecient inexact architectures for sustaining\ntechnology scaling. In J. Feo, P. Faraboschi, and O. Villa, editors, Conf. Computing\nFrontiers , pages 3–12. ACM, 2012.\n[2] L. M. Busse, M. H. Chehreghani, and J. M. Buhmann. The information content in\nsorting algorithms. In ISIT, pages 2746–2750. IEEE, 2012.\n[3] S. B. Davidson, S. Khanna, T. Milo, and S. Roy. Using the crowd for top-k and\ngroup-by queries. In W.-C. Tan, G. Guerrini, B. Catania, and A. Gounaris, editors,\nICDT , pages 225–236. ACM, 2013.\n[4] A. Doan, R. Ramakrishnan, and A. Y . Halevy. Crowdsourcing systems on the world-\nwide web. Commun. ACM , 54(4):86–96, 2011.\n[5] R. Fagin and E. L. Wimmers. A formula for incorporating weights into scoring rules.\nTheor. Comput. Sci. , 239(2):309–338, 2000.\n[6] M. J. 
Franklin, D. Kossmann, T. Kraska, S. Ramesh, and R. Xin. Crowddb: answer-\ning queries with crowdsourcing. In Sellis et al. [13], pages 61–72.\n[7] A. Gruenheid and D. Kossmann. Cost and quality trade-o \u000bs in crowdsourcing. In\nR. Cheng, A. D. Sarma, S. Maniu, and P. Senellart, editors, DBCrowd , volume 1025\nofCEUR Workshop Proceedings , pages 43–46. CEUR-WS.org, 2013.\n[8] A. Gruenheid, D. Kossmann, S. Ramesh, and F. Widmer. Crowdsourcing entity\nresolution: When is a =b? Technical Report No. 785, Department of Computer\nScience, ETH Zurich, Sep 2012.\n[9] A. Marcus, E. Wu, D. R. Karger, S. Madden, and R. C. Miller. Demonstration of\nqurk: a query processor for humanoperators. In Sellis et al. [13], pages 1315–1318.\n[10] A. Marcus, E. Wu, D. R. Karger, S. Madden, and R. C. Miller. Human-powered\nsorts and joins. PVLDB , 5(1):13–24, 2011.\n[11] H. Park, R. Pang, A. G. Parameswaran, H. Garcia-Molina, N. Polyzotis, and\nJ. Widom. Deco: A system for declarative crowdsourcing. PVLDB , 5(12):1990–\n1993, 2012.\n[12] M. Schulze. A new monotonic, clone-independent, reversal symmetric, and\ncondorcet-consistent single-winner election method. Social Choice and Welfare ,\n36(2):267–303, 2011.\n[13] T. K. Sellis, R. J. Miller, A. Kementsietsidis, and Y . Velegrakis, editors. Proceedings\nof the ACM SIGMOD International Conference on Management of Data, SIGMOD\n2011, Athens, Greece, June 12-16, 2011 . ACM, 2011.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "6g6Y0DGvRk", "year": null, "venue": "EAIS 2020", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=6g6Y0DGvRk", "arxiv_id": null, "doi": null }
{ "title": "GUapp: A Conversational Agent for Job Recommendation for the Italian Public Administration", "authors": [ "Vito Bellini", "Giovanni Maria Biancofiore", "Tommaso Di Noia", "Eugenio Di Sciascio", "Fedelucio Narducci", "Claudio Pomo" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "vXvw32grMPe", "year": null, "venue": "EAIS 2018", "pdf_link": "https://ieeexplore.ieee.org/iel7/8388184/8397169/08397177.pdf", "forum_link": "https://openreview.net/forum?id=vXvw32grMPe", "arxiv_id": null, "doi": null }
{ "title": "Deep reinforcement learning for frontal view person shooting using drones", "authors": [ "Nikolaos Passalis", "Anastasios Tefas" ], "abstract": "Unmanned Aerial Vehicles (UAVs), also known as drones, are increasingly used for a wide variety of novel tasks, including drone-based cinematography. However, flying drones in such setting requires the coordination of several people, increasing the cost of using drones for aerial cinematography and limiting the shooting flexibility by putting a significant cognitive load on the director and drone/camera operators. To overcome some of these limitation, this paper proposes a deep reinforcement learning (RL) method for performing autonomous frontal view shooting. To this end, a realistic simulation environment is developed, which ensures that the learned agent can be directly deployed on a drone. Then, a deep RL algorithm, tailored to the needs of the specific application, is derived building upon the well known deep Q-learning approach. The effectiveness of the proposed technique is experimentally demonstrated using several quantitative and qualitative experiments.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "yvi2w_wGTOY", "year": null, "venue": "EAIS 2012", "pdf_link": "https://ieeexplore.ieee.org/iel5/6225463/6232786/06232798.pdf", "forum_link": "https://openreview.net/forum?id=yvi2w_wGTOY", "arxiv_id": null, "doi": null }
{ "title": "Online learning with kernels in classification and regression", "authors": [ "Guoqi Li", "Guangshe Zhao" ], "abstract": "New optimization models and algorithms for online learning with kernels (OLK) in classification and regression are proposed in a Reproducing Kernel Hilbert Space (RKHS) by solving a constrained optimization model. The “forgetting” factor in the model makes it possible that the memory requirement of the algorithm can be bounded as the learning process continues. The applications of the proposed OLK algorithms in classification and regression show their effectiveness in comparing with the state of art algorithms.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "0KlhaP9dy-y", "year": null, "venue": "EAIS 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7361817/7368765/07368781.pdf", "forum_link": "https://openreview.net/forum?id=0KlhaP9dy-y", "arxiv_id": null, "doi": null }
{ "title": "An entropy-based method for estimating demographic trends", "authors": [ "Guang-She Zhao", "Yi Xu", "Guoqi Li", "Zhao-Xu Yang" ], "abstract": "In this paper, an entropy-based method is proposed to forecast the demographical changes of countries. We formulate the estimation of future demographical profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of age distribution is increasing in time. The procedure of the proposed method involves three stages, namely: 1) Prediction of the age distribution of a country's population based on an “age-structured population model”; 2) Estimation the age distribution of each individual household size with an entropy-based formulation based on an “individual household size model”; and 3) Estimation the number of each household size based on a “total household size model”. The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated by feeding real world data, and it is general and versatile enough to be extended to other time dependent demographic variables.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "DsrhyzVWRFl", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122753.pdf", "forum_link": "https://openreview.net/forum?id=DsrhyzVWRFl", "arxiv_id": null, "doi": null }
{ "title": "Deep Learning-Based Adaptive Image Compression System for a Real-World Scenario", "authors": [ "Vito Walter Anelli", "Yashar Deldjoo", "Tommaso Di Noia", "Daniele Malitesta" ], "abstract": "Deep Learning-based (DL) image compression has shown prominent results compared to standard image compression techniques like JPEG, JPEG2000, BPG and WebP. Nevertheless, neither DL nor standard techniques generally can cope with critical real-world scenarios, with stringent performance constraints. In order to explore the nature of this gap, we first introduce an industrial scenario, which contemplates real-time compression of high-resolution images, with strict requirements on a number of quality-performance indicators, namely: the output image quality, the hardware, and the compression complexity. Next, we propose a DL-based image compression model, i.e. a Convolutional Residual Autoencoder (CRAE). In particular, CRAE integrates some structural benefits of a deep neural network, including PReLU activation function and sub-pixel convolution, which have proven to be especially suitable for image compression tasks. We analyze the performance of the proposed CRAE approach by adopting two types of processing: (i) global and, (ii) patch-based processing of image data. To test the models, we exploit a dataset composed of high-resolution images provided by the MERMEC company composed of consecutive images of the railway track captured by a machine vision system called V-CUBE. Furthermore, the company provided strict compression requirements that needed to be met by the developed system. Preliminary results of an ongoing study indicates that the proposed image compression system can meet the requirements by MERMEC with reasonable performance, with a mild advantage observed for full-based CRAE. The obtained outcomes suggests that CRAE can adapt to the specific structure of the given dataset and extracts the salient recurrent patterns inside an image. In summary, this line of research represents the core of the future plug-and-play DL architecture for constrained image compression.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "RkMV-wVUGC", "year": null, "venue": "EAIS 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9787685/9787686/09787753.pdf", "forum_link": "https://openreview.net/forum?id=RkMV-wVUGC", "arxiv_id": null, "doi": null }
{ "title": "Online Monitoring of Stance from Tweets: The case of Green Pass in Italy", "authors": [ "Alessandro Bondielli", "Giuseppe Cancello Tortora", "Pietro Ducange", "Armando Macri", "Francesco Marcelloni", "Alessandro Renda" ], "abstract": "Stance detection on social media has attracted a lot of attention in the last few years, as opinionated posts are an invaluable source of information which can possibly be exploited in dedicated systems. This is especially true in the case of particularly polarizing topics for which there is no clear consensus among population. In this paper, we focus on one of these topics, namely the EU digital COVID certificate (also known as Green Pass), with the objective of uncovering the stance towards it in a specific time period for the Italian Twitter community. To this aim, we first tested some classifiers for determining the most suitable one in terms of performance and complexity for the stance detection problem under consideration. Then, we compared several approaches aimed at counteracting the occurrence of concept drift, i.e., that phenomenon for which the characteristics of the dataset vary over time, possibly resulting in a degradation of classification accuracy. Our experimental analysis suggests that updating the classifier during the stance monitoring campaign is crucial for maintaining a satisfactory level of performance. Finally, we deployed our system to monitor the stance on the topic of Green Pass expressed in tweets published from July to December 2021 and to obtain insights about its evolution.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "g3KmAoqLFp79", "year": null, "venue": "EAIS 2013", "pdf_link": "https://ieeexplore.ieee.org/iel7/6588691/6604096/06604112.pdf", "forum_link": "https://openreview.net/forum?id=g3KmAoqLFp79", "arxiv_id": null, "doi": null }
{ "title": "Online identification of complex multi-input-multi-output system based on generic evolving neuro-fuzzy inference system", "authors": [ "Mahardhika Pratama", "Sreenatha G. Anavatti", "Matthew A. Garratt", "Edwin Lughofer" ], "abstract": "Nowadays, unmanned aerial vehicles (UAV) play a noteworthy role in miscellaneous defence and civilian operation. A major facet in the UAV control system is an identification phase feeding the valid and up-to-date information of the system dynamic in order to generate proper adaptive control action to handle various UAV maneuvers. UAV, however, constitutes a complex system possessing a highly non-linear property. Conversely, the learning environment in modeling UAV's dynamic varies overtime and demands online learning scheme encouraging a fully adaptive and evolving algorithm with a mild computational load to settle the task. In contrast, contemporaneous literatures scrutinizing the identification of UAV dynamic yet rely on offline or batched learning procedures. Evolving neuro-fuzzy system (ENFS) where the landmarks are flexible rule base and usable in the time-critical applications offers a promising impetus in the UAV research field, and in particular its identification standpoint. The principle cornerstone is ENFS can commence its learning mechanism from scratch with an empty rule base and very limited expert knowledge. Nonetheless, it can perform automatic knowledge building from streaming data without catastrophic forgetting previous valid knowledge which is alike autonomous mental development of human brain. This paper elaborates the identification of rotary wing UAV based on our incipient ENFS algorithm termed generic evolving neuro-fuzzy system (GENEFIS). In summary, our algorithm can not only trace footprint of the UAV dynamic but also ameliorate the performance of existing ENFS in terms of predictive quality and resultant rule base burden.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "cehY-TOeNV", "year": null, "venue": "EAIS 2017", "pdf_link": "https://ieeexplore.ieee.org/iel7/7947643/7954814/07954836.pdf", "forum_link": "https://openreview.net/forum?id=cehY-TOeNV", "arxiv_id": null, "doi": null }
{ "title": "Incremental rule splitting in generalized evolving fuzzy regression models", "authors": [ "Edwin Lughofer", "Mahardhika Pratama", "Igor Skrjanc" ], "abstract": "We propose an incremental rule splitting concept for generalized fuzzy rules in evolving fuzzy regression models in order to properly react on gradual drifts and to compensate inappropriate settings of rule evolution parameters; both occurrences may lead to oversized rules with untypically large local errors, which also usually affects the global model error. The generalized rules are directly defined in the multi-dimensional feature space through a kernel function, and thus allowing any rotated orientation of their shapes. Our splitting condition is based 1.) on the local error of rules measured in terms of a weighted contribution to the whole model error and 2.) on the size of the rules measured in terms of its volume. Thereby, we use the concept of statistical process control for automatic thresholding, in order to omit two extra parameters. The splitting technique relies on the eigendecompisition of the rule covariance matrix by adequately manipulating the largest eigenvector and eigenvalues in order to retrieve the new centers and contours of the two split rules. Thus, splitting is performed along the main principal component direction of a rule. The splitting concepts are integrated in the generalized smart evolving learning engine (Gen-Smart-EFS) and successfully tested on two real-world application scenarios, engine test benches and rolling mills, the latter including a real-occurring gradual drift (whose position in the data is known). Results show clearly improved error trend lines over time when splitting is applied: reduction of the error by about one third (rolling mills) and one half (engine test benches). In case of rolling mills, three rule splits right after the gradual drift starts were essential for this significant improvement.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "clB2C2MVQUF", "year": null, "venue": "EAIS 2018", "pdf_link": "https://ieeexplore.ieee.org/iel7/8388184/8397169/08397186.pdf", "forum_link": "https://openreview.net/forum?id=clB2C2MVQUF", "arxiv_id": null, "doi": null }
{ "title": "Evolving time-series based prediction models for quality criteria in a multi-stage production process", "authors": [ "Edwin Lughofer", "Robert Pollak", "Pauline Meyer-Heye", "Helmut Zörrer", "Christian Eitzinger", "Jasmin Lehner", "Thomas Radauer", "Mahardhika Pratama" ], "abstract": "We address the problem of predicting product quality for a latter stage in a production process already at an early stage. Thereby, the idea is to use time-series of process values, recorded during the on-line production process of the early stage and containing possible system dynamics and variations according to parameter settings or different environmental conditions, as input to predict the final quality criteria in the latter stage. We apply a non-linear partial least squares (PLS) variant for reducing the high input dimensionality of time-series batch-process problems, by combining PLS with generalized Takagi-Sugeno fuzzy systems, a new extended variant of classical TS fuzzy system (thus termed as PLS-Fuzzy). This combination opens the possibility to resolve non-linearities in the PLS score space without requiring extra pre-tuning parameters (as is the case in other non-linear PLS variants). The models are trained by an evolving and iterative vector quantization approach to find the optimal number of rules and their ideal positioning and shape, combined with a fuzzily weighted version of elastic net regu-larization for robust estimation of the consequent parameters. The adaptation algorithm of the models (termed as IPLS-GEFS) includes an on-the-fly evolving rule learning concept (GEFS), a forgetting strategy with dynamically varying forgetting factor in case of drifts (to increase flexibility by outweighing older samples) as well as a new variant for an incremental single-pass update of the latent variable space (IPLS). The latter can be seen as a new variant for incremental dimension reduction and subspace update and is necessary when the covariance characteristics between input and output changes. Results on a real-world data set from microfluidic chip production show a comparable performance of PLS-Fuzzy with random forests, extreme learning machines and deep learning with MLP neural networks, achieving low prediction errors (below 10%) with low model complexity. Updating the models with new on-line data — only achievable with our method, as the others are batch off-line methods (with mostly slow re-training phases) — decreased the model errors, at most when including incremental latent variable space update.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "5HPVzt2IXR", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122772.pdf", "forum_link": "https://openreview.net/forum?id=5HPVzt2IXR", "arxiv_id": null, "doi": null }
{ "title": "Online Sequential Ensembling of Fuzzy Systems", "authors": [ "Edwin Lughofer", "Mahardhika Pratama" ], "abstract": "Evolving fuzzy systems (EFS) have enjoyed a wide attraction in the community to handle learning from data streams in an incremental, single-pass and transparent manner. The main concentration so far lied in the development of approaches for single EFS models. Forgetting mechanisms have been used to increase their flexibility, especially to adapt quickly to changing situations such as drifting data distributions. These, however, require a forgetting factor steering the degree of timely outweighing older learned concepts. Furthermore, as being pure supervised incremental methods, they typically assume that target reference values are immediately available without any delays. In this paper, we propose a new concept of learning fuzzy systems from data streams, which we call sequential ensembling. It is able to model the recent dependencies in streams on a chunk-wise basis: for each new incoming chunk, a new fuzzy model is trained from scratch and added to the ensemble (of fuzzy systems trained before). The point is that a new chunk can be used for establishing a new fuzzy model as soon as the target values are available. This induces i.) flexibility for respecting the actual system delay in receiving target values (by setting the chunksize adequately) and ii.) fast drift handling possibilities. The latter are realized with specific prediction techniques for new data chunks based on the sequential ensemble members trained so far over time, for which we propose four different variants. These include specific spatial and timely uncertainty concepts. Finally, in order to cope with large-scale and (theoretically) infinite data streams within a reasonable amount of prediction time, we demonstrate a concept for pruning past ensemble members. The results based on two data streams show significantly improved performance compared to single EFS models in terms of a better convergence of the accumulated chunk-wise ahead prediction error trends over time. This is especially true in the case of abrupt and gradual drifts appearing in the target concept, where the sequential ensemble (especially due to recent weak members) is able to react more flexibly and quickly than (more heavy) single EFS models. In the case of input space drifts and new operating conditions, the more advanced prediction schemes, which include uncertainty weighing concepts, can significantly outperform standard averaging over all members' outputs.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "4_n1n-K9xq", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122752.pdf", "forum_link": "https://openreview.net/forum?id=4_n1n-K9xq", "arxiv_id": null, "doi": null }
{ "title": "Scalable Teacher-Forcing Networks under Spark Environments for Large-Scale Streaming Problems", "authors": [ "Choiru Za'in", "Andri Ashfahani", "Mahardhika Pratama", "Edwin Lughofer", "Eric Pardede" ], "abstract": "Large-scale data streams remains an open issue in the existing literature. It features a never ending information flow, mostly going beyond the capacity of a single processing node. Nonetheless, algorithmic development of large-scale streaming algorithms under distributed platforms faces major challenge due to the scalability issue. The network complexity exponentially grows with the increase of data batches, leading to an accuracy loss if the model fusion phase is not properly designed. A largescale streaming algorithm, namely Scalable Teacher Forcing Network (ScatterNet), is proposed here. ScatterNet has an elastic structure to handle the concept drift in the local scale within the data batch or in the global scale across batches. It is built upon the teacher forcing concept providing a short-term memory aptitude. ScatterNet features the data-free model fusion approach which consists of the zero-shot merging mechanism and the online model selection. Our numerical study demonstrates the moderate improvement of prediction accuracy by ScatterNet while gaining competitive advantage in terms of the execution time compared to its counterpart.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "L6KmftF7OGP", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122698.pdf", "forum_link": "https://openreview.net/forum?id=L6KmftF7OGP", "arxiv_id": null, "doi": null }
{ "title": "Emotions Understanding Model from Spoken Language using Deep Neural Networks and Mel-Frequency Cepstral Coefficients", "authors": [ "Marco Giuseppe de Pinto", "Marco Polignano", "Pasquale Lops", "Giovanni Semeraro" ], "abstract": "The ability to understand people through spoken language is a skill that many human beings take for granted. On the contrary, the same task is not as easy for machines, as consequences of a large number of variables which vary the speaking sound wave while people are talking to each other. A sub-task of speeches understanding is about the detection of the emotions elicited by the speaker while talking, and this is the main focus of our contribution. In particular, we are presenting a classification model of emotions elicited by speeches based on deep neural networks (CNNs). For the purpose, we focused on the audio recordings available in the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset. The model has been trained to classify eight different emotions (neutral, calm, happy, sad, angry, fearful, disgust, surprise) which correspond to the ones proposed by Ekman plus the neutral and calm ones. We considered as evaluation metric the F1 score, obtaining a weighted average of 0.91 on the test set and the best performances on the \"Angry\" class with a score of 0.95. Our worst results have been observed for the sad class with a score of 0.87 that is nevertheless better than the state-of-the-art. In order to support future development and the replicability of results, the source code of the proposed model is available on the following GitHub repository: https://github.com/marcogdepinto/Emotion-Classification-Ravdess.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Actw4Lc3lZZ", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122760.pdf", "forum_link": "https://openreview.net/forum?id=Actw4Lc3lZZ", "arxiv_id": null, "doi": null }
{ "title": "Training in a Virtual Learning Environment: A Process Mining Approach", "authors": [ "Annalisa Appice", "Pasquale Ardimento", "Donato Malerba", "Giuseppe Modugno", "Diego Marra", "Marco Mottola" ], "abstract": "Upgrading the workers' skills through adequate training is highly demanded in modern companies seeking for overall productivity and competitiveness. Currently the spread of technology-based training has fostered the development of new training approaches appositely designed for workers. In this paper, we introduce a new training platform that uses a virtual learning environment that combines virtual reality (VR) and 360 degree technologies. In particular, we focus on the problem of predicting the outcome of the workers' training based on how the workers have behaved in the virtual learning environment during their training session. To this aim, we formulate a process mining methodology that combines features engineering and classification algorithms. The effectiveness of the proposed methodology has been validated against a real use case.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "puxg1Dp6DD", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122749.pdf", "forum_link": "https://openreview.net/forum?id=puxg1Dp6DD", "arxiv_id": null, "doi": null }
{ "title": "Saliency Detection for Hyperspectral Images via Sparse-Non Negative-Matrix-Factorization and novel Distance Measures*", "authors": [ "Antonella Falini", "Graziano Castellano", "Cristiano Tamborrino", "Francesca Mazzia", "Rosa Maria Mininni", "Annalisa Appice", "Donato Malerba" ], "abstract": "Saliency detection is a very active area in computer vision. When hyperspectral images are analyzed, a big amount of data need to be processed. Hence, dimensionality reduction techniques are used to highlight salient pixels allowing us to neglect redundant features. We propose a bottom-up approach based on two main ingredients: sparse non negative matrix factorization (SNMF) and spatial and spectral distances between the input image and the reconstructed one. In particular, we use both well known and novel distance functions. The method is validated on both hyperspectral and multispectral images.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ig8SrPL-_H", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122780.pdf", "forum_link": "https://openreview.net/forum?id=ig8SrPL-_H", "arxiv_id": null, "doi": null }
{ "title": "Analyzing Dynamic Social Media Data via Random Projection - A New Challenge for Stream Classifiers", "authors": [ "Moritz Heusinger", "Christoph Raab", "Frank-Michael Schleif" ], "abstract": "In recent years social media became an important part of everyday life for many people. A big challenge of social media is, to find posts, which are interesting for the user. Many social networks like Twitter handle this problem with so called hashtags. A user can label his own Tweet (post) with a hashtag, while other users can search for posts containing a specified hashtag. But what about finding posts which are not labeled by the creator? We provide a way of completing hashtags for unlabeled posts using classification on a novel real world Twitter data stream. New posts will be created every second, thus this context fits perfectly for non-stationary data analysis. Our goal is to show, how labels (hashtags) of social media posts can be predicted by streaming classifiers. In particular we employ Random Projection (RP) as a preprocessing step in calculating streaming models. Also we provide a novel real world data set for streaming analysis called NSDQ with a comprehensive data description. We show that this dataset is a real challenge for stateof-the-art stream classifiers. While RP has been widely used and evaluated in stationary data analysis scenarios, non-stationary environments are not well analyzed. In this paper we provide a use case of RP on real world streaming data, especially on NSDQ dataset. We discuss why RP can be used in this scenario and how it can handle stream specific situations like concept drift. We also provide experiments with RP on streaming data, using state-of-the-art streaming classifiers like Adaptive Random Forest and concept drift detectors.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ZHr7b3VDyeO", "year": null, "venue": "EAIS 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9787685/9787686/09787693.pdf", "forum_link": "https://openreview.net/forum?id=ZHr7b3VDyeO", "arxiv_id": null, "doi": null }
{ "title": "Adaptive Classification of Occluded Facial Expressions of Affective States", "authors": [ "Jordan Vice", "Masood Mehmood Khan", "Iain Murray", "Svetlana N. Yanushkevich" ], "abstract": "Internationally, the recent pandemic caused severe social changes forcing people to adopt new practices in their daily lives. One of these changes requires people to wear masks in public spaces to mitigate the spread of viral diseases. Affective state assessment (ASA) systems that rely on facial expression analysis become impaired and less effective due to the presence of visual occlusions caused by wearing masks. Therefore, ASA systems need to be future-proofed and equipped with adaptive technologies to be able to analyze and assess occluded facial expressions, particularly in the presence of masks. This paper presents an adaptive approach for classifying occluded facial expressions when human faces are partially covered with masks. We deployed an unsupervised, cosine similarity-based clustering approach exploiting the continuous nature of the extended Cohn-Kanade (CK+) dataset. The cosine similarity-based clustering resulted in twenty-one micro-expression clusters that describe minor variations of human facial expressions. Linear discriminant analysis was used to project all clusters onto lower-dimensional discriminant feature spaces, allowing for binary occlusion classification and the dynamic assessment of affective states. During the validation stage, we observed 100% accuracy when classifying faces with features extracted from the lower part of the occluded faces (occlusion detection). We observed 76.11% facial expression classification accuracy when features were gathered from the uncovered full-faces and 73.63% classification accuracy when classifying upper-facial expressions - applied when the lower part of the face is occluded. The presented system promises an improvement to visual inspection systems through an adaptive occlusion detection and facial expression classification framework.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "njK_zKi6Q_d", "year": null, "venue": "EAIS 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9787685/9787686/09787730.pdf", "forum_link": "https://openreview.net/forum?id=njK_zKi6Q_d", "arxiv_id": null, "doi": null }
{ "title": "Dynamic Hybrid Learning for Improving Facial Expression Classifier Reliability", "authors": [ "Jordan Vice", "Masood Mehmood Khan", "Tele Tan", "Svetlana N. Yanushkevich" ], "abstract": "Independent, discrete models like Paul Ekman’s six basic emotions model are widely used in affective state assessment (ASA) and facial expression classification. However, the continuous and dynamic nature of human expressions often needs to be considered for accurately assessing facial expressions of affective states. This paper investigates how mutual information-carrying continuous models can be extracted and used in continuous and dynamic facial expression classification systems for improving the efficacy and reliability of ASA systems. A novel, hybrid learning model that projects continuous data onto a multidimensional hyperplane is proposed. Through cosine similarity-based clustering (unsupervised) and classification (supervised) processes, our hybrid approach allows us to transform seven, discrete facial expression models into twenty-one facial expression models that include micro-expressions. The proposed continuous, dynamic classifier was able to achieve greater than 73% accuracy when experimented with Random Forest, Support Vector Machine (SVM) and Neural Network classification architectures. The presented system was validated using the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and the extended Cohn-Kanade (CK+) dataset.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "-3rktfgwN2", "year": null, "venue": "EAIS 2017", "pdf_link": "https://ieeexplore.ieee.org/iel7/7947643/7954814/07954842.pdf", "forum_link": "https://openreview.net/forum?id=-3rktfgwN2", "arxiv_id": null, "doi": null }
{ "title": "Estimation of moving agents density in 2D space based on LSTM neural network", "authors": [ "Marsela Polic", "Ziad Salem", "Karlo Griparic", "Stjepan Bogdan", "Thomas Schmickl" ], "abstract": "As a part of ASSISIbf project, with a final goal of forming a collective adaptive bio-hybrid society of animals and robots, an artificial neural network based on LSTM architecture was designed and trained for bee density estimation. During experiments, the bees are placed inside a plastic arena covered with wax, where they interact with and adapt to specialized static robotic units, CASUs, designed specially for this project. In order to interact with honeybees, the CASUs require the capability i) to produce and perceive the stimuli, i.e., environmental cues, that are relevant to honeybee behaviour, and ii) to sense the honeybees presence. The second requirement is implemented through 6 proximity sensors mounted on the upper part of CASU. In this paper we present estimation of honeybees (moving agents) density in 2D space (experimental arena) that is based on LSTM neural network. When compared to previous work done in this field, experiments demonstrate satisfactory results in estimating sizes of bee groups placed in the arena within a larger scope of outputs. Two different approaches were tested: regression and classification, with classification yielding higher accuracy.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "vPjTJbEdej", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122758.pdf", "forum_link": "https://openreview.net/forum?id=vPjTJbEdej", "arxiv_id": null, "doi": null }
{ "title": "Evaluation of Cognitive Impairment in Pediatric Multiple Sclerosis with Machine Learning: An Exploratory Study of miRNA Expressions", "authors": [ "Gabriella Casalino", "Gennaro Vessio", "Arianna Consiglio" ], "abstract": "Multiple Sclerosis (MS) is a demyelinating autoimmune disease that usually affects young adults; however, recently some symptoms of cognitive impairment have been recognized as early signs of MS onset in pediatric patients (PedMS). The underlying relationships between these two conditions, as well as their molecular markers, have not been fully understood yet. In this work, we analyze microRNAs (miRNAs) expression profiles of PedMS patients with machine learning algorithms in order to create effective models able to detect the presence of cognitive impairment. In particular, we compare three different classification algorithms, fed with features automatically selected by a feature selection strategy. Experimental results show that linear support vector machines achieved the best performance. Moreover, we discuss the importance of ten of the most discriminant automatically selected miRNAs. A graphical analysis of these features highlights the relationships among miRNAs and the two classes the patients belongs to.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lWjku5ZFFJY", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122777.pdf", "forum_link": "https://openreview.net/forum?id=lWjku5ZFFJY", "arxiv_id": null, "doi": null }
{ "title": "Exploiting Categorization of Online News for Profiling City Areas", "authors": [ "Alessandro Bondielli", "Pietro Ducange", "Francesco Marcelloni" ], "abstract": "Profiling city areas, in terms of citizens' behaviour and commercial and social activities, is an interesting issue in the context of smart cities, especially considering a real-time streaming context. Several methods have been proposed in the literature, exploiting different data sources. In this paper, we propose an approach to perform profiling of city areas based on articles of local online newspapers, by exploiting information regarding the text as well as metadata such as geo-localization and tags. In particular, we use tags associated with each article for identifying macro-categories through clustering analysis on tags embeddings. Further, we employ a text categorization model based on SVM to label online a new article, represented as Bag-of-Words, with one of such categories. The categorization approach has been integrated into a framework recently proposed by the authors for profiling city areas exploiting different web sources of data: the online newspapers are monitored continuously, thus producing a news stream to be analysed. We show experiments performed on the city of Rome, considering data from 2014 to 2018. We discuss the results obtained by adopting different classifiers and present that the best classifier, namely an SVM, can achieve an accuracy and an f1-score up to 93% and 79%, respectively.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "51nB9fYocR", "year": null, "venue": "EAIS 2017", "pdf_link": "https://ieeexplore.ieee.org/iel7/7947643/7954814/07954837.pdf", "forum_link": "https://openreview.net/forum?id=51nB9fYocR", "arxiv_id": null, "doi": null }
{ "title": "Self-evolving kernel recursive least squares algorithm for control and prediction", "authors": [ "Zhao-Xu Yang", "Hai-Jun Rong", "Guang-She Zhao", "Jing Yang" ], "abstract": "This paper presents a self-evolving kernel recursive least squares (KRLS) algorithm which implements the modelling of unknown nonlinear systems in reproducing kernel Hilbert spaces (RKHS). The prime motivation of this development is a reformulation of the well known KRLS algorithm which inevitably increases the computational complexity to the cases where data arrive sequentially. The self-evolving KRLS algorithm utilizes the measurement of kernel evaluation and adaptive approximation error to determine the learning system with a structure of a suitable size that involves recruiting and dimension reduction of the kernel vector during the adaptive learning phase without predefining them. This self-evolving procedure allows the algorithm to operate online, often in real time, reducing the computational time and improving the learning performance. This algorithm is finally utilized in the applications of online adaptive control and time series prediction where the system is described as a unknown function by Nonlinear AutoRegressive with Exogenous inputs model. Simulation results from an inverted pendulum system and Time Series Data Library demonstrate the satisfactory performance of the proposed self-evolving KRLS algorithm.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "sHtW2_nGhi", "year": null, "venue": "EAIS 2011", "pdf_link": "https://ieeexplore.ieee.org/iel5/5936949/5945904/05945914.pdf", "forum_link": "https://openreview.net/forum?id=sHtW2_nGhi", "arxiv_id": null, "doi": null }
{ "title": "A Simplified Structure Evolving Method for Fuzzy System structure learning", "authors": [ "Di Wang", "Xiao-Jun Zeng", "John A. Keane" ], "abstract": "This paper proposes a Simplified Structure Evolving Method (SSEM) for Fuzzy Systems, which improves our previous work of Structure Evolving Learning Method for Fuzzy Systems (SELM [1]). SSEM keeps all the advantages of SELM [1] and improve SELM by starting with the simplest fuzzy rule set with only one fuzzy rule (instead of 2 <sup xmlns:mml=\"http://www.w3.org/1998/Math/MathML\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">n</sup> fuzzy rules in SELM) as the starting point. By doing this SSEM is able to select the most efficient partitions and the most efficient attributes as well for system identification. This improvement enables fuzzy systems applicable to high dimensional problems. Benchmark examples with high dimension inputs are given to illustrate the advantages of the proposed algorithm.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "mtZq5xmr5De", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122706.pdf", "forum_link": "https://openreview.net/forum?id=mtZq5xmr5De", "arxiv_id": null, "doi": null }
{ "title": "Double Deep Q Network with In-Parallel Experience Generator", "authors": [ "Vincenzo Dentamaro", "Donato Impedovo", "Giuseppe Pirlo", "Giacomo Abbattista", "Vincenzo Gattulli" ], "abstract": "In this paper, an algorithm, for in-parallel, greedy experience generator (briefly IPE, In Parallel Experiences), has been crafted, and added to the Double Deep Q-Learning algorithm. The algorithm aims to perturbs the weights of the online network, and as results, the network, trying to recover from the perturbed weights, escapes from the local minima. DDQN with IPE takes about the double of time of the previous to compute, but even if it slows down the learning rate in terms of wall clock time, the solution converges faster in terms of number of epochs.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "D8Nt4vgAlta", "year": null, "venue": "EAIS 2020", "pdf_link": "https://ieeexplore.ieee.org/iel7/9119729/9122692/09122751.pdf", "forum_link": "https://openreview.net/forum?id=D8Nt4vgAlta", "arxiv_id": null, "doi": null }
{ "title": "Vertex Feature Classification (VFC)", "authors": [ "Vincenzo Dentamaro", "Donato Impedovo", "Giuseppe Pirlo", "Alessandro Massaro" ], "abstract": "In this paper a new algorithm for multi-class classification is presented. The algorithm, that is named Vertex Feature Classification (VFC), maps input data sets into an ad-hoc built space, called \"simplex space\" in order to perform geometric classification. More precisely, each class is first associated to a specific vertex of the polytope computed in the feature space. Successively, pattern classification is performed according to the geometric arrangement of patterns in a higher dimensional feature space. The experimental results, carried out on datasets of the UCI Machine Learning Repository, demonstrate the accuracy of the new algorithm is comparable with KNN, VDA and SVM, without or with a little training phase. An important aspect of this algorithm is its training time, which takes often a few milliseconds. Furthermore, the algorithm is robust and computationally efficient.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "OtT949P0k8B", "year": null, "venue": "EAIS 2018", "pdf_link": "https://ieeexplore.ieee.org/iel7/8388184/8397169/08397178.pdf", "forum_link": "https://openreview.net/forum?id=OtT949P0k8B", "arxiv_id": null, "doi": null }
{ "title": "An adaptable deep learning system for optical character verification in retail food packaging", "authors": [ "Fabio De Sousa Ribeiro", "Francesco Calivá", "Mark Swainson", "Kjartan Gudmundsson", "Georgios Leontidis", "Stefanos D. Kollias" ], "abstract": "Retail food packages contain various types of information such as food name, ingredients list and use by dates. Such information is critical to ensure proper distribution of products to the market and eliminate health risks due to erroneous mislabelling. The latter is considerably detrimental to both consumers and suppliers alike. In this paper, an adaptable deep learning based system is proposed and tested across various possible scenarios: a) for the identification of blurry images and/or missing information from food packaging photos. These were captured during the validation process in supply chains; b) for deep neural network adaptation. This was achieved through a novel methodology that utilises facets of the same convolutional neural network architecture. Latent variables were extracted from different datasets and used as input into a Λ-means clustering and Λ-nearest neighbour classification algorithm, to compute a new set of centroids which better adapts to the target dataset's distribution. Furthermore, visualisation and analysis of network adaptation provides insight into how higher accuracy was achieved when compared to the original deep neural network. The proposed system performed very well in the conducted experiments, showing that it can be deployed in real-world supply chains, for automating the verification process, cutting down costs and eliminating errors that could prove risky for public health.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Gh0Is0LvGCD", "year": null, "venue": "EAIS 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7361817/7368765/07368774.pdf", "forum_link": "https://openreview.net/forum?id=Gh0Is0LvGCD", "arxiv_id": null, "doi": null }
{ "title": "Preface", "authors": [ "Moamar Sayed Mouchaweh", "Anthony Fleury", "Plamen P. Angelov", "Edwin Lughofer", "José Antonio Iglesias" ], "abstract": "Today world is changing very fast and the volume of automatically generated data is constantly increasing over time. Moreover, the rapid technological developments have led to a significant growth in system complexity as well as of the influence of its interaction with the surrounding environments. In these conditions, building models describing the behaviour of a system requires to extract useful information from data streams of unbounded size, arriving at high steady rate and may evolve overtime.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "qrde3igfD9jN", "year": null, "venue": "EAIS 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7361817/7368765/07368802.pdf", "forum_link": "https://openreview.net/forum?id=qrde3igfD9jN", "arxiv_id": null, "doi": null }
{ "title": "Drift detection in data stream classification without fully labelled instances", "authors": [ "Edwin Lughofer", "Eva Weigl", "Wolfgang Heidl", "Christian Eitzinger", "Thomas Radauer" ], "abstract": "Drift detection is an important issue in classification-based stream mining in order to be able to inform the operators in case of unintended changes in the system. Usually, current detection approaches rely on the assumption to have fully supervised labeled streams available, which is often a quite unrealistic scenario in on-line real-world applications. We propose two ways to improve economy and applicability of drift detection: 1.) a semi-supervised approach employing single-pass active learning filters for selecting the most interesting samples for supervising the performance of classifiers and 2.) a fully unsupervised approach based on the overlap degree of classifier's output certainty distributions. Both variants rely on a modified version of the Page-Hinkley test, where a fading factor is introduced to outweigh older samples, making it more flexible to detect successive drift occurrences in a stream. The approaches are compared with the fully supervised variant (SoA) on two real-world on-line applications: the semi-supervised approach is able to detect three real-occurring drifts in these streams with an even lower than resp. the same delay as the supervised variant of about 200 (versus 300) resp. 70 samples, and this by requiring only 20% labelled samples.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "BkJT8R6XE2n", "year": null, "venue": "EAIS 2013", "pdf_link": "https://ieeexplore.ieee.org/iel7/6588691/6604096/06604103.pdf", "forum_link": "https://openreview.net/forum?id=BkJT8R6XE2n", "arxiv_id": null, "doi": null }
{ "title": "eVQ-AM: An extended dynamic version of evolving vector quantization", "authors": [ "Edwin Lughofer" ], "abstract": "In this paper, we are presenting a new dynamically evolving clustering approach which extends conventional evolving Vector Quantization (eVQ), successfully applied before as fast learning engine for evolving cluster models, classifiers and evolving fuzzy systems in various real-world applications. The first extension concerns the ability to extract ellipsoidal prototype-based clusters in arbitrary position, thus increasing its flexibility to model any orentiation/rotation of local data clouds. The second extension includes a single-pass merging strategy in order to resolve unnecessary overlaps or to dynamically compensate inappropriately chosen learning parameters (which may lead to over-clustering effects). The new approach, termed as eVQ-AM (eVQ for Arbitrary ellipsoids with Merging functionality), is compared with conventional eVQ, other incremental and batch learning clustering methods based on two-dimensional as well as high-dimensional streaming clustering showing an evolving behavior in terms of adding/joining clusters as well as feature range expansions. The comparison includes a sensitivity analysis on the learning parameters and observations of finally achieved cluster partition qualities.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ipIm54-kbE8F", "year": null, "venue": "EAIS 2013", "pdf_link": "https://ieeexplore.ieee.org/iel7/6588691/6604096/06604099.pdf", "forum_link": "https://openreview.net/forum?id=ipIm54-kbE8F", "arxiv_id": null, "doi": null }
{ "title": "Resolving global and local drifts in data stream regression using evolving rule-based models", "authors": [ "Ammar Shaker", "Edwin Lughofer" ], "abstract": "In this paper, we present new concepts for dealing with drifts in data streams during the run of on-line modeling processes for regression problems in the context of evolving fuzzy systems. Opposed to the nominal case based on conventional life-long learning, drifts are requiring a specific treatment for the modeling phase, as they refer to changes in the underlying data distribution or target concepts, which makes older learned concepts obsolete. Our approach comes with three new stages for an appropriate drift handling: 1.) drifts are not only detected, but also quantified with a new extended version of the Page-Hinkley test, which overcomes some instabilities during downtrends of the indicator; 2.) based on the current intensity quantification of the drift, the necessary degree of forgetting (weak to strong) is extracted (adaptive forgetting); 3.) the latter is achieved by two variants, a.) a single forgetting factor value, accounting for global drifts, and b.) a forgetting factor vector with different entries for separate regions of the feature space, accounting for local drifts. Forgetting factors are integrated into the learning scheme of both, the antecedent and consequent parts of the evolving fuzzy systems. The new approach will be evaluated on high-dimensional data streams, where the results will show that 1.) our adaptive forgetting strategy outperforms the usage of fixed forgetting factors throughout the learning process and 2.) forgetting in local regions may improve forgetting in global ones when drifts appear locally.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Ono7E-v8tIA", "year": null, "venue": "EAIS 2012", "pdf_link": "https://ieeexplore.ieee.org/iel5/6225463/6232786/06232795.pdf", "forum_link": "https://openreview.net/forum?id=Ono7E-v8tIA", "arxiv_id": null, "doi": null }
{ "title": "On-line active learning based on enhanced reliability concepts", "authors": [ "Edwin Lughofer" ], "abstract": "In this paper, we present a new methodology for conducting active learning in a single-pass on-line learning context, thus reducing the annotation effort for operators by selecting the most informative samples, i.e. those ones helping incremental, evolving classifiers most to improve their own predictive performance. Our approach will be based on certainty-based sample selection in connection with version-space reduction approach. Therefore, two new concepts regarding classifier's reliability in its predictions will be investigated and developed in connection with evolving fuzzy classifiers: conflict and ignorance. Conflict models the extent to which a new query point lies in the conflicting region between two or more classes. Ignorance represents the extent to which the new query point appears in an unexplored region of the feature space. The results based on real-world streaming classification data will show a stable high predictive quality of our approach, despite the fact that the requested number of class labels is decreased by up to 90%.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "raix3CvPd_K", "year": null, "venue": "EAIS 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7361817/7368765/07368805.pdf", "forum_link": "https://openreview.net/forum?id=raix3CvPd_K", "arxiv_id": null, "doi": null }
{ "title": "A case study on collective intelligence based on energy flow", "authors": [ "Kaveh Hassani", "Aliakbar Asgari", "Won-Sook Lee" ], "abstract": "In this paper, we propose a stochastic scheme for modeling a multi-species prey-predator artificial ecosystem in order to investigate the influence of energy flow on ecosystem lifetime and stability. Inhabitants of this environment are a few species of herbivore and carnivore birds. In this model, collective behavior emerges in terms of flocking, breeding, competing, resting, hunting, escaping, seeking, and foraging. Ecosystem is defined as a combination of prey and predator species with inter-competition among species within the same level of the food chain, and intra-competition among those belonging to different levels of the food chain. Some energy variables are also introduced as functions of behaviors to model the energy within the ecosystem. Experimental results of 11,000 simulations analyzed by Cox univariate analysis and hazard function suggest that only five corresponding energy variables out of eight aforementioned behaviors influence the ecosystem lifetime. Also, results of survival analysis show that among pairwise interactions between energy factors, only two interactions affect the system lifetime, including interaction between flocking and seeking energies, and interaction between flocking and hunting energies. These results match the observations of real life birds, which use flocking behavior for flexible movements, efficient foraging, social learning, and reducing predation risks.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "08i1HM0_Wx4", "year": null, "venue": "EAIS 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7361817/7368765/07368804.pdf", "forum_link": "https://openreview.net/forum?id=08i1HM0_Wx4", "arxiv_id": null, "doi": null }
{ "title": "Adaptive animation generation using web content mining", "authors": [ "Kaveh Hassani", "Won-Sook Lee" ], "abstract": "Creating 3D animation is a labor-intensive and time-consuming process requiring designers to learn and utilize a complex combination of menus, dialog boxes, buttons and manipulation interfaces for a given stand-alone animation design software. On the other hand, conceptual simplicity and naturalness of visualizing imaginations from lingual descriptions motivates researchers for developing automatic animation generation systems using natural language interfaces. In this research, we introduce an interactive and adaptive animation generation system that utilizes data-driven techniques to extract the required common-sense and domain-specific knowledge from web. This system is capable of creating 3D animation based on user's lingual commands. It uses the user interactions as a relevance feedback to learn the implicit design knowledge, correct the extracted knowledge, and manipulate the dynamics of the virtual world in an active and incremental manner. Moreover, system is designed based on a multi-agent methodology which provides it with distributed processing capabilities and cross-platform characteristics. In this paper, we will focus on information retrieval agent which is responsible for extracting numeric data utilized in object attributes, spatiotemporal relations, and environment dynamics using web mining techniques.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "uXjiUmY8xPc", "year": null, "venue": "EAIS 2012", "pdf_link": "https://ieeexplore.ieee.org/iel5/6225463/6232786/06232812.pdf", "forum_link": "https://openreview.net/forum?id=uXjiUmY8xPc", "arxiv_id": null, "doi": null }
{ "title": "Evolving activity recognition from sensor streams", "authors": [ "José Antonio Iglesias", "Francisco Javier Ordóñez", "Agapito Ledezma", "Paula de Toledo", "Araceli Sanchis" ], "abstract": "Recognizing people's activity automatically is an important task that needs to be tackled in order to face other more complex tasks such as action prediction, remote health monitoring, or interventions. Recent research on activity recognition has demonstrated that many different activities can be recognized. In most of these researches, the activities are previously predefined as statistic models over time. However, how people perform a specific activity is changing continuously. In this paper we present an approach for classifying different activities from sensor readings based on Evolving Fuzzy Systems (EFS). Thus, the model that describes an activity evolves according to the changes observed in how that activity is performed. This approach has been successfully tested on a real world domain using binary sensors data streams.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "GS_Ln_8jIJD", "year": null, "venue": "EAIS 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7361817/7368765/07368805.pdf", "forum_link": "https://openreview.net/forum?id=GS_Ln_8jIJD", "arxiv_id": null, "doi": null }
{ "title": "A case study on collective intelligence based on energy flow", "authors": [ "Kaveh Hassani", "Aliakbar Asgari", "Won-Sook Lee" ], "abstract": "In this paper, we propose a stochastic scheme for modeling a multi-species prey-predator artificial ecosystem in order to investigate the influence of energy flow on ecosystem lifetime and stability. Inhabitants of this environment are a few species of herbivore and carnivore birds. In this model, collective behavior emerges in terms of flocking, breeding, competing, resting, hunting, escaping, seeking, and foraging. Ecosystem is defined as a combination of prey and predator species with inter-competition among species within the same level of the food chain, and intra-competition among those belonging to different levels of the food chain. Some energy variables are also introduced as functions of behaviors to model the energy within the ecosystem. Experimental results of 11,000 simulations analyzed by Cox univariate analysis and hazard function suggest that only five corresponding energy variables out of eight aforementioned behaviors influence the ecosystem lifetime. Also, results of survival analysis show that among pairwise interactions between energy factors, only two interactions affect the system lifetime, including interaction between flocking and seeking energies, and interaction between flocking and hunting energies. These results match the observations of real life birds, which use flocking behavior for flexible movements, efficient foraging, social learning, and reducing predation risks.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "pHa132zP9Su", "year": null, "venue": "EAIS 2015", "pdf_link": "https://ieeexplore.ieee.org/iel7/7361817/7368765/07368804.pdf", "forum_link": "https://openreview.net/forum?id=pHa132zP9Su", "arxiv_id": null, "doi": null }
{ "title": "Adaptive animation generation using web content mining", "authors": [ "Kaveh Hassani", "Won-Sook Lee" ], "abstract": "Creating 3D animation is a labor-intensive and time-consuming process requiring designers to learn and utilize a complex combination of menus, dialog boxes, buttons and manipulation interfaces for a given stand-alone animation design software. On the other hand, conceptual simplicity and naturalness of visualizing imaginations from lingual descriptions motivates researchers for developing automatic animation generation systems using natural language interfaces. In this research, we introduce an interactive and adaptive animation generation system that utilizes data-driven techniques to extract the required common-sense and domain-specific knowledge from web. This system is capable of creating 3D animation based on user's lingual commands. It uses the user interactions as a relevance feedback to learn the implicit design knowledge, correct the extracted knowledge, and manipulate the dynamics of the virtual world in an active and incremental manner. Moreover, system is designed based on a multi-agent methodology which provides it with distributed processing capabilities and cross-platform characteristics. In this paper, we will focus on information retrieval agent which is responsible for extracting numeric data utilized in object attributes, spatiotemporal relations, and environment dynamics using web mining techniques.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rRf45ynBG1c", "year": null, "venue": "EAIS 2011", "pdf_link": "https://ieeexplore.ieee.org/iel5/5936949/5945904/05945909.pdf", "forum_link": "https://openreview.net/forum?id=rRf45ynBG1c", "arxiv_id": null, "doi": null }
{ "title": "Using a map-based encoding to evolve plastic neural networks", "authors": [ "Paul Tonelli", "Jean-Baptiste Mouret" ], "abstract": "Many controllers for complex agents have been successfully generated by automatically desiging artificial neural networks with evolutionary algorithms. However, typical evolved neural networks are not able to adapt themselves online, making them unable to perform tasks that require online adaptation. Nature solved this problem on animals with plastic nervous systems. Inpired by neuroscience models of plastic neural-network, the present contribution proposes to use a combination of Hebbian learning, neuro-modulation and a a generative map-based encoding. We applied the proposed approach on a problem from operant conditioning (a Skinner box), in which numerous different association rules can be learned. Results show that the map-based encoding scaled up better than a classic direct encoding on this task. Evolving neural networks using a map-based generative encoding also lead to networks that works with most rule sets even when the evolution is done on a small subset of all the possible cases. Such a generative encoding therefore appears as a key to improve the generalization abilities of evolved adaptive neural networks.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "PNroBN_gwR", "year": null, "venue": "EAIS 2022", "pdf_link": "https://ieeexplore.ieee.org/iel7/9787685/9787686/09787735.pdf", "forum_link": "https://openreview.net/forum?id=PNroBN_gwR", "arxiv_id": null, "doi": null }
{ "title": "Collision-Free Navigation using Evolutionary Symmetrical Neural Networks", "authors": [ "Hesham M. Eraqi", "Mena Nagiub", "Peter Sidra" ], "abstract": "Collision avoidance systems play a vital role in reducing the number of vehicle accidents and saving human lives. This paper extends the previous work using evolutionary neural networks for reactive collision avoidance. We are proposing a new method we have called symmetric neural networks. The method improves the model’s performance by enforcing constraints between the network weights which reduces the model optimization search space and hence, learns more accurate control of the vehicle steering for improved maneuvering. The training and validation processes are carried out using a simulation environment - the codebase is publicly available. Extensive experiments are conducted to analyze the proposed method and evaluate its performance. The method is tested in several simulated driving scenarios. In addition, we have analyzed the effect of the rangefinder sensor resolution and noise on the overall goal of reactive collision avoidance. Finally, we have tested the generalization of the proposed method. The results are encouraging; the proposed method has improved the model’s learning curve for training scenarios and generalization to the new test scenarios. Using constrained weights has significantly improved the number of generations required for the Genetic Algorithm optimization.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ObIremyIAVx", "year": null, "venue": "EANN 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=ObIremyIAVx", "arxiv_id": null, "doi": null }
{ "title": "A Robust, Quantization-Aware Training Method for Photonic Neural Networks", "authors": [ "A. Oikonomou", "Manos Kirtas", "Nikos Passalis", "George Mourgias-Alexandris", "Miltiadis Moralis-Pegios", "Nikos Pleros", "Anastasios Tefas" ], "abstract": "The computationally demanding nature of Deep Learning (DL) has fueled the research on neuromorphics due to their potential to provide high-speed and low energy hardware accelerators. To this end, neuromorphic photonics are increasingly gain attention since they can operate in very high frequencies with very low energy consumption. However, they also introduce new challenges in DL training and deployment. In this paper, we propose a novel training method that is able to compensate for quantization noise, which profoundly exists in photonic hardware due to analog-to-digital (ADC) and digital-to-analog (DAC) conversions, targeting photonic neural networks (PNNs) which employ easily saturated activation functions. The proposed method takes into account quantization during training, leading to significant performance improvements during the inference phase. We conduct evaluation experiments on both image classification and time-series analysis tasks, employing a wide range of existing photonic neuromorphic architectures. The evaluation experiments demonstrate the effectiveness of the proposed method when low-bit resolution photonic architectures are used, as well as its generalization ability.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "OVXT98ukS8", "year": null, "venue": "EANN 2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=OVXT98ukS8", "arxiv_id": null, "doi": null }
{ "title": "Improving Face Pose Estimation Using Long-Term Temporal Averaging for Stochastic Optimization", "authors": [ "Nikolaos Passalis", "Anastasios Tefas" ], "abstract": "Among the most crucial components of an intelligent system capable of assisting drone-based cinematography is estimating the pose of the main actors. However, training deep CNNs towards this task is not straightforward, mainly due to the noisy nature of the data and instabilities that occur during the learning process, significantly slowing down the development of such systems. In this work we propose a temporal averaging technique that is capable of stabilizing as well as speeding up the convergence of stochastic optimization techniques for neural network training. We use two face pose estimation datasets to experimentally verify that the proposed method can improve both the convergence of training algorithms and the accuracy of pose estimation. This also reduces the risk of stopping the training process when a bad descent step was taken and the learning rate was not appropriately set, ensuring that the network will perform well at any point of the training process.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "6BjNumo16Rb", "year": null, "venue": "EANN 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=6BjNumo16Rb", "arxiv_id": null, "doi": null }
{ "title": "Predicting Stock Price Movement Using Financial News Sentiment", "authors": [ "Jiaying Gong", "Bradley Paye", "Gregory Kadlec", "Hoda Eldardiry" ], "abstract": "A central question in financial economics concerns the degree of informational efficiency. Violations of informational efficiency represent capital miss-allocations and potentially profitable trading opportunities. Market efficiency analyses have evolved to incorporate increasingly rich public information and innovative statistical methods to analyze this information. We propose an Automatic Crawling and Prediction System (ACPS) to 1) automatically crawl online media, 2) extract useful information from a rich set of financial news, and 3) predict future stock price movements. ACPS consists of a feature selection pipeline to select an optimal set of predictive features and a sentiment analysis model to measure sentence-level news sentiment. Generated features and news sentiment data are further processed via an ensemble model based on several machine learning and deep learning algorithms to generate forecasts. Results demonstrate the robustness of our proposed model in predicting the directional movement of daily stock prices. Specifically, the model consistently outperforms existing methods on single stock prediction and it performs well across all S&P 500 stocks. Our results indicate the potential value of rich text analysis and ensemble learning methods in a real-time trading context.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "uotiJIrOagh", "year": null, "venue": "EANN 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=uotiJIrOagh", "arxiv_id": null, "doi": null }
{ "title": "Predicting Seriousness of Injury in a Traffic Accident: A New Imbalanced Dataset and Benchmark", "authors": [ "Paschalis Lagias", "George D. Magoulas", "Ylli Prifti", "Alessandro Provetti" ], "abstract": "The paper introduces a new dataset to assess the performance of machine learning algorithms in the prediction of the seriousness of injury in a traffic accident. The dataset is created by aggregating publicly available datasets from the UK Department for Transport, which are drastically imbalanced with missing attributes sometimes approaching 50% of the overall data dimensionality. The paper presents the data analysis pipeline starting from the publicly available data of road traffic accidents and ending with predictors of possible injuries and their degree of severity. It addresses the huge incompleteness of public data with a MissForest model. The paper also introduces two baseline approaches to create injury predictors: a supervised artificial neural network and a reinforcement learning model. The dataset can potentially stimulate diverse aspects of machine learning research on imbalanced datasets and the two approaches can be used as baseline references when researchers test more advanced learning algorithms in this area.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "WQxLJMkdgv", "year": null, "venue": "EANN 2012", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=WQxLJMkdgv", "arxiv_id": null, "doi": null }
{ "title": "Knowledge Clustering Using a Neural Network in a Course on Medical-Surgical Nursing", "authors": [ "José Luis Fernández Alemán", "Chrisina Jayne", "Ana Belén Sánchez García", "Juan Manuel Carrillo de Gea", "José Ambrosio Toval Álvarez" ], "abstract": "This paper presents a neural network-based intelligent data analysis for knowledge clustering in an undergraduate nursing course. A MCQ (Multiple Choice Question) test was performed to evaluate medical-surgical nursing knowledge in a second-year course. A total of 23 pattern groups were created from the answers of 208 students. Data collected were used to provide customized feedback which guide students towards a greater understanding of particular concepts. The pattern groupings can be integrated with an on-line (MCQ) system for training purposes.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "gpE8rXRMAr9", "year": null, "venue": "EANN 2020", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=gpE8rXRMAr9", "arxiv_id": null, "doi": null }
{ "title": "Detection of Shocking Images as One-Class Classification Using Convolutional and Siamese Neural Networks", "authors": [ "Pavel Gulyaev", "Andrey Filchenkov" ], "abstract": "Not safe for work content automatic detection is a serious challenge for social media due to overwhelming growth of uploaded images, gifs and videos. This paper focuses on shocking images automatic detection by convolutional neural networks. It was considered that the correct recognition of the shocking class is more important than the non-shocking one. Binary classification by a convolutional network that training during operation has been used as a baseline solution. However, this solution has two drawbacks: the network highlights incorrect features of non-shocking images (infinite class) and tends to forget rare subclasses of shocking images, which is unacceptable. To eliminate the first drawback, we approach this problem as a one-class classification with having in mind that a “non-shocking” image can be defined only via contradiction with a shocking one. This method is based on using sparse autoencoders build on top of a pretrained convolutional neural network and is not trained during operation. To eliminate the second drawback, we memorized vectors of images that were incorrectly classified during operation. A trained siamese network during the prediction is used to search for similar images in the database. In the case of an incorrect prediction by the combined model, vectors of images are added to the database and the siamese network is trained on them. This method allows you to minimize the number of errors in rare subclasses identified only during the operation phase of the model.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "e4sZVqnNZQZ", "year": null, "venue": "EANN 2020", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=e4sZVqnNZQZ", "arxiv_id": null, "doi": null }
{ "title": "Evaluating the Transferability of Personalised Exercise Recognition Models", "authors": [ "Anjana Wijekoon", "Nirmalie Wiratunga" ], "abstract": "Exercise Recognition (ExR) is relevant in many high impact domains, from healthcare to recreational activities to sports sciences. Like Human Activity Recognition (HAR), ExR faces many challenges when deployed in the real-world. For instance, typical lab performances of Machine Learning (ML) models, are hard to replicate, due to differences in personal nuances, traits and ambulatory rhythms. Thus effective transferability of a trained ExR model, depends on its ability to adapt and personalise to a new user or a user group. This calls for new experimental design strategies that are person-aware, and able to organise train and test data differently from standard ML practice. Specifically, we look at person-agnostic and person-aware methods of train-test data creation, and compare them to identify best practices on a comparative study of personalised ExR model transfer. Our findings show that ExR when compared to results with other HAR tasks, to be a far more challenging personalisation problem and also confirms the utility of metric learning algorithms for personalised model transfer.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NTsNgox-Irk", "year": null, "venue": "EANN 2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=NTsNgox-Irk", "arxiv_id": null, "doi": null }
{ "title": "A Genetic Algorithm for Discovering Linguistic Communities in Spatiosocial Tensors with an Application to Trilingual Luxemburg", "authors": [ "Georgios Drakopoulos", "Fotini Stathopoulou", "Giannis Tzimas", "Michael Paraskevas", "Phivos Mylonas", "Spyros Sioutas" ], "abstract": "Multimodal social networks are omnipresent in Web 2.0 with virtually every human communication action taking place there. Nonetheless, language remains by far the main premise such communicative acts unfold upon. Thus, it is statutory to discover language communities especially in social data stemming from historically multilingual countries such as Luxemburg. An adjacency tensor is especially suitable for representing such spatiosocial data. However, because of its potentially large size, heuristics should be developed for locating community structure efficiently. Linguistic structure discovery has a plethora of applications including digital marketing and online political campaigns, especially in case of prolonged and intense cross-linguistic contact. This conference paper presents TENSOR-G, a flexible genetic algorithm for approximate tensor clustering along with two alternative fitness functions derived from language variation or diffusion properties. The Kruskal tensor decomposition serves as a benchmark and the results obtained from a set of trilingual Luxemburgian tweets are analyzed with linguistic criteria.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "iKHJd73FvH", "year": null, "venue": "EANN 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=iKHJd73FvH", "arxiv_id": null, "doi": null }
{ "title": "A Novel CNN-LSTM Hybrid Architecture for the Recognition of Human Activities", "authors": [ "Sofia Stylianou-Nikolaidou", "Ioannis Vernikos", "Eirini Mathe", "Evaggelos Spyrou", "Phivos Mylonas" ], "abstract": "The problem of human activity recognition (HAR) has been increasingly attracting the efforts of the research community, having several applications. In this paper we propose a multi-modal approach addressing the task of video-based HAR. Our approach uses three modalities, i.e., raw RGB video data, depth sequences and 3D skeletal motion data. The latter are transformed into a 2D image representation into the spectral domain. In order to extract spatio-temporal features from the available data, we propose a novel hybrid deep neural network architecture that combines a Convolutional Neural Network (CNN) and a Long-Short Term Memory (LSTM) network. We focus on the tasks of recognition of activities of daily living (ADLs) and medical conditions and we evaluate our approach using two challenging datasets.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NYAZqTVj-9", "year": null, "venue": "EANN 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=NYAZqTVj-9", "arxiv_id": null, "doi": null }
{ "title": "Recognizing Human Actions Using 3D Skeletal Information and CNNs", "authors": [ "Antonios Papadakis", "Eirini Mathe", "Ioannis Vernikos", "Apostolos Maniatis", "Evaggelos Spyrou", "Phivos Mylonas" ], "abstract": "In this paper we present an approach for the recognition of human actions targeting at activities of daily living (ADLs). Skeletal information is used to create images capturing the motion of joints in the 3D space. These images are then transformed to the spectral domain using 4 well-known image transforms. A deep Convolutional Neural Network is trained on those images. Our approach is thoroughly evaluated using a well-known, publicly available challenging dataset and for a set of actions that resembles to common ADLs, covering both cross-view and cross-subject cases.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_N1tJPJd0D", "year": null, "venue": "EANN 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=_N1tJPJd0D", "arxiv_id": null, "doi": null }
{ "title": "Human Activity Recognition Under Partial Occlusion", "authors": [ "Ioannis-Aris Kostis", "Eirini Mathe", "Evaggelos Spyrou", "Phivos Mylonas" ], "abstract": "One of the major challenges in Human Activity Recognition (HAR) using cameras, is occlusion of one or more body parts. However, this problem is often underestimated in contemporary research works, wherein training and evaluation is based on datasets shot under laboratory conditions, i.e., without some kind of occlusion. In this work we propose an approach for HAR in the presence of partial occlusion, i.e., in case of up to two occluded body parts. We solve this problem using regression, performed by a deep neural network. That is, given an occluded sample, we attempt to reconstruct the missing information regarding the motion of the occluded part(s). We evaluate our approach using a publicly available human motion dataset. Our experimental results indicate a significant increase of performance, when compared to a baseline approach, wherein a network that has been trained using non-occluded samples is evaluated using occluded samples. To the best of our knowledge, this is the first research work that tackles the problem of HAR under occlusion as a regression problem.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "JfELsCjNNR2", "year": null, "venue": "EANN 2012", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=JfELsCjNNR2", "arxiv_id": null, "doi": null }
{ "title": "Applying Kernel Methods on Protein Complexes Detection Problem", "authors": [ "Charalampos N. Moschopoulos", "Griet Laenen", "George D. Kritikos", "Yves Moreau" ], "abstract": "During the last years, various methodologies have made possible the detection of large parts of the protein interaction network of various organisms. However, these networks are containing highly noisy data, degrading the quality of information they carry. Various weighting schemes have been applied in order to eliminate noise from interaction data and help bioinformaticians to extract valuable information such as the detection of protein complexes. In this contribution, we propose the addition of an extra step on these weighting schemes by using kernel methods to better assess the reliability of each pairwise interaction. Our experimental results prove that kernel methods clearly help the elimination of noise by producing improved results on the protein complexes detection problem.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]