metadata (dict) | paper (dict) | review (dict) | citation_count (int64) | normalized_citation_count (int64) | cited_papers (list) | citing_papers (list) |
---|---|---|---|---|---|---|
{
"id": "I8RXqjF9eL1",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=I8RXqjF9eL1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer e1h2",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "3zWCTblljh",
"year": null,
"venue": "EANN (1) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=3zWCTblljh",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Discovery of Weather Forecast Web Resources Based on Ontology and Content-Driven Hierarchical Classification",
"authors": [
"Anastasia Moumtzidou",
"Stefanos Vrochidis",
"Ioannis Kompatsiaris"
],
"abstract": "Monitoring of environmental information is critical both for following the evolution of important environmental events and for everyday life activities. In this work, we focus on the discovery of web resources that provide weather forecasts. To this end we submit domain-specific queries to a general purpose search engine and post-process the results by introducing a hierarchical two-layer classification scheme. The top layer includes two classification models: a) the first is trained using ontology concepts as textual features; b) the second is trained using textual features that are learned from a training corpus. The bottom layer includes a hybrid classifier that combines the results of the top layer. We evaluate the proposed technique by discovering weather forecast websites for cities of Finland and compare the results with previous works.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "445OCWIKpt",
"year": null,
"venue": "EANN (2) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=445OCWIKpt",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Query Expansion with a Little Help from Twitter",
"authors": [
"Ioannis Anagnostopoulos",
"Gerasimos Razis",
"Phivos Mylonas",
"Christos-Nikolaos Anagnostopoulos"
],
"abstract": "With the advent and rapid spread of microblogging services, web information management finds a new research topic. Although classical information retrieval methods and techniques help search engines and services to present an adequate precision in lower recall levels (top-k results), the constantly evolving information needs of microblogging users demand a different approach, which has to be adapted to the dynamic nature of On-line Social Networks (OSNs). In this work, we use Twitter as microblogging service, aiming to investigate the query expansion provision that can be extracted from large graphs, and compare it against classical query expansion methods that require mainly prior knowledge, such as browsing history records or access and management of search logs. We provide a direct comparison with mainstream media services, such as Google, Yahoo!, Bing, NBC and Reuters, while we also evaluate our approach by subjective comparisons in respect to the Google Hot Searches service.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "tcv0VtxlOCs",
"year": null,
"venue": "EANN (1) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=tcv0VtxlOCs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Local Binary Patterns and Neural Networks for No-Reference Image and Video Quality Assessment",
"authors": [
"Marko Panic",
"Dubravko Culibrk",
"Srdjan Sladojevic",
"Vladimir S. Crnojevic"
],
"abstract": "In the modern world, where multimedia is predicted to form 86% of traffic transmitted over the telecommunication networks in the near future, content providers are looking to shift towards Quality of Experience, rather than Quality of Service, in multimedia delivery. Thus, no-reference image quality assessment and the related video quality assessment remain open research problems, with significant market potential. In this paper we describe a study focused on evaluating the applicability of Local Binary Patterns (LBP) as features and neural networks as estimators for image quality assessment. We focus on blockiness artifacts, as a prominent effect in all block-based coding approaches and the dominant artifact occurring in videos coded with state-of-the-art video codecs (MPEG-4, H.264, HEVC). In this initial study we show how an LBP-inspired approach, tuned to this particular effect, can be efficiently used to predict the MOS of JPEG coded images. The proposed approach is evaluated on a well-known public database and against widely-used features. The results presented in the paper show that the approach achieves superior performance, which forms a sound basis for future research aimed at video quality assessment and precise blocking artifact detection with sub-frame precision.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "zV_Znktlp25",
"year": null,
"venue": "EANN (2) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=zV_Znktlp25",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Genetic Algorithm for Pancreatic Cancer Diagnosis",
"authors": [
"Charalampos N. Moschopoulos",
"Dusan Popovic",
"Alejandro Sifrim",
"Grigorios N. Beligiannis",
"Bart De Moor",
"Yves Moreau"
],
"abstract": "Pancreatic cancer is one of the leading causes of cancer-related death in the industrialized countries and it has the least favorable prognosis among various cancer types. In this study we aim to facilitate early detection of pancreatic cancer by finding a minimal set of genetic biomarkers that can be used for establishing a diagnosis. We propose a genetic algorithm and test it on gene expression data of 36 pancreatic ductal adenocarcinoma tumors and matching normal pancreatic tissue samples. Our results show that a minimal group of genes is able to constitute a highly reliable pancreatic cancer predictor.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "jBwS64c7y43",
"year": null,
"venue": "EANN (2) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=jBwS64c7y43",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Hybrid Approach to Feature Ranking for Microarray Data Classification",
"authors": [
"Dusan Popovic",
"Alejandro Sifrim",
"Charalampos N. Moschopoulos",
"Yves Moreau",
"Bart De Moor"
],
"abstract": "We present a novel approach to multivariate feature ranking in context of microarray data classification that employs a simple genetic algorithm in conjunction with Random forest feature importance measures. We demonstrate performance of the algorithm by comparing it against three popular feature ranking and selection methods on a colon cancer recurrence prediction problem. In addition, we investigate biological relevance of the selected features, finding functional associations of corresponding genes with cancer.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xWS2fHPvt8",
"year": null,
"venue": "EANN (2) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=xWS2fHPvt8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Social and Smart: Towards an Instance of Subconscious Social Intelligence",
"authors": [
"Manuel Graña",
"Bruno Apolloni",
"Maurizio Fiasché",
"G. Galliani",
"C. Zizzo",
"George Caridakis",
"Georgios Siolas",
"Stefanos D. Kollias",
"F. Barriento",
"S. San Jose"
],
"abstract": "The Social and Smart (SandS) project aims to lay the foundations for a social network of home appliance users endowed with a layer of intelligent systems that must be able to produce new solutions to new problems on the basis of the knowledge accumulated by the social network players. The system is not a simple recollection of tested appliance use recipes; it will have the ability to generate new recipes or refine existing ones to satisfy user demands, and to perform fine tuning of recipes on the basis of user satisfaction by a hidden reinforcement learning process. This paper aims to advance the specification of diverse aspects and roles of the system architecture, to get a clearer picture of module interactions and duties, along with data transfer and transformation paths.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "615s6AX92q",
"year": null,
"venue": "EANN (2) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=615s6AX92q",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Particle Swarm Optimization (PSO) Model for Scheduling Nonlinear Multimedia Services in Multicommodity Fat-Tree Cloud Networks",
"authors": [
"Ioannis M. Stephanakis",
"Ioannis P. Chochliouros",
"George Caridakis",
"Stefanos D. Kollias"
],
"abstract": "Cloud computing delivers computing services over virtualized networks to many end-users. Virtualized networks are characterized by such attributes as on-demand self-service, broad network access, resource pooling, rapid and elastic resource provisioning and metered services at various qualities. Cloud networks provide data as well as multimedia and video services. They are classified into private cloud networks, public cloud networks and hybrid cloud networks. Linear video services include broadcasting and in-stream video that may be viewed in a video player whereas non-linear video services include a combination of in-stream video with on-demand services, which are originated from distributed servers in the network and deliver interactive and pay-per view content. Furthermore heterogeneous delivery networks that include fixed and mobile internet infrastructures require that adaptive video streaming should be carried out at network boundaries based on such protocols as HTTP Live Streaming (HLS). Distributed processing of nonlinear video services in cloud environments is addressed in the present work by defining Distributed Acyclic Graphs (DAG) models for multimedia processes executed by a set of non-locally confined virtual machines. A novel discrete multivalue Particle Swarm Optimization (PSO) algorithm is proposed in order to optimize task scheduling and workflow. Numerical simulations regarding such measures as Schedule-Length-Ratio (SLR) and Speedup are given for novel fat-tree cloud architectures.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "YoRRG9yt_zW",
"year": null,
"venue": "EANN (2) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=YoRRG9yt_zW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Intelligent and Adaptive Pervasive Future Internet: Smart Cities for the Citizens",
"authors": [
"George Caridakis",
"Georgios Siolas",
"Phivos Mylonas",
"Stefanos D. Kollias",
"Andreas Stafylopatis"
],
"abstract": "This article discusses the human centered perspective adopted in the European project SandS within the Internet of Things (IoT) framework. SandS is a complete ecosystem of users within a social network developing a collective intelligence and adapting its operation through appropriately processed feedback. In the research work discussed in this paper we investigate SandS from the user perspective and how users can be modeled through a number of fuzzy knowledge formalisms and stereotypical user profiles. Additionally, context modeling in pervasive computing systems, and especially in the SandS smart home paradigm, is examined through appropriate representation of context cues during overall interaction.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "It0615Ur3h",
"year": null,
"venue": "EANN Workshops 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=It0615Ur3h",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An overview of context types within multimedia and social computing",
"authors": [
"Phivos Mylonas"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "NffPcLQsCXJ",
"year": null,
"venue": "EANN Workshops 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=NffPcLQsCXJ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Smart home context awareness based on Smart and Innovative Cities",
"authors": [
"Aggeliki Vlachostergiou",
"Georgios Stratogiannis",
"George Caridakis",
"Georgios Siolas",
"Phivos Mylonas"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Ui-SOAfE2h",
"year": null,
"venue": "EANN (2) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Ui-SOAfE2h",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Novel Hierarchical Approach to Ranking-Based Collaborative Filtering",
"authors": [
"Athanasios N. Nikolakopoulos",
"Marianna A. Kouneli",
"John D. Garofalakis"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rpMq6oQjUcs",
"year": null,
"venue": "EANN Workshops 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=rpMq6oQjUcs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Detecting Irony on Greek Political Tweets: A Text Mining Approach",
"authors": [
"Basilis Charalampakis",
"Dimitris Spathis",
"Elias Kouslis",
"Katia Kermanidis"
],
"abstract": "The present work describes the classification schema for irony detection in Greek political tweets. The proposed approach relies on limited labeled training data, and its performance on a larger unlabeled dataset is evaluated qualitatively (implicitly) via a correlation study between the irony that a party receives on Twitter, its respective actual election results during the Greek parliamentary elections of May 2012, and the difference between these results and the ones of the preceding elections of 2009. The machine learning results on the labeled dataset were highly encouraging and uncovered a trend whereby the volume of ironic tweets can predict the fluctuation from previous elections.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "NtRbCoARegu",
"year": null,
"venue": "EANN (1) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=NtRbCoARegu",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Boosting Simplified Fuzzy Neural Networks",
"authors": [
"Alexey Natekin",
"Alois C. Knoll"
],
"abstract": "Fuzzy neural networks are a powerful machine learning technique that can be used in a large number of applications. Proper learning of fuzzy neural networks requires a lot of computational effort, and the fuzzy-rule designs of these networks suffer from the curse of dimensionality. To alleviate these problems, a simplified fuzzy neural network is presented. The proposed simplified network model can be efficiently initialized with considerably high predictive power. We propose an ensembling approach, using the new simplified neural network models as a general-purpose fuzzy base-learner. The new base-learner properties are analyzed and the practical results of the new algorithm are presented on a robotic hand controller application.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xJf-c40Kb7",
"year": null,
"venue": "EANN (1) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=xJf-c40Kb7",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Impact of Sampling on Neural Network Classification Performance in the Context of Repeat Movie Viewing",
"authors": [
"Elena Fitkov-Norris",
"Sakinat Oluwabukonla Folorunso"
],
"abstract": "This paper assesses the impact of different sampling approaches on neural network classification performance in the context of repeat movie going. The results showed that synthetic oversampling of the minority class, either on its own or combined with under-sampling and removal of noisy examples from the majority class offered the best overall performance. The identification of the best sampling approach for this data set is not trivial since the alternatives would be highly dependent on the metrics used, as the accuracy ranks of the approaches did not agree across the different accuracy measures used. In addition, the findings suggest that including examples generated as part of the oversampling procedure in the holdout sample, leads to a significant overestimation of the accuracy of the neural network. Further research is necessary to understand the relationship between degree of synthetic over-sampling and the efficacy of the holdout sample as a neural network accuracy estimator.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "X-kBFqi8Osc",
"year": null,
"venue": "EANN Workshops 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=X-kBFqi8Osc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Comparison of three classifiers for breast cancer outcome prediction",
"authors": [
"Noa Eyal",
"Mark Last",
"Eitan Rubin"
],
"abstract": "Predicting the outcome of cancer is a challenging task; researchers have an interest in trying to predict the relapse-free survival of breast cancer patients based on gene expression data. Data mining methods offer more advanced approaches for dealing with survival data. The main objective in cancer treatment is to improve overall survival or, at the very least, the time to relapse (\"relapse-free survival\"). In this work, we compare the performance of three popular interpretable classifiers (decision tree, probabilistic neural networks and Naïve Bayes) for the task of classifying breast cancer patients into recurrence risk groups (low or high risk of recurrence within 5 or 10 years). For the 5-year recurrence risk prediction, the highest prediction accuracy was reached by the probabilistic neural networks classifier (Acc = 76.88% ± 1.09%, AUC=77.41%). For the 10-year recurrence risk prediction, the decision tree classifier and the probabilistic neural networks presented similar prediction accuracies (70.40% ± 1.36% and 70.50% ± 1.13%, respectively). However, while the PNN classifier achieved this accuracy using only 10 features with the highest information gain, the decision tree classifier needed 100 features to achieve comparable accuracy and its AUC was significantly lower (66.4% vs. 77.1%).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rW5Iwg97dgw",
"year": null,
"venue": "EANN (1) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=rW5Iwg97dgw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "SCH-EGA: An Efficient Hybrid Algorithm for the Frequency Assignment Problem",
"authors": [
"Shaohui Wu",
"Gang Yang",
"Jieping Xu",
"Xirong Li"
],
"abstract": "This paper proposes a hybrid stochastic competitive Hopfield neural network-efficient genetic algorithm (SCH-EGA) approach to tackle the frequency assignment problem (FAP). The objective of FAP is to minimize the cochannel interference between satellite communication systems by rearranging the frequency assignments so that they can accommodate the increasing demands. In fact, as the SCH-EGA algorithm has good adaptability, it can not only deal with the frequency assignment problem, but also cope with problems of clustering, classification, the maximum clique problem and so on. In this paper, we first propose five optimal strategies to build an efficient genetic algorithm (EGA), which is the component of our hybrid algorithm. Then we explore different hybridizations between the Hopfield neural network and EGA. With the help of hybridization, SCH-EGA makes up for the defects of the Hopfield neural network and EGA while fully using the advantages of the two algorithms.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "H0X6MDYw-j_",
"year": null,
"venue": "EANN (1) 2013",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=H0X6MDYw-j_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "SCH-EGA: An Efficient Hybrid Algorithm for the Frequency Assignment Problem",
"authors": [
"Shaohui Wu",
"Gang Yang",
"Jieping Xu",
"Xirong Li"
],
"abstract": "This paper proposes a hybrid stochastic competitive Hopfield neural network-efficient genetic algorithm (SCH-EGA) approach to tackle the frequency assignment problem (FAP). The objective of FAP is to minimize the cochannel interference between satellite communication systems by rearranging the frequency assignments so that they can accommodate the increasing demands. In fact, as the SCH-EGA algorithm has good adaptability, it can not only deal with the frequency assignment problem, but also cope with problems of clustering, classification, the maximum clique problem and so on. In this paper, we first propose five optimal strategies to build an efficient genetic algorithm (EGA), which is the component of our hybrid algorithm. Then we explore different hybridizations between the Hopfield neural network and EGA. With the help of hybridization, SCH-EGA makes up for the defects of the Hopfield neural network and EGA while fully using the advantages of the two algorithms.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "adXd_elcCv",
"year": null,
"venue": "EANN Workshops 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=adXd_elcCv",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Detecting Irony on Greek Political Tweets: A Text Mining Approach",
"authors": [
"Basilis Charalampakis",
"Dimitris Spathis",
"Elias Kouslis",
"Katia Kermanidis"
],
"abstract": "The present work describes the classification schema for irony detection in Greek political tweets. The proposed approach relies on limited labeled training data, and its performance on a larger unlabeled dataset is evaluated qualitatively (implicitly) via a correlation study between the irony that a party receives on Twitter, its respective actual election results during the Greek parliamentary elections of May 2012, and the difference between these results and the ones of the preceding elections of 2009. The machine learning results on the labeled dataset were highly encouraging and uncovered a trend whereby the volume of ironic tweets can predict the fluctuation from previous elections.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "U0Lll9DN0m",
"year": null,
"venue": "e-Energy 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=U0Lll9DN0m",
"arxiv_id": null,
"doi": null
}
|
{
"title": "UrJar: A Device to Address Energy Poverty Using E-Waste",
"authors": [
"Vikas Chandan",
"Mohit Jain",
"Harshad Khadilkar",
"Zainul Charbiwala",
"Anupam Jain",
"Sunil Kumar Ghai",
"Rajesh Kunnath",
"Deva P. Seetharam"
],
"abstract": "A significant portion of the population in India does not have access to reliable electricity. At the same time, there is a rapid penetration of Lithium Ion battery-operated devices such as laptops, both in the developing and developed world. This generates a significant amount of electronic waste (e-waste), especially in the form of discarded Lithium Ion batteries. In this work, we present UrJar, a device which uses re-usable Lithium Ion cells from discarded laptop battery packs to power low energy DC devices. We describe the construction of the device, followed by findings from field deployment studies in India. The participants appreciated the long duration of backup power provided by the device to meet their lighting requirements. Through our work, we show that UrJar has the potential to channel e-waste towards the alleviation of energy poverty, thus simultaneously providing a sustainable solution for both problems. More details of this work are provided in [3].",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "UROBiQEOLP",
"year": null,
"venue": "Submitted to ICLR 2023",
"pdf_link": "/pdf/b4aeb4ae7db4f697c84669a0909e120b0a92b337.pdf",
"forum_link": "https://openreview.net/forum?id=UROBiQEOLP",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E-Forcing: Improving Autoregressive Models by Treating it as an Energy-Based One",
"authors": [
"Yezhen Wang",
"Tong Che",
"Bo Li",
"Kaitao Song",
"Hengzhi Pei",
"Yoshua Bengio",
"Dongsheng Li"
],
"abstract": "Autoregressive generative models are commonly used to solve tasks involving sequential data. They have, however, been plagued by a slew of inherent flaws due to the intrinsic characteristics of chain-style conditional modeling (e.g., exposure bias or lack of long-range coherence), severely limiting their ability to model distributions properly. In this paper, we propose a unique method termed E-Forcing for training autoregressive generative models that takes advantage of a well-designed energy-based learning objective. By leveraging the extra degree of freedom of the softmax operation, we are allowed to make the autoregressive model itself an energy-based model for measuring the likelihood of input without introducing any extra parameters. Furthermore, we show that with the help of E-Forcing, we can alleviate the above flaws for autoregressive models. Extensive empirical results, covering numerous benchmarks demonstrate the effectiveness of the proposed approach.",
"keywords": [
"autoregressive models",
"exposure bias",
"language modeling",
"neural machine translation"
],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ULCd8sCX320",
"year": null,
"venue": "AISTATS 2020",
"pdf_link": "http://proceedings.mlr.press/v108/koskela20b/koskela20b.pdf",
"forum_link": "https://openreview.net/forum?id=ULCd8sCX320",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Computing Tight Differential Privacy Guarantees Using FFT",
"authors": [
"Antti Koskela",
"Joonas Jälkö",
"Antti Honkela"
],
"abstract": "Differentially private (DP) machine learning has recently become popular. The privacy loss of DP algorithms is commonly reported using (ε, δ)-DP. In this paper, we propose a numerical accountant for...",
"keywords": [],
"raw_extracted_content": "Computing Tight Differential Privacy Guarantees Using FFT\nAntti Koskela Joonas Jälkö Antti Honkela\nUniversity of Helsinki Aalto University University of Helsinki\nAbstract\nDifferentially private (DP) machine learning\nhas recently become popular. The privacy\nloss of DP algorithms is commonly reported\nusing (\";\u000e)-DP. In this paper, we propose\na numerical accountant for evaluating the\nprivacy loss for algorithms with continuous\none dimensional output. This accountant\ncan be applied to the subsampled multidi-\nmensional Gaussian mechanism which under-\nlies the popular DP stochastic gradient de-\nscent. The proposed method is based on a\nnumerical approximation of an integral for-\nmula which gives the exact (\";\u000e)-values. The\napproximation is carried out by discretising\nthe integral and by evaluating discrete con-\nvolutions using the fast Fourier transform\nalgorithm. We give both theoretical error\nbounds and numerical error estimates for the\napproximation. Experimental comparisons\nwith state-of-the-art techniques demonstrate\nsignificant improvements in bound tightness\nand/or computation time.\n1 Introduction\nDifferential privacy (DP) (Dwork et al., 2006) has\nclearly been established as the dominant paradigm for\nprivacy-preserving machine learning. Early work on\nDP machine learning focused on single shot pertur-\nbations for convex problems (Chaudhuri et al., 2011),\nwhile contemporary research has focused on iterative\nalgorithms such as DP stochastic gradient descent\n(SGD) (Rajkumar and Agarwal, 2012; Song et al.,\n2013; Abadi et al., 2016b) .\nEvaluating the privacy loss of an iterative algorithm\nis based on the composition theory of DP. The so-\nProceedings of the 23rdInternational Conference on Artifi-\ncial Intelligence and Statistics (AISTATS) 2020, Palermo,\nItaly. PMLR: Volume 108. 
Copyright 2020 by the au-\nthor(s).called advanced composition theorem of (Dwork et al.,\n2010) showed how to trade decreased \"with slightly\nincreased\u000ein(\u000f;\u000e)-DP.Thiswasfurtherimprovede.g.\nby Kairouz et al. (2017). The privacy amplification\nby subsampling (Chaudhuri and Mishra, 2006; Beimel\net al., 2013; Bassily et al., 2014; Wang et al., 2015) is\nanother component that has been studied to improve\nthe privacy bounds.\nA major breakthrough in obtaining tighter compo-\nsition bounds came from using the entire privacy\nloss profile of DP algorithms instead of single (\";\u000e)-\nvalues. This was first introduced by the moments ac-\ncountant (Abadi et al., 2016b). The development of\nRényi differential privacy (RDP) (Mironov, 2017) al-\nlowed tight bounds on the privacy cost of composi-\ntion, and recently proposed amplification theorems for\nRDP(Balleetal.,2018;Wangetal.,2019)showedhow\nsubsampling affects the privacy cost of RDP. Zhu and\nWang (2019) gave tight RDP bounds for the Poisson\nsubsampling method.\nUsing the recently introduced privacy loss distribu-\ntion (PLD) formalism (Sommer et al., 2019), we com-\npute tight (\";\u000e)-DP bounds on the composition of sub-\nsampled Gaussian mechanisms, using discrete Fourier\ntransforms to evaluate the required convolutions. We\nshownumericallythattheachievedprivacyboundsare\ntighter than those obtained by Rényi DP compositions\nand the moments accountant.\nWithin this computational framework, in addition\nto the commonly considered Poisson subsampling\nmethod, we are also able to compute tight privacy\nboundsforthesubsamplingwithreplacementandsub-\nsampling without replacement methods.\n2 Differential Privacy\nWe first recall some basic definitions of differential pri-\nvacy (Dwork and Roth, 2014). We use the following\nnotation. An input dataset containing Ndata points\nis denoted as X= (x1;:::;xN)2XN, wherexi2X,\n1\u0014i\u0014N.\nDefinition 1. 
We say two datasets XandYare\nComputing Tight Differential Privacy Guarantees Using FFT\nneighbours in remove/add relation if you get one by\nremoving/adding an element from/to to other and de-\nnote it with\u0018R. We sayXandYare neighbours in\nsubstitute relation if you get one by substituting one\nelement in the other. We denote this with \u0018S.\nDefinition 2. Let\" >0and\u000e2[0;1]. Let\u0018define\na neighbouring relation. Mechanism M:XN!Ris\n(\";\u000e;\u0018)-DP if for every X\u0018Yand every measurable\nsetE\u001aRit holds that\nPr(M(X)2E)\u0014e\"Pr(M(Y)2E) +\u000e:\nWhen the relation is clear from context or irrelevant,\nwe will abbreviate it as (\";\u000e)-DP. We callMtightly\n(\";\u000e;\u0018)-DP, if there does not exist \u000e0< \u000esuch that\nMis(\";\u000e0;\u0018)-DP.\n3 Privacy loss distribution\nWe first introduce the basic tool for obtaining tight\nprivacy bounds: the privacy loss distribution (PLD).\nThe results in this section can be seen as continuous\nversions of their discrete counterparts given by Meiser\nandMohammadi(2018)andSommeretal.(2019). De-\ntailed proofs are given in Appendix. The results apply\nfor both neighbouring relations \u0018Sand\u0018R. We focus\non mechanisms of the following form.\nDefinition 3. LetM:XN!Rbe a randomised\nmechanism and let X\u0018Y. LetfX(t)denote the den-\nsity function ofM(X)andfY(t)the density function\nofM(Y). AssumefX(t)>0andfY(t)>0for all\nt2R. We define the privacy loss function of fXover\nfYas\nLX=Y(t) = logfX(t)\nfY(t):\nThe following gives the definition of the privacy loss\ndistribution via its density function for differentiable\nprivacy loss functions. We note that the assumptions\nhold especially for the subsampled Gaussian mecha-\nnism which is considered in Sec. 6.\nDefinition 4. SupposeLX=Y :R!D,D\u001aRis\na continuously differentiable bijective function. 
The privacy loss distribution (PLD) of M(X) over M(Y) is defined to be a random variable which has the density function

 ω_{X/Y}(s) = f_X(L^{-1}_{X/Y}(s)) · dL^{-1}_{X/Y}(s)/ds, if s ∈ L_{X/Y}(ℝ); 0, else.

For the discrete valued versions of the following result, see (Sommer et al., 2019, Lemmas 5 and 10).

Lemma 5. Assume (ε, ∞) ⊂ L_{X/Y}(ℝ). M is tightly (ε, δ)-DP for

 δ(ε) = max{ δ_{X/Y}(ε), δ_{Y/X}(ε) }, where

 δ_{X/Y}(ε) = ∫_ε^∞ (1 − e^{ε−s}) ω_{X/Y}(s) ds,

and similarly for δ_{Y/X}(ε).

The PLD formalism is essentially based on Lemma A.2, which states that the mechanism M is tightly (ε, δ)-DP with

 δ(ε) = max_{X∼Y} { ∫_ℝ max{ f_X(t) − e^ε f_Y(t), 0 } dt, ∫_ℝ max{ f_Y(t) − e^ε f_X(t), 0 } dt }.

The integral representation of Lemma 5 is then obtained by change of variables. Denoting s = L_{X/Y}(t), it clearly holds that f_Y(t) = e^{−s} f_X(t) and

 max{ f_X(t) − e^ε f_Y(t), 0 } = (1 − e^{ε−s}) f_X(t), if s > ε; 0, otherwise.

By the change of variables t = L^{-1}_{X/Y}(s), we obtain the representation of Lemma 5.

We get the tight privacy guarantee for compositions from a continuous counterpart of the results given by Sommer et al. (2019, Thm. 1).

Theorem 6. Consider k consecutive applications of a mechanism M. Let ε > 0. The composition is tightly (ε, δ)-DP for δ given by δ(ε) = max{ δ_{X/Y}(ε), δ_{Y/X}(ε) }, where

 δ_{X/Y}(ε) = ∫_ε^∞ (1 − e^{ε−s}) (ω_{X/Y} ∗^k ω_{X/Y})(s) ds,

where ω_{X/Y} ∗^k ω_{X/Y} denotes the k-fold convolution of ω_{X/Y} (a similar formula holds for δ_{Y/X}(ε)).

4 The discrete Fourier transform

The discrete Fourier transform F and its inverse F^{-1} are linear operators ℂ^n → ℂ^n that decompose a complex vector into a Fourier series, or reconstruct it from its Fourier series. Suppose x = (x_0, …, x_{n−1}), w = (w_0, …, w_{n−1}) ∈ ℝ^n.
Antti Koskela, Joonas Jälkö, Antti Honkela

Then, F and F^{-1} are defined as (Stoer and Bulirsch, 2013)

 (F x)_k = Σ_{j=0}^{n−1} x_j e^{−i2πkj/n},
 (F^{-1} w)_k = (1/n) Σ_{j=0}^{n−1} w_j e^{i2πkj/n}.

Evaluating F x and F^{-1} w takes O(n²) operations; however, evaluation via the Fast Fourier Transform (FFT) (Cooley and Tukey, 1965) reduces the computational cost to O(n log n).

The convolution theorem (Stockham Jr, 1966) states that for periodic discrete convolutions it holds that

 Σ_{i=0}^{n−1} v_i w_{k−i} = (F^{-1}(F v ⊙ F w))_k,  (4.1)

where ⊙ denotes the elementwise product of vectors and the summation indices are taken modulo n.

5 Description of the method

We next describe the numerical method for computing tight DP-guarantees for continuous one dimensional distributions.

5.1 Truncation of convolutions

We first approximate the convolutions on a truncated interval [−L, L] as

 (ω ∗ ω)(x) ≈ ∫_{−L}^{L} ω(t) ω(x − t) dt =: (ω ⊛ ω)(x).

To obtain periodic convolutions for the discrete Fourier transform we need to periodise ω. Let ω̃ be a 2L-periodic extension of ω such that ω̃(t + n·2L) = ω(t) for all t ∈ [−L, L) and n ∈ ℤ.
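As an aside, the circular-convolution identity (4.1) is easy to check directly. The NumPy snippet below (our illustration, not part of the paper) compares a brute-force periodic convolution against the FFT route:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
v = rng.standard_normal(n)
w = rng.standard_normal(n)

# Left-hand side of (4.1): periodic convolution with indices taken modulo n.
direct = np.array([sum(v[i] * w[(k - i) % n] for i in range(n))
                   for k in range(n)])

# Right-hand side of (4.1): inverse DFT of the elementwise product of DFTs.
via_fft = np.fft.ifft(np.fft.fft(v) * np.fft.fft(w)).real

assert np.allclose(direct, via_fft)
```

The direct sum costs O(n²), while the FFT route costs O(n log n); this is what makes the k-fold convolutions of Thm. 6 feasible for large n.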
We further approximate

 ∫_{−L}^{L} ω(t) ω(x − t) dt ≈ ∫_{−L}^{L} ω̃(t) ω̃(x − t) dt.  (5.1)

5.2 Discretisation of convolutions

Divide the interval [−L, L] into n equidistant points x_0, …, x_{n−1} such that

 x_i = −L + iΔx, where Δx = 2L/n.

Consider the vectors

 ω = [ω_0, …, ω_{n−1}]^T and ω̃ = [ω̃_0, …, ω̃_{n−1}]^T,

where

 ω_i = ω(−L + iΔx) and ω̃_i = ω̃(iΔx).

Assuming n is even, from the periodicity it follows that

 ω̃ = Dω, where D = [ 0 I_{n/2} ; I_{n/2} 0 ].

We approximate (5.1) using a Riemann sum and the convolution theorem (4.1) as

 (ω̃ ⊛ ω̃)(iΔx) = ∫_{−L}^{L} ω̃(t) ω̃(iΔx − t) dt
  ≈ Δx Σ_{ℓ=0}^{n−1} ω̃_ℓ ω̃_{i−ℓ}  (indices modulo n)
  = Δx (F^{-1}(F(ω̃) ⊙ F(ω̃)))_i.

Discretisation of k-fold truncated convolutions leads to k-fold discrete convolutions and to the approximation

 (ω̃ ⊛^k ω̃)(−L + iΔx) ≈ (Δx)^{k−1} (D F^{-1}(F(ω̃)^{⊙k}))_i = (Δx)^{−1} (D F^{-1}(F(DωΔx)^{⊙k}))_i,

where ⊙k denotes the kth elementwise power of vectors.

5.3 Approximation of the δ(ε)-integral

Finally, using the discretised convolutions we approximate the integral formula for the exact δ-value. Denote the discrete convolution vector

 C^k = (Δx)^{−1} D F^{-1}(F(DωΔx)^{⊙k})

and the starting point of the discrete sum

 ℓ_ε = min{ ℓ ∈ ℤ : −L + ℓΔx > ε }.

Using the vector C^k = [C^k_0 … C^k_{n−1}]^T, we approximate the integral formula given in Thm. 6 as a Riemann sum:

 δ(ε) = ∫_ε^∞ (1 − e^{ε−s}) (ω ∗^k ω)(s) ds ≈ Δx Σ_{ℓ=ℓ_ε}^{n−1} (1 − e^{ε−(−L+ℓΔx)}) C^k_ℓ.  (5.2)

We call this method the Fourier Accountant (FA) and describe it in the pseudocode of Alg. 1. The computational cost of the method is dominated by applying the FFT and its inverse. Thus Alg. 1 has running time complexity O(n log n). We give in Sec. 7 error estimates to determine the parameters L and n such that the error caused by the approximations is below a desired level.

5.4 Computing ε(δ) using Newton's method

In order to get the function ε(δ), we compute the inverse of δ(ε) using Newton's method. From (5.2) it follows that (see Lemma D.1 of Appendix)

 δ′(ε) = −∫_ε^∞ e^{ε−s} (ω ∗^k ω)(s) ds.  (5.3)

Thus, in order to find ε such that δ(ε) = δ̄, we apply Newton's method (Stoer and Bulirsch, 2013) to the function δ(ε) − δ̄, which gives the iteration

 ε_{ℓ+1} = ε_ℓ − (δ(ε_ℓ) − δ̄) / δ′(ε_ℓ).

Evaluating δ′(ε) for different values of ε is cheap using the formula (5.3) and an approximation analogous to (5.2). As is common practice, we use as a stopping criterion |δ(ε_ℓ) − δ̄| ≤ τ for some prescribed tolerance parameter τ. The iteration was found to converge in all experiments with the initial value ε_0 = 0.

Algorithm 1 Fourier Accountant algorithm
 Input: privacy loss distribution ω, number of compositions k, truncation parameter L, number of discretisation points n.
 Evaluate the discrete distribution values ω_i = ω(−L + iΔx), i = 0, …, n−1, Δx = 2L/n.
 Set ω = [ω_0, …, ω_{n−1}]^T.
 Evaluate
  C^k = (Δx)^{−1} D F^{-1}(F(DωΔx)^{⊙k}),
  ℓ_ε = min{ ℓ ∈ ℤ : −L + ℓΔx > ε }.
 Evaluate the approximation
  δ(ε) ≈ Δx Σ_{ℓ=ℓ_ε}^{n−1} (1 − e^{ε−(−L+ℓΔx)}) C^k_ℓ.

5.5 Approximation for varying mechanisms

Our approach also allows computing the privacy cost of a composite mechanism M_1 ∘ … ∘ M_k, where the PLDs of the mechanisms M_i vary.
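As a concrete sketch of Alg. 1 (our illustration, not the authors' reference implementation): the NumPy code below builds the grid, applies the operator D as a half-length roll, evaluates the k-fold convolution via FFT, and computes the Riemann sum (5.2). For a self-contained check it uses, as an assumption for the demo, the PLD of the plain (unsubsampled) Gaussian mechanism with sensitivity 1, which is the normal density N(1/(2σ²), 1/σ²); its k-fold composition equals the PLD of a single Gaussian mechanism with noise σ/√k, whose tight δ(ε) has a known closed form:

```python
import numpy as np
from math import erfc, exp, sqrt

def fourier_accountant(omega, k, eps, L=12.0, n=2**19):
    """Alg. 1: approximate delta(eps) for the k-fold composition of a
    mechanism whose PLD density is omega, on the grid [-L, L) with n points."""
    dx = 2.0 * L / n
    x = -L + dx * np.arange(n)
    w = omega(x) * dx                          # discretised PLD (total mass ~ 1)
    w = np.roll(w, n // 2)                     # apply D: swap halves (periodise)
    ck = np.fft.ifft(np.fft.fft(w) ** k).real  # k-fold discrete convolution
    ck = np.roll(ck, n // 2) / dx              # apply D again, rescale to a density
    mask = x > eps                             # start the sum at l_eps
    return float(dx * np.sum((1.0 - np.exp(eps - x[mask])) * ck[mask]))

sigma, k, eps = 10.0, 50, 1.0
mu = 1.0 / (2.0 * sigma ** 2)                  # Gaussian-mechanism PLD: N(mu, 2*mu)
omega = lambda s: (sigma / np.sqrt(2.0 * np.pi)
                   * np.exp(-0.5 * sigma ** 2 * (s - mu) ** 2))

delta_fa = fourier_accountant(omega, k, eps)

# Closed form: k compositions equal one Gaussian mechanism with sigma/sqrt(k).
Phi = lambda z: 0.5 * erfc(-z / sqrt(2.0))
se = sigma / sqrt(k)
delta_exact = Phi(0.5 / se - eps * se) - exp(eps) * Phi(-0.5 / se - eps * se)

assert abs(delta_fa - delta_exact) < 1e-6
```

For the subsampled Gaussian mechanism one would instead pass the PLD density of Def. 4 built from the closed forms of Secs. 6.1–6.3 as omega; the grid parameters L and n here are illustrative, chosen large enough that the truncation and discretisation errors are negligible.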
This is needed for example when accounting the privacy loss of Stochastic Gradient Langevin Dynamics iterations (Wang et al., 2015), where decreasing the step size increases σ.

In this case the function δ(ε) is given by Thm. A.7 of Appendix by an integral formula of the form

 δ(ε) = ∫_ε^∞ (1 − e^{ε−s}) (ω_1 ∗ … ∗ ω_k)(s) ds,

where the ω_i's are the PLD distributions determined by the mechanisms M_i, 1 ≤ i ≤ k.

Denoting C = (Δx)^{−1} D F^{-1}(F_1 ⊙ … ⊙ F_k), where F_i = F(Dω_iΔx) and the vectors ω_i are obtained from discretisations of the densities ω_i (as in Sec. 5.2), δ(ε) can then be approximated as in (5.2).

6 Subsampled Gaussian mechanism

The main motivation for this work comes from privacy accounting of the subsampled Gaussian mechanism, which gives privacy bounds for DP-SGD (Abadi et al., 2016b). In Appendix, we show that the worst case privacy analysis of DP-SGD can be carried out by analysis of one dimensional probability distributions. We derive the privacy loss distributions for three different subsampling methods: Poisson subsampling with both ∼_R- and ∼_S-neighbouring relations, sampling without replacement with the ∼_S-neighbouring relation, and sampling with replacement with the ∼_S-neighbouring relation. We note the following related works. Balle et al. (2018) consider RDP bounds for these three subsampling methods, Wang et al. (2019) give improved RDP bounds for the case of sampling without replacement, and Zhu and Wang (2019) give tight RDP bounds for the case of Poisson subsampling.

6.1 Poisson subsampling for (ε, δ, ∼_R)-DP

We start with the Poisson subsampling method, where each member of the dataset is included in the stochastic gradient minibatch with probability q. This method is also used in the moments accountant (Abadi et al., 2016b), and also considered by Meiser and Mohammadi (2018) and Wang et al. (2019).
As we show in Appendix, the (ε, δ, ∼_R)-DP analysis of the Poisson subsampling is equivalent to considering the following one dimensional distributions:

 f_X(t) = q (1/√(2πσ²)) e^{−(t−1)²/(2σ²)} + (1 − q)(1/√(2πσ²)) e^{−t²/(2σ²)},
 f_Y(t) = (1/√(2πσ²)) e^{−t²/(2σ²)}.

Here σ² denotes the variance of the additive Gaussian noise. Using Definition 3, the privacy loss function is given by

 L_{X/Y}(t) = log( (q e^{−(t−1)²/(2σ²)} + (1 − q) e^{−t²/(2σ²)}) / e^{−t²/(2σ²)} ) = log( q e^{(2t−1)/(2σ²)} + (1 − q) ).

Now L_{X/Y}(ℝ) = (log(1 − q), ∞) and L_{X/Y} is a strictly increasing continuously differentiable bijective function on the whole of ℝ. A straightforward calculation shows that

 L^{-1}_{X/Y}(s) = σ² log( (e^s − (1 − q))/q ) + 1/2.

Moreover,

 d/ds L^{-1}_{X/Y}(s) = σ² e^s / (e^s − (1 − q)).

The privacy loss distribution ω_{X/Y} is determined by the density function given in Def. 4. Lemma A.9 and its corollary explain the observation that generally δ_{X/Y} > δ_{Y/X}.

6.2 Sampling without replacement for (ε, δ, ∼_S)-DP

We next consider the ∼_S-neighbouring relation and sampling without replacement. In this case the batch size m is fixed and each member of the dataset contributes at most once to each minibatch. Here q = m/n, where n denotes the total number of data samples.
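The closed forms of Sec. 6.1 are easy to sanity-check numerically. The snippet below (our illustration; the parameter values σ = 2, q = 0.01 are arbitrary) verifies the round trip L^{-1}_{X/Y}(L_{X/Y}(t)) = t and that the density of Def. 4 integrates to one over L_{X/Y}(ℝ) = (log(1 − q), ∞):

```python
import numpy as np

sigma, q = 2.0, 0.01

# Privacy loss function of Sec. 6.1, its closed-form inverse, and the
# derivative of the inverse.
Lxy   = lambda t: np.log(q * np.exp((2.0 * t - 1.0) / (2.0 * sigma**2)) + (1.0 - q))
Linv  = lambda s: sigma**2 * np.log((np.exp(s) - (1.0 - q)) / q) + 0.5
dLinv = lambda s: sigma**2 * np.exp(s) / (np.exp(s) - (1.0 - q))

# Mixture density f_X and the PLD density of Def. 4.
fX    = lambda t: (q * np.exp(-(t - 1.0)**2 / (2.0 * sigma**2))
                   + (1.0 - q) * np.exp(-t**2 / (2.0 * sigma**2))) / np.sqrt(2.0 * np.pi * sigma**2)
omega = lambda s: fX(Linv(s)) * dLinv(s)

t = np.linspace(-30.0, 30.0, 7)
assert np.allclose(Linv(Lxy(t)), t)        # round trip recovers t

# omega should be a probability density on (log(1-q), inf); truncating the
# upper limit at s = 2 loses only a numerically negligible tail here.
s = np.linspace(np.log(1.0 - q) + 1e-9, 2.0, 2_000_001)
y = omega(s)
mass = float(np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2.0)   # trapezoid rule
assert abs(mass - 1.0) < 1e-4
```

The same pattern — a closed-form inverse plus Def. 4 — is what supplies the grid values of ω consumed by Alg. 1.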
Without loss of generality we consider here the density functions

 f_X(t) = q (1/√(2πσ²)) e^{−(t−1)²/(2σ²)} + (1 − q)(1/√(2πσ²)) e^{−t²/(2σ²)},
 f_Y(t) = q (1/√(2πσ²)) e^{−(t+1)²/(2σ²)} + (1 − q)(1/√(2πσ²)) e^{−t²/(2σ²)}.

The privacy loss function is now given by

 L_{X/Y}(t) = log( (q e^{(2t−1)/(2σ²)} + (1 − q)) / (q e^{(−2t−1)/(2σ²)} + (1 − q)) ).

We see that L_{X/Y}(ℝ) = ℝ and again that L_{X/Y} is a strictly increasing continuously differentiable function. With a straightforward calculation we find that

 L^{-1}_{X/Y}(s) = σ² log( (1/(2c)) ( −(1 − q)(1 − e^s) + √( (1 − q)²(1 − e^s)² + 4c² e^s ) ) ),

where c = q e^{−1/(2σ²)}.

Using Lemma A.9 and the property f_Y(−t) = f_X(t), we see that δ = δ_{Y/X} = δ_{X/Y}.

We remark that in (ε, δ, ∼_S)-DP, the Poisson subsampling with the sampling parameter γ is equivalent to the case of the sampling without replacement with q = γ, as in both cases the differing element is included in the minibatch with probability γ.

6.3 Sampling with replacement

Consider next the sampling with replacement and the ∼_S-neighbouring relation. Again the batch size is fixed, however this time each element of the minibatch is drawn from the dataset with probability q. Thus the number of contributions of each member of the dataset is not limited.
Then ℓ, the number of times the differing sample x′ is in the batch, is binomially distributed, i.e., ℓ ∼ Binomial(1/n, m), where m denotes the batch size and n the total number of data samples.

Without loss of generality, we consider here the density functions

 f_X(t) = (1/√(2πσ²)) Σ_{ℓ=0}^{m} (1/n)^ℓ (1 − 1/n)^{m−ℓ} (m choose ℓ) e^{−(t−ℓ)²/(2σ²)},
 f_Y(t) = (1/√(2πσ²)) Σ_{ℓ=0}^{m} (1/n)^ℓ (1 − 1/n)^{m−ℓ} (m choose ℓ) e^{−(t+ℓ)²/(2σ²)}.

The privacy loss function is then given by

 L_{X/Y}(t) = log( Σ_{ℓ=0}^{m} c_ℓ x^ℓ / Σ_{ℓ=0}^{m} c_ℓ x^{−ℓ} ), where

 c_ℓ = (1/n)^ℓ (1 − 1/n)^{m−ℓ} (m choose ℓ) e^{−ℓ²/(2σ²)}, x = e^{t/σ²}.

Since c_ℓ > 0 for all ℓ = 1, …, m, clearly Σ_{ℓ=0}^{m} c_ℓ x^ℓ is strictly increasing as a function of t and Σ_{ℓ=0}^{m} c_ℓ x^{−ℓ} is strictly decreasing. Moreover, we see that

 Σ_{ℓ=0}^{m} c_ℓ x^ℓ / Σ_{ℓ=0}^{m} c_ℓ x^{−ℓ} → 0 as t → −∞ and Σ_{ℓ=0}^{m} c_ℓ x^ℓ / Σ_{ℓ=0}^{m} c_ℓ x^{−ℓ} → ∞ as t → ∞.

Thus, L_{X/Y}(ℝ) = ℝ and L_{X/Y}(t) is a strictly increasing continuously differentiable function on its domain. To find L^{-1}_{X/Y}(s) one needs to solve L_{X/Y}(t) = s, i.e., one needs to find the single positive real root of a polynomial of order 2m. As in the case of subsampling without replacement, here δ = δ_{Y/X} = δ_{X/Y}.

7 Error estimates

We give error estimates for the Poisson subsampling method with the neighbouring relation ∼_R. Thus, in this section ω denotes the PLD density function defined in Sec. 6.1. The estimates are determined by the parameters L and n, the truncation interval radius and the number of discretisation points, respectively.

The total error consists of (see Thm. C.1 in Appendix)

 1. The errors arising from the truncation of the convolution integrals and periodisation.
 2. The error from neglecting the tail integral
  ∫_L^∞ (1 − e^{ε−s}) (ω ∗^k ω)(s) ds.  (7.1)
 3. The numerical errors in the approximation of the convolutions (ω ∗^k ω) and in the Riemann sum approximation (5.2).

We obtain bounds for the first two sources of error, i.e., for the tail integral (7.1) and the periodisation error, using the Chernoff bound (Wainwright, 2019)

 P[X ≥ t] = P[e^{λX} ≥ e^{λt}] ≤ E[e^{λX}] / e^{λt},

which holds for any random variable X and all λ > 0. Denoting also the PLD random variable by ω, the moment generating function E[e^{λω}] is related to the log of the moment generating function of the privacy loss function L = L_{X/Y} as follows. Define (see also (Abadi et al., 2016b))

 α(λ) := log E_{t∼f_X(t)}[ e^{λL(t)} ].

By the change of variable s = L(t) we have

 E[e^{λω}] = ∫_{−∞}^{∞} e^{λs} ω(s) ds = ∫_{log(1−q)}^{∞} e^{λs} f_X(L^{-1}(s)) (dL^{-1}(s)/ds) ds = ∫_{−∞}^{∞} e^{λL(t)} f_X(t) dt = e^{α(λ)}.  (7.2)

Using existing bounds for α(λ) given by Abadi et al. (2016b) and Mironov et al. (2019), we bound E[e^{λω}] and obtain the required tail bounds.

7.1 Periodisation and truncation of convolutions

We have the following bound for the error arising from the periodisation and the truncation of the convolution integrals. The proof is given in Appendix, Lemma C.6.

Lemma 7. Let 0 < q < 1/2. Let ω be defined as in Sec. 6.1, and let L ≥ 1. Then, for all x ∈ ℝ,

 | ∫_ε^L (ω ∗^k ω − ω̃ ⊛^k ω̃)(x) dx | ≤ Lkσ e^{−(σ²L+C)²/(2σ²)} + e^{α(L/2)} e^{−L²/2} + 2 Σ_{n=1}^{∞} e^{kα(nL)} e^{−2(nL)²},

where C = σ² log(1/(2q)) − 1/2.

For example, setting σ and q as in the example of Figure 2, and k = 2·10⁴, the first term is O(10^{−16}) already for L = 4.0. The second term dominates the rest of the bound of Lemma 7 and it is much smaller than the tail bound (7.3) (e^{α(L/2)} vs. e^{kα(L/2)}).
Therefore, this error is much smaller than the estimates for the tail integral (7.1) and it is neglected in the estimates.

7.2 Convolution tail bound

Let ω denote the PLD density function. Now, the tail of the integral representation for δ (Thm. 6), with L > ε, can be bounded as

 ∫_L^∞ (1 − e^{ε−s}) (ω ∗^k ω)(s) ds < ∫_L^∞ (ω ∗^k ω)(s) ds.

We consider both upper bounds and estimates for the tail integral of convolutions.

7.2.1 Analytic tail bound

Using the Chernoff bound we derive an analytic bound for the tail integral of convolutions. In a certain sense this is equivalent to finding bounds for the RDP parameters, since an RDP bound gives a bound also for the moment generating function E[e^{λω}] needed in the Chernoff bound. The following result is derived from recent RDP results (Mironov et al., 2019). The proof and an illustration of the result are given in Appendix.

Theorem 8. Suppose q ≤ 1/5 and σ ≥ 4. Let L be chosen such that λ = L/2 satisfies

 1 < λ ≤ (1/2)σ²c − 2 log σ,
 λ ≤ ( (1/2)σ²c − log 5 − 2 log σ ) / ( c + log(qλ) + 1/(2σ²) ),

where c = log(1 + 1/(q(λ − 1))). Then, we have

 ∫_L^∞ (ω ∗^k ω)(s) ds ≤ ( 1 + 2q²(L/2 + 1)(L/2)/σ² )^k e^{−L²/2}.

In order to avoid the restriction on σ in Thm. 8, we consider an approximative bound.

7.2.2 Tail bound estimate

We next derive an approximative tail bound using the α(λ)-bound given by Abadi et al. (2016b). Denote S_k := Σ_{i=1}^{k} ω_i, where ω_i denotes the PLD random variable of the ith mechanism. Since the ω_i's are independent, E[e^{λS_k}] = Π_{i=1}^{k} E[e^{λω_i}], and the Chernoff bound shows that

 ∫_L^∞ (ω ∗^k ω)(s) ds = P[S_k ≥ L] ≤ e^{kα(λ)} e^{−λL}

for any λ > 0. We recall the result by Abadi et al. (2016b, Lemma 3), which holds for the Poisson subsampling method.

Lemma 9.
Let σ ≥ 1 and q < 1/(16σ). Then for any positive integer λ ≤ σ² ln(1/(qσ)),

 α(λ) ≤ q²λ(λ + 1)/((1 − q)σ²) + O(q³λ³/σ³).

Suppose the conditions of Lemma 9 hold for λ = L/2. Substituting the bound of Lemma 9 into the Chernoff bound and neglecting the O(q³λ³/σ³)-term gives the approximative upper bound

 ∫_L^∞ (ω ∗^k ω)(s) ds ≲ exp( kq²(L/2 + 1)(L/2)/((1 − q)σ²) ) e^{−L²/2}.  (7.3)

For example, when q = 0.01 and σ = 2.0, the conditions of Lemma 9 hold for λ up to ≈ 9.5 (i.e., (7.3) holds for L up to ≈ 19). Figure 2 of Appendix shows the convergence of the bound (7.3) in this case.

7.3 Discretisation errors

Derivation of discretisation error bounds can be carried out using the so-called Euler–Maclaurin formula (Sec. C.3 in Appendix). This requires bounds for higher order derivatives of ω. As an illustrating example, consider the bound (recall Δx = 2L/n)

 | ∫_{−L}^{L} ω(s) ds − Δx Σ_{ℓ=0}^{n−1} ω(−L + ℓΔx) | ≤ Δx ω(L) + ((Δx)²/12) max_{t∈[−L,L]} |ω″(t)| ≤ Δx σ e^{−(σ²L+C)²/(2σ²)} + ((Δx)²/12) max_{t∈[−L,L]} |ω″(t)|,

where C = σ² log(1/(2q)) − 1/2. By Lemma D.4, max_t |ω″(t)| has an upper bound O(σ³/q³). With bounds for higher order derivatives, tighter error bounds could be obtained. In a similar fashion, bounds for the errors of the approximation (5.2) could be derived. However, we resort to numerical estimates.

7.3.1 Estimate for the discretisation error

Consider the error arising from the Riemann sum

 I_n := Δx Σ_{ℓ=ℓ_ε}^{n−1} (1 − e^{ε−(−L+ℓΔx)}) C^k_ℓ.

As we show in Sec. C.3 of Appendix, it holds that

 E_n := ∫_ε^L (1 − e^{ε−s}) (ω̃ ⊛^k ω̃)(s) ds − I_n = KΔx + O((Δx)²) = K(2L/n) + O((2L/n)²)

for some constant K independent of n.
Therefore,

 2(I_n − I_{2n}) = E_n + O((Δx)²),

which leads us to use

 err(L, n) := 2|I_n − I_{2n}|  (7.4)

as an estimate for the numerical error E_n.

8 Experiments

In all experiments, we consider the Poisson subsampling with (ε, δ, ∼_R)-DP (Sec. 6.1).

We first illustrate the numerical convergence of FA for δ(ε) and the estimates (7.3) and (7.4), when k = 10⁴, q = 0.01, σ = 1.5 and ε = 1.0 (Tables 1 and 2). We emphasise that the error estimates (7.3) and (7.4) represent the distance to the tight δ(ε)-value. Full numerical tables are given in Appendix, Sec. D.2.

Table 1: Convergence of the δ(ε)-approximation with respect to n (when L = 12) and the estimate (7.4). The tail bound estimate (7.3) is O(10^{−24}).

 n        FA               err(L, n)
 5·10⁴    0.0491228786423  2.01·10^{−2}
 2·10⁵    0.0496013846114  1.06·10^{−6}
 8·10⁵    0.0496014103252  2.66·10^{−11}
 3.2·10⁶  0.0496014103163  2.22·10^{−12}

Table 2: Convergence of the δ(ε)-approximation with respect to L (when n = 3.2·10⁶) and the error estimate (7.3). The estimate err(L, n) = O(10^{−12}).

 L     FA               estimate (7.3)
 2.0   0.0422160172923  3.32·10^{−1}
 6.0   0.0496014103158  3.32·10^{−6}
 10.0  0.0496014103134  1.36·10^{−16}
 12.0  0.0496014103163  8.30·10^{−24}

We first compare the Fourier accountant method to the privacy accountant method included in the Tensorflow library (Abadi et al., 2016a), which is the moments accountant method (Abadi et al., 2016b) (Figure 1). We use q = 0.01 and σ ∈ {1.0, 2.0, 3.0}, for numbers of compositions k up to 10⁴. We set the parameters L = 12 and n = 5·10⁶ for the approximation of the exact integral. Then, for σ = 1.0, the tail integral error estimate (7.3) is at most O(10^{−13}) and the estimate err(L, n) is at most O(10^{−10}). For σ = 2.0, 3.0 the error estimates are smaller.

We next compare FA to the RDP accountant method described by Zhu and Wang (2019) (Figure 2). Although the RDP accountant gives tight RDP bounds, there is a small gap to the tight (ε, δ, ∼_R)-DP.

As we see from Figures 1b and 2, the moments accountant and the RDP bound (Zhu and Wang, 2019) do not capture the true ε-bound for small numbers of compositions k, whereas FA gives tight bounds.

Figure 3 shows a comparison of FA to the Berry–Esseen theorem based bound given by Sommer et al. (2019, Thm. 6). The Berry–Esseen bound suffers from the error term which converges as O(k^{−1/2}).

Lastly, we compare FA to the Privacy Buckets (PB) algorithm (Meiser and Mohammadi, 2018) (Figure 4). The additional ratio parameter of PB was tuned for the experiments. The algorithm seems to suffer from some instabilities, which is also mentioned by Meiser and Mohammadi (2018). For larger σ and smaller q, PB gave bounds closer to those of FA; however, the compute times were always much bigger, as in the experiments of Figure 4.

[Figure 1: Comparison of the Tensorflow moments accountant (TF MA) and the Fourier accountant (FA) for σ ∈ {1.0, 2.0, 3.0}. Here q = 0.01. (a) δ(ε) as a function of k for ε = 1.0. (b) ε(δ) as a function of k for δ = 10^{−6}.]

[Figure 2: Comparison of the RDP bound for the Poisson subsampling (Zhu and Wang, 2019) and FA, for σ ∈ {1.0, 3.0, 5.0}. Here δ = 10^{−6}, q = 0.01.]

[Figure 3: Comparison of the Berry–Esseen bound and FA for (ε, δ, ∼_R)-DP, for σ ∈ {2.0, 4.0, 6.0}. Here k = 5·10⁴, q = 0.01.]

[Figure 4: Comparison of the Privacy Buckets algorithm (nr_B = number of buckets) and FA; the legend contains compute times (nr_B = 10⁴: 6.9 s; 2.5·10⁴: 19.2 s; 5·10⁴: 46.0 s; 10⁵: 138.6 s; 2·10⁵: 487.8 s; FA: 0.37 s). Here k = 2¹², σ = 1.0, q = 0.02.]

9 Conclusions

We have presented a novel approach for computing tight privacy bounds for DP. Although we have focused on the subsampled Gaussian mechanism (with various subsampling strategies), our method is applicable also to other mechanisms. We remark that the assumptions of Def. 4 would not hold for example for the Laplace mechanism: then the PLD distribution becomes a discrete/continuous mixture distribution. However, using Lemma A.2 the integral formula of Thm. 6 can be shown to hold also in this case and the FA algorithm can also be applied to this case. This is left for future work. As future work, it would also be interesting to carry out a full error analysis for the discretisation error. Moreover, evaluating the privacy parameters for compositions involving both continuous and discrete valued mechanisms is an interesting objective.

References

Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al. (2016a). Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283.

Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., and Zhang, L. (2016b). Deep learning with differential privacy. In Proc. CCS 2016.

Balle, B., Barthe, G., and Gaboardi, M. (2018). Privacy amplification by subsampling: Tight analyses via couplings and divergences. In Advances in Neural Information Processing Systems, pages 6277–6287.

Bassily, R., Smith, A., and Thakurta, A. (2014). Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, FOCS '14, pages 464–473, Washington, DC, USA. IEEE Computer Society.

Beimel, A., Nissim, K., and Stemmer, U. (2013). Characterizing the sample complexity of private learners. In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, ITCS '13, pages 97–110, New York, NY, USA. ACM.

Chaudhuri, K. and Mishra, N. (2006). When random sampling preserves privacy. In Dwork, C., editor, Advances in Cryptology - CRYPTO 2006, pages 198–213, Berlin, Heidelberg. Springer Berlin Heidelberg.

Chaudhuri, K., Monteleoni, C., and Sarwate, A. D. (2011). Differentially private empirical risk minimization. J. Mach. Learn. Res., 12:1069–1109.

Cooley, J. W. and Tukey, J. W. (1965). An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301.

Dwork, C., McSherry, F., Nissim, K., and Smith, A. (2006). Calibrating noise to sensitivity in private data analysis. In Proc. TCC 2006, pages 265–284. Springer Berlin Heidelberg.

Dwork, C. and Roth, A. (2014). The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci., 9(3–4):211–407.

Dwork, C., Rothblum, G. N., and Vadhan, S. (2010). Boosting and differential privacy. In Proceedings of the 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, FOCS '10, pages 51–60, Washington, DC, USA. IEEE Computer Society.

Kairouz, P., Oh, S., and Viswanath, P. (2017). The composition theorem for differential privacy. IEEE Transactions on Information Theory, 63(6):4037–4049.

Meiser, S. and Mohammadi, E. (2018). Tight on budget?: Tight bounds for r-fold approximate differential privacy. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pages 247–264. ACM.

Mironov, I. (2017). Rényi differential privacy. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pages 263–275.

Mironov, I., Talwar, K., and Zhang, L. (2019). Rényi differential privacy of the sampled Gaussian mechanism. arXiv preprint arXiv:1908.10530.

Rajkumar, A. and Agarwal, S. (2012). A differentially private stochastic gradient descent algorithm for multiparty classification. In Proc. AISTATS 2012, pages 933–941.

Sommer, D. M., Meiser, S., and Mohammadi, E. (2019). Privacy loss classes: The central limit theorem in differential privacy. Proceedings on Privacy Enhancing Technologies, 2019(2):245–269.

Song, S., Chaudhuri, K., and Sarwate, A. D. (2013). Stochastic gradient descent with differentially private updates. In Proc. GlobalSIP 2013, pages 245–248.

Stockham Jr, T. G. (1966). High-speed convolution and correlation. In Proceedings of the April 26-28, 1966, Spring Joint Computer Conference, pages 229–233. ACM.

Stoer, J. and Bulirsch, R. (2013). Introduction to Numerical Analysis, volume 12. Springer Science & Business Media.

Wainwright, M. J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press.

Wang, Y., Fienberg, S. E., and Smola, A. J. (2015). Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. In Proc. ICML 2015, pages 2493–2502.

Wang, Y.-X., Balle, B., and Kasiviswanathan, S. (2019). Subsampled Rényi differential privacy and analytical moments accountant. In Proc. AISTATS 2019.

Zhu, Y. and Wang, Y.-X. (2019). Poisson subsampled Rényi differential privacy. In International Conference on Machine Learning, pages 7634–7642.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "3ZdFb9Ycgtf",
"year": null,
"venue": "SmartCom 2016",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=3ZdFb9Ycgtf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E^3: Efficient Error Estimation for Fingerprint-Based Indoor Localization System",
"authors": [
"Chengwen Luo",
"Jian-qiang Li",
"Zhong Ming"
],
"abstract": "Wireless indoor localization has attracted extensive research recently due to its potential for large-scale deployment. However, the performance of different systems varies and it is difficult to compare these systems systematically in different indoor scenarios. In this work, we propose $$E^3$$, a Gaussian process based error estimation approach for fingerprint-based wireless indoor localization systems. With an efficient error estimation algorithm, $$E^3$$ is able to efficiently estimate the localization errors of the localization systems without requiring expensive site evaluations. Our evaluation results show that the proposed approach efficiently estimates the performance of fingerprint-based indoor localization systems and can be used as an efficient tool to tune system parameters.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "WMu6vtNOUBq",
"year": null,
"venue": "HaCaT@EACL 2014",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=WMu6vtNOUBq",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Proofreading Human Translations with an E-pen",
"authors": [
"Vicent Alabau",
"Luis A. Leiva"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Gaq4GRLpFZ",
"year": null,
"venue": "MMM (2) 2016",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Gaq4GRLpFZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E^2SGM: Event Enrichment and Summarization by Graph Model",
"authors": [
"Xueliang Liu",
"Feifei Wang",
"Benoit Huet",
"Feng Wang"
],
"abstract": "In recent years, organizing social media by social event has drawn increasing attention with the increasing amounts of rich-media content captured during an event. In this paper, we address the social event enrichment and summarization problem and propose a demonstration system $$E^2SGM$$ to summarize the event with relevant media selected from a large-scale user-contributed media dataset. In the proposed method, the relevant candidate media items are first retrieved by a coarse search method. Then, a graph ranking algorithm is proposed to rank media items according to their relevance to the given event. Finally, the media items with high ranking scores are arranged in a chronologically ordered layout and the textual metadata are extracted to generate a tag cloud. The work culminates in an intuitive event summarization interface that helps users grasp the essence of the event.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "niKFxZDQUC9",
"year": null,
"venue": "ECAI 2016",
"pdf_link": "https://ebooks.iospress.nl/pdf/doi/10.3233/978-1-61499-672-9-1035",
"forum_link": "https://openreview.net/forum?id=niKFxZDQUC9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Online Auctions for Dynamic Assignment: Theory and Empirical Evaluation",
"authors": [
"Sujit Gujar",
"Boi Faltings"
],
"abstract": "Dynamic resource assignment is a common problem in multi-agent systems. We consider scenarios in which dynamic agents have preferences about assignments and the resources that can be assigned using online auctions. We study the trade-off between the following online auction properties: (i) truthfulness, (ii) expressiveness, (iii) efficiency, and (iv) average case performance. We theoretically and empirically compare four different online auctions: (i) Arrival Priority Serial Dictatorship, (ii) Split Dynamic VCG, (iii) eAuction, and (iv) Online Ranked Competition Auction. The latter is a novel design based on the competitive secretary problem. We show that, in addition to truthfulness and algorithmic efficiency, the degree of competition also plays an important role in selecting the best algorithm for a given context.",
"keywords": [],
"raw_extracted_content": "Online Auctions for Dynamic Assignment: Theory and Empirical Evaluation\nSujit Gujar [1] and Boi Faltings [2]\nAbstract. Dynamic resource assignment is a common problem in multi-agent systems. We consider scenarios in which dynamic agents have preferences about assignments and the resources that can be assigned using online auctions. We study the trade-off between the following online auction properties: (i) truthfulness, (ii) expressiveness, (iii) efficiency, and (iv) average case performance. We theoretically and empirically compare four different online auctions: (i) Arrival Priority Serial Dictatorship, (ii) Split Dynamic VCG, (iii) eAuction, and (iv) Online Ranked Competition Auction. The latter is a novel design based on the competitive secretary problem. We show that, in addition to truthfulness and algorithmic efficiency, the degree of competition also plays an important role in selecting the best algorithm for a given context.\n1 Introduction\nConsider a scenario in which an Uber driver prefers customers who want to travel in a particular direction, e.g., the driver carries customers in a shared ride and hence prefers new passengers who have destinations close to those of the passengers already in the vehicle. In such situations, the driver might be willing to pay Uber a small amount (over the standard amount that Uber charges drivers for a fare) to carry a preferable customer. In expert crowdsourcing task assignment, expert agents have preferences about which tasks they would like to work on, and they may be willing to pay the platform a premium for obtaining preferable tasks [10].\nAs yet another example, consider a hotel booking platform. A hotel ranked lower down on the platform might be interested in being listed higher for a certain class of travellers with whom the hotel believes it has a higher chance of obtaining a booking. 
The hotel may be willing to pay a small fee to the platform to achieve this.\nMotivated by such real-world examples, we consider dynamic assignment for crowds where the dynamic agents (Uber's drivers, the experts in the expert crowdsourcing example or the hotel owners on the hotel booking website) have preferences for different available resources (new Uber passengers, the tasks in the expert crowdsourcing example or the travellers on the hotel booking website) and assign certain valuations to matches (which can be zero, if an agent has no preference). Additionally, each agent has a deadline after which he no longer has use for the resource, known as his departure time. A platform's goal (whether Uber, an expert crowdsourcing platform or a hotel booking website) is to improve the quality of resource assignment, which is also termed social welfare or efficiency - the sum of agents' valuations for their assignments. To achieve this, the platform needs agents to report their valuations truthfully. This property is known as incentive compatibility [18]. Additional challenges are that agents are dynamic and that assignments must happen online, i.e., they must happen before the agents leave the system. Strategic agents may attempt to manipulate assignment mechanisms if it is beneficial to them. Moreover, strategic agents may manipulate their arrival-departure if it is part of their private information. There is thus a need to design appropriate game theoretic mechanisms to induce truthful reporting of private information.\n[1: IIIT Hyderabad, [email protected]. Most of the work in this paper was carried out when the first author was a post-doctoral researcher at the EPFL.]\n[2: EPFL, [email protected]]\nDynamic Mechanism Design. Mechanism design theory [18] is useful for designing procedures to elicit agents' valuations of the resources they are matched with. 
Gujar and Faltings [10] proposed that agents should pay the platform a premium to obtain their preferred matches. When such monetary transfers are feasible, one can design dynamic mechanisms for assignments in such modern marketplaces.\nIn the literature, there are two well-studied approaches for designing dynamic mechanisms. The first approach, based on stochastic models of agents' private information, uses dynamic programming, Markov Decision Processes, etc.; e.g., the mechanisms proposed in [1, 4]. Adapting these dynamic mechanisms for crowds is challenging as it needs precise information about the probability distributions of agents' valuations of their assignments, and this may not be available in a new marketplace. Furthermore, agents may have little knowledge of mechanism design theory, and understanding such complex, dynamic mechanisms may be demanding.\nThe second approach does not assume any probability distribution for agents' valuations. These model-free dynamic mechanisms are called online auctions. The theory developed to solve the classic secretary problem is useful for designing online auctions (Babaioff et al. [2]). Mechanisms based on the secretary problem are often easy to understand and implement. [3] The present paper focuses on this second approach to online auctions for resource assignment.\nA hypothetical optimal algorithm that has knowledge of all the agents' valuations from the beginning is called offline-optimal. Online auction performance is evaluated using the competitive ratio (CR) metric, which indicates how far a given algorithm's solution is from the offline-optimal solution in the worst case. In real life, a worst case may not occur frequently. 
For repeated usage of an auction, the platform may prefer an online auction that performs well on average, rather than one that only performs well in a worst case.\nThe Problem. This paper's goal is to determine which online auction mechanisms have the following desirable characteristics: (i) truthfulness, (ii) preference expressiveness (richness of preference elicitation), (iii) efficiency (social welfare) and, most importantly, (iv) good performance on average.\n[3: Online auctions may be designed using completely different approaches from the secretary problem.]\n[ECAI 2016, G.A. Kaminka et al. (Eds.). © 2016 The Authors and IOS Press. This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0). doi:10.3233/978-1-61499-672-9-1035]\nOur Approach. Typically, resources compete for the best possible assignment (agents). This is analogous to the competitive secretary problem [13, 15]. However, these two papers did not address agents' strategic behaviours. We adapt the techniques developed by Karlin et al. [15] to design a new, truthful, online auction called an online ranked competition auction (ORCA).\nWe hypothesize that online auctions optimized for worst case guarantees only perform well on average if there is a high level of competition, i.e., the degree of competition between agents for the resources affects the auction's average performance. We analyse auctions empirically by generating a large number of instances of the resource assignment problem for different stochastic models. To evaluate the performance of a given online auction, we introduce three metrics: (i) the Empirical Competitive Ratio (ECR), (ii) the Sample Average Competitive Ratio (SACR), and (iii) the Empirical Expected Efficiency (EEE). A given online auction's ECR is the worst performance observed among the instances generated. 
SACR measures how far, on average, a given online auction's solution is from an offline-optimal solution. EEE measures the average fraction of the expected valuations of all the agents in an offline-optimal solution that can be achieved by a given online auction.\nWe study the resource assignment problem using the following online auctions: Arrival Priority Serial Dictatorship (APSD), proposed by Zou et al. [22], the Split Dynamic VCG (SDV) [10], eAuction [10] and ORCA, proposed by the present paper.\nContributions. We explore the application of truthful online auctions for resource assignment. We also propose a new, truthful, online ranked competition auction (ORCA). We look for theoretical guarantees and average case performance. This paper's main contribution is its evaluation of online auctions for trade-offs between: (i) truthfulness, (ii) expressiveness, (iii) efficiency, and (iv) average case performance. We demonstrate empirically that the auctions designed for better worst-case guarantees often only perform well if there is a high degree of competition between agents. In less competitive settings, simpler auctions perform better.\nOur empirical study considers APSD, SDV, eAuction and ORCA. Analysis validates our hypothesis, i.e., when compared with APSD and SDV, eAuction and ORCA only performed well in highly competitive settings (Figure 2 and Figure 3). ORCA also performs better when the agent arrival rate is lower, with moderate competition between agents (Figure 7). For all four online auctions, the empirical worst cases are not as bad as indicated by the corresponding theoretical bounds (Figure 1, Figure 5), and worst cases are infrequent. We provide guidelines on how to select the most appropriate online auction mechanism for a platform's conditions.\nOrganisation. In the next subsection we describe work related to ours. Section 2 explains the notation used in this paper and the secretary problem. 
Section 3 describes the online auctions studied. Section 4 formally defines the ECR, SACR and EEE, describes the experiments and analyses the empirical evaluation. Section 5 concludes the paper.\n1.1 Related Work\nMechanism design theory is a rich field. Nisan et al. [18] and the references cited therein provide pointers to it. Dynamic mechanism design has been addressed with regard to auction design when the prior distributions of agents' arrival-departure and valuations are known [1, 4, 20]. However, we focus on a model-free design for online auctions.\nAlthough the literature on online algorithms does not address agents' strategic behaviour, the techniques it has developed are very useful for designing online auctions. The notion of an online algorithm was popularised in a seminal paper by Karp et al. [16]. For more details on online algorithms, readers are referred to [5, 8]. The classic secretary problem has been well studied in the literature [7, 13, 15, 17]. Solutions to it have also been used in the design of online auctions, e.g., [2, 3].\nThere is abundant literature on the task assignment problem in crowdsourcing [6, 11, 12, 14, 21]. However, there has not been much research on the use of online algorithms/auctions for task (resource) assignment, with exceptions being [9, 10]. The present paper addresses the resource assignment problem for new marketplaces using an online auction approach.\n2 Preliminaries\nLet R = {r_1, r_2, ..., r_k} be the set of k available resources on a given platform. Let N = {1, 2, ..., n} be the set of n agents interested in those resources. Each resource must be assigned to one agent only, and each agent is only interested in one resource. [4] Let X_i ∈ R ∪ {⊥} denote the resource assigned to agent i, where ⊥ indicates no assigned resource, and let 1_{X_i = r_j} denote an indicator variable which is 1 if agent i obtains r_j and 0 otherwise. Agent i gives a valuation v_{ij} to obtain resource r_j (∀i, v_{i⊥} = 0). 
Agent i arrives in the system at time period a_i and is available until time period d_i.\nThe platform's goal is to maximise the sum of the agents' valuations of the resources assigned to them, as described in Problem (1):\nmax Σ_i v_{iX_i}   s.t.   X_i ∈ R ∪ {⊥} ∀i ∈ N,   Σ_i 1_{X_i = r_j} ≤ 1 ∀r_j ∈ R.   (1)\nFirst, we assume that the agents are honest in reporting their valuations for the resources. If all the agents' valuations are known in advance, the platform can solve this optimization problem and efficiently assign resources. The hypothetical algorithm that solves Problem (1) in the presence of dynamic agents is called the offline-optimal. However, in dynamic environments, valuations only become known when agents arrive in the system, and not all agents are available simultaneously. Therefore, the platform cannot solve the above optimisation problem directly. Hence, the platform must look for mechanisms that are as close to the offline-optimal as possible. The secretary problem, and its analysis, is very useful for designing online auctions.\n2.1 The Secretary Problem\nIn the secretary problem, a recruiter wishes to hire a secretary from among n candidates. The recruiter can only evaluate a candidate after interviewing him. However, the recruiter must either offer the job or reject the candidate before moving on to a new one. The decision is irrevocable. This problem was analysed by [7, 17]. An optimal strategy is for the recruiter to interview the first n/e candidates and offer the job to the next candidate who is better than these first n/e candidates [7, 17]. 
Here, e is the base of the natural logarithm.\n[4: We focus on a time window in which each agent is typically only interested in one assignment.]\nIn the resource assignment problem with k = 1, the platform's goal is to assign the resource to the agent giving it the highest valuation, which is known only after he arrives in the system. This resource assignment problem is exactly the same as the secretary problem. Thus, the platform should wait until the first n/e agents have indicated their valuations and offer the resource to the next agent who provides a higher valuation than those first n/e agents.\n2.2 Competitive Secretary Problem\nConsider a case where there are k > 1 resources and n agents competing for them. The platform prefers to assign each resource to the agent providing the highest valuation for that resource. The agents appear on the platform sequentially. Each agent can be offered zero, one or more resources while he is present in the system, but he can select only one of them. After his departure, the next agent arrives in the system. If an agent is offered multiple resources, he chooses to accept a single resource and rejects the remainder. The literature addresses two separate cases, depending on how the agent selects a resource from among multiple offers: (i) resources having equal rank: an agent with multiple offers has an equal probability of choosing any one of those resources; (ii) resources having ranked order: resources are ranked, and an agent with multiple offers accepts the highest ranked resource.\nLet us assume that the platform waits until n/th_j agents have arrived in the system and reported their valuation of r_j. The platform offers r_j to the first agent to give a valuation higher than that of the first n/th_j agents. th_j is called the stopping threshold for r_j. Note that for the secretary problem (k = 1), th_1 = e is the optimal stopping threshold. 
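The 1/e stopping rule just described is easy to simulate. Below is a minimal sketch (the function names and parameters are ours, not the paper's): observe the first ⌊n/e⌋ candidates, then accept the first later candidate who beats them all. Over many random orderings, the overall best candidate is selected with probability close to 1/e.

```python
import math
import random

def secretary_select(values):
    """Classic 1/e rule: observe the first floor(n/e) candidates, then
    accept the first later candidate who beats every observed one.
    Returns the chosen index, or None if no later candidate qualifies."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff]) if cutoff > 0 else float("-inf")
    for i in range(cutoff, n):
        if values[i] > best_seen:
            return i
    return None

def success_rate(n=50, trials=20000, seed=0):
    """Estimate the probability that the rule picks the overall best
    candidate; the theory cited above predicts roughly 1/e ~ 0.368."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        vals = [rng.random() for _ in range(n)]
        i = secretary_select(vals)
        if i is not None and vals[i] == max(vals):
            hits += 1
    return hits / trials
```

For n = 50, the estimated success probability lands near 1/e, matching the classic analysis of [7, 17].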
For the above two settings, the optimal stopping threshold for each resource should be different.\nResources Having Equal Rank. Immorlica et al. [13] addressed this case. However, a closed-form solution for the optimal stopping thresholds for k > 2 is unknown. We leave the design of online auctions for such cases for future research.\nResources Having Ranked Order. An agent can assign different valuations to different resources. If he receives multiple offers, he chooses the resource to which he assigned the highest valuation. Hence, the probability of accepting a particular offer is higher for a resource that is preferred by all the agents. This induces a natural ordering of resources, and the higher-ranked resource will always be chosen by an agent with multiple offers. To take advantage of the possibility that the best possible agent arrives before the stopping threshold of a higher-ranked resource, the platform could use lower stopping thresholds for lower-ranked resources [15].\nNote that the above approaches only work if agents report their valuations and availability to the platform truthfully. However, in real life, since agents are strategic, auction theory can be used instead, as explained in the next subsection.\n2.3 Online Auctions\nStrategic agents can boost their valuations to ensure they receive a resource. However, if they are made to pay an appropriate amount to the platform, truthful behaviour can be induced. Let p_i denote the payment that agent i makes to the platform for resource assignment, i.e., his utility for that resource assignment is v_{iX_i} − p_i. Note that the focus of the present paper is on the quality of assignment, hence we refer to agents' utility for assignments and not to external utility that, for example, an Uber driver may derive by serving a passenger. Another possibility for manipulation is on arrival and departure. Thus, for agent i, the private information is θ_i = (v_i, a_i, d_i), where v_i = (v_{i1}, v_{i2}, ..., v_{ik}) ∈ R_+^k. [5] 
This private information θ_i is called the type of agent i. Let Θ_i denote the space of possible types of agent i. Let θ = (θ_1, ..., θ_n) denote the type profile of all the agents. θ is also written as (θ_i, θ_{−i}), where θ_{−i} is the type profile of all the agents except i.\nFor a given type profile, an online auction A = (X, p) selects a feasible resource assignment X(θ) = (X_1(θ), X_2(θ), ..., X_n(θ)) and determines the payments p(θ) = (p_1(θ), p_2(θ), ..., p_n(θ)). A feasible resource assignment is one in which each agent receives a resource, if any, before his departure time, and this is independent of the types of the agents who are yet to arrive.\nLet C_i(θ_i) denote the space of possible misreports available to agent i when his true type is θ_i. That is, he may report his type to be ˆθ_i ∈ C_i(θ_i) if it is beneficial to him. Generally, agents cannot appear in the system before their true arrival time and cannot be present after their true departure time, though they can pretend to appear late or leave early, or misreport their valuations. We call this domain of misreports no-early-arrival-no-late-departure, and we restrict the domain of misreports to it. That is, C_i^{(ex)}(θ_i) = {ˆθ_i = (ˆv_i, ˆa_i, ˆd_i)} with ˆv_i ∈ R_+^k, a_i ≤ ˆa_i ≤ ˆd_i ≤ d_i. These settings are also called exogenous arrival-departure.\nIn some cases, it may be possible to assume that agents cannot manipulate their arrival-departure times or that these are not part of their private information. We capture this setting as C_i^{(en)}(θ_i) = {ˆθ_i = (ˆv_i, a_i, d_i)}. 
This is also called endogenous arrival-departure.\nDefinition 1 (Truthfulness). An online auction A is dominant strategy incentive compatible (or truthful) for a domain of misreports C_i(θ_i) if, for every agent i and for every θ_i ∈ Θ_i,\nv_{iX_i(θ)} − p_i(θ) ≥ v_{iX_i(θ′_i, θ_{−i})} − p_i(θ′_i, θ_{−i})   ∀θ′_i ∈ C_i(θ_i), ∀θ_{−i} ∈ Θ_{−i}.   (2)\nThe next subsection presents a necessary condition for a truthful online auction.\n2.4 Necessary Condition for a Truthful Online Auction\nIn online auctions, the dynamics of the arrival-departure of different agents offer strategic agents more flexibility for manipulation. Hence, online auctions must be designed carefully.\nDefinition 2 (Arrival-Departure Priority). Online auctions for resource assignment problems are said to have an arrival-departure priority if agent i's utility at a type θ_i, having the same valuation as θ′_i but with either earlier arrival or later departure than θ′_i, does not decrease. That is, ∀θ_i, θ′_i ∈ Θ_i such that v_i = v′_i, a_i ≤ a′_i and d_i ≥ d′_i,\nv_{iX_i(θ_i, θ_{−i})} − p_i(θ_i, θ_{−i}) ≥ v_{iX_i(θ′_i, θ_{−i})} − p_i(θ′_i, θ_{−i}).\nLemma 1. In exogenous settings, a truthful online auction must have an arrival-departure priority.\n[5: Recall that a_i, d_i are his arrival and departure times.]\nProof: Suppose a truthful online auction A does not have an arrival-departure priority, i.e., for some agent i there exist two types θ_i, θ′_i such that v_i = v′_i, a_i ≤ a′_i and d_i ≥ d′_i, and u_i(θ′_i, θ_{−i}) > u_i(θ_i, θ_{−i}). 
If agent i has true type θ_i and the other agents have types θ_{−i}, agent i benefits from arriving late at a′_i and reporting his type to be θ′_i, contradicting the truthfulness of A. □\nThe above lemma implies that, to design a truthful online auction for no-early-arrival-no-late-departure domains, one has to ensure that the auction satisfies the arrival-departure priority.\nWe now define the competitive ratio (CR) metric. Let V_A(θ) denote the total valuation of all the agents for the resources in an online auction A, and let V*(θ) be the total valuation of all the agents in the offline-optimal solution. The CR of A is defined as follows:\nDefinition 3 (CR). An online auction A is said to be α-competitive if\nmin_{θ : V*(θ) ≠ 0} E[V_A(θ)] / V*(θ) ≥ 1/α,\nwhere the expectation is taken with respect to random orderings of the agents.\nThe CR is a fair measure with which to evaluate different online auctions, as online auctions are independent of stochastic models. A low CR is desirable because, even in the worst case, the given online auction is then close to the offline-optimal. In general, CRs are quite high, as online auctions can perform poorly in worst cases.\nHaving provided background information on online auctions, the next section describes the auctions studied.\n3 Online Auctions for Resource Assignment\nWe consider the following online auctions: (i) APSD, (ii) SDV, (iii) eAuction and (iv) ORCA. eAuction and ORCA are based on the secretary problem.\n3.1 Arrival Priority Serial Dictatorship (APSD)\nZou et al. [22] proposed arrival priority serial dictatorship (APSD) for assignment problems. In APSD, upon arrival, each agent selects the resource for which he has the highest valuation from the pool of available resources, but does not pay the platform. 
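The APSD rule is simple enough to state in a few lines of code. The sketch below is our own illustration, not the authors' implementation; it indexes agents in arrival order, ignores simultaneous arrivals, and checks the rule against the Table 2 data used in the worked example later in this section, where APSD yields social welfare 9.

```python
def apsd(valuations, k):
    """Arrival Priority Serial Dictatorship: agents are indexed in arrival
    order; each agent, on arrival, takes the still-available resource he
    values most. No payments are made. valuations[i][j] is agent i's
    valuation for resource j. Returns a dict agent -> resource."""
    available = set(range(k))
    assignment = {}
    for i in range(len(valuations)):
        if not available:
            break
        j = max(available, key=lambda r: valuations[i][r])
        assignment[i] = j
        available.remove(j)
    return assignment

# Table 2 of the paper, with agents re-indexed from 0 (agent i -> i-1):
table2 = [[6, 5], [8, 3], [7, 4], [15, 1], [12, 4], [16, 2],
          [4, 3], [17, 2], [2, 1], [5, 4]]
welfare = sum(table2[i][j] for i, j in apsd(table2, 2).items())
```

Here agent 1 takes resource 1 (valuation 6) and agent 2 is left with resource 2 (valuation 3), so the welfare is 9, as reported in the example.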
The authors proved that APSD is the only truthful mechanism for no-early-arrival-no-late-departure domains if monetary transfers are not allowed.\nNote that, as payments are absent in APSD, it is not an auction in the usual real-world sense. However, it is a very simple, yet truthful, mechanism that does not ask the agents to report anything.\n3.2 Split Dynamic VCG (SDV)\nGujar and Faltings [10] proposed a Vickrey-Clarke-Groves (VCG)-based mechanism for resource assignment in crowdsourcing. The VCG mechanism for static settings (i.e., ∀i, a_i = d_i = 1) is as follows: it finds an assignment that maximizes the sum of the agents' valuations for the resources, and the payments are based on the externalities the agents impose on the system [18]. In [10], the authors considered a partition of the agents such that the agents in each part of the partition are available simultaneously. The platform assigns the remaining resources to the agents by solving the VCG mechanism for each part of the partition. The SDV mechanism does not satisfy arrival-departure priority and hence it is only truthful for endogenous settings.\n3.3 eAuction\nIn [19], Parkes proposes an online auction for a single item using a solution to the secretary problem. Gujar and Faltings [10] adapted this to a k-resources setting. The platform waits until n/e agents have arrived and, if the agent providing the highest valuation for any resource is available, he gets the resource by paying the second-highest reported valuation. Otherwise, for each resource, the highest valuation received in the first phase is set as a reserve price, and whichever agent provides a higher valuation than that reserve price obtains the resource by paying the reserve price. If an agent is eligible for more than one resource (i.e., having provided valuations higher than the reserve prices), the platform assigns the agent the resource with the highest utility to him. 
This is referred to as an eAuction, and eAuctions are truthful for no-early-arrival-no-late-departure domains.\nIn the competitive secretary problem - when there are multiple resources - the optimal stopping thresholds for different resources are different [13, 15]. The next subsection proposes a threshold-based online auction framework which enables the use of different thresholds for different resources, via the online ranked competition auction.\n3.4 Online Ranked Competition Auction (ORCA)\nHere we propose a generic threshold-based online auction framework for k resources.\nDefinition 4 (Threshold-Based Online Auction). Let th_1, ..., th_k be the stopping thresholds for r_1, ..., r_k, respectively. Let h_j and sh_j be the highest bid and second-highest bid for r_j from the first n/th_j bids. Agent i has a higher priority than agent j if a_i < a_j (ties are resolved randomly).\nAt each time slot, an agent i locks in r_j if: (i) resource r_j is unassigned, (ii) it has not been locked in by another agent with a higher priority, and (iii) d_i ≥ a_{n/th_j}. If an agent with a lower priority has already locked it in, the lower priority agent loses that lock-in on r_j. At d_i, agent i is assigned the resource giving him the highest utility from among the resources he has locked in. If agent i receives resource r_j, then he pays the platform sh_j if a_i ≤ a_{n/th_j}; otherwise he pays the platform h_j. All the other resources locked in for i are released at d_i.\nProposition 1. A threshold-based online auction satisfies arrival-departure priority.\nProof: Consider an agent i with two types, θ_i and θ′_i, such that v_i = v′_i, a_i ≤ a′_i and d_i ≥ d′_i. Let us fix the other agents' types as θ_{−i}. When agent i has type θ_i, he can lock in all the resources that he can lock in with type θ′_i; additionally, he may be able to lock in more resources under θ_i, either because he now has a higher priority or because some resources released after d′_i may now be accessible to him. 
Agent i is offered the resource yielding him the highest utility and hence u_i(θ_i, θ_{−i}) ≥ u_i(θ′_i, θ_{−i}). As this is true for all agents, the proposition follows. □\nTheorem 1. A threshold-based online auction is truthful for a no-early-arrival-no-late-departure misreports domain.\nProof: The agent's payment is independent of his bid for a resource, hence no agent has an incentive to lie about his valuation for that resource. However, in dynamic settings, an agent may try to manipulate the online auction in order to get the resource with the highest utility to him. From Proposition 1, a threshold-based online auction satisfies the arrival-departure priority. With this property, and the fact that the threshold-based online auction offers the agent the resource that has the highest utility to him throughout his availability, no agent has any incentive to misreport his type. □\nAs explained earlier, there are two approaches to determining the stopping thresholds for each resource in the classic competitive secretary problem. We focus on the case where resources are ordered.\nLet r_j be the j-th ranked resource. Let Pr_j(l) be the probability that r_j cannot be matched with the best possible agent when l is used as the stopping threshold. Then Karlin et al. [15] showed:\nTheorem ([15]). The optimal stopping threshold th_j for the j-th ranked resource r_j is given by\nth_j = min{l : Pr_j(l) ≥ 1 − l/n} − 1.\nOnline Ranked Competition Auction (ORCA). Karlin et al. provided a dynamic programme with which to compute the above thresholds. We use the solution to this programme and plug these thresholds into a threshold-based online auction, referred to as an Online Ranked Competition Auction (ORCA).\n3.5 Comparing APSD, SDV, eAuction and ORCA\nThe complexity of the implementing mechanism increases as we move from APSD to ORCA. 
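As a rough illustration of how Definition 4 plays out, here is a deliberately simplified threshold-based auction (our sketch, not the authors' code): agents are impatient and arrive one per period, the first m_j bids on resource r_j only set its price h_j (the highest observed bid), and a later agent wins an unassigned r_j by bidding above h_j, paying h_j. Lock-ins, ties, overlapping availability windows and the sh_j payment case are all omitted.

```python
def threshold_auction(valuations, m):
    """Simplified threshold-based online auction. valuations[t][j] is the
    bid of the agent arriving in period t for resource j; m[j] is the
    number of observation-phase bids for resource j. Returns a dict
    agent -> (resource, payment)."""
    k = len(m)
    h = [float("-inf")] * k      # highest bid seen during observation
    taken = [False] * k
    outcome = {}
    for t, bids in enumerate(valuations):
        for j in range(k):       # observation phase: just record prices
            if t < m[j]:
                h[j] = max(h[j], bids[j])
        # selling phase: among beatable, unassigned resources, take the
        # one offering this agent the highest utility bids[j] - h[j]
        open_js = [j for j in range(k)
                   if t >= m[j] and not taken[j] and bids[j] > h[j]]
        if open_js:
            j = max(open_js, key=lambda j: bids[j] - h[j])
            taken[j] = True
            outcome[t] = (j, h[j])
    return outcome
```

On the valuation profile of Table 2 with thresholds m = (3, 2) (the ⌊10/e⌋ and 2 used by ORCA in the worked example), this sketch sells resource 1 to the fourth arriving agent at price 8; resource 2 goes unsold because its observation-phase price, 5, is never beaten by a later bid.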
However, each system is designed to achieve a better CR and to provide more information to the auction. Table 1 summarizes [6] the theoretical properties of the online auctions discussed.\nTable 1. Comparison of the theoretical properties of APSD, SDV, eAuction and ORCA\n| | APSD | SDV | eAuction | ORCA |\n| Preference elicitation | No | Only v_i's | v_i's and a_i's | v_i's, a_i's and d_i's |\n| Truthfulness | Exogenous | Endogenous | Exogenous | Exogenous |\n| CR | n | n | e^2 | < e^2 |\nThis section concludes by illustrating how all the mechanisms work using the following example.\nTable 2. Example: n = 10, k = 2\n| i | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |\n| a_i | 1 | 2 | 3 | 4 | 4 | 5 | 5 | 6 | 6 | 6 |\n| d_i | 2 | 2 | 3 | 6 | 7 | 6 | 5 | 8 | 9 | 7 |\n| v_i1 | 6 | 8 | 7 | 15 | 12 | 16 | 4 | 17 | 2 | 5 |\n| v_i2 | 5 | 3 | 4 | 1 | 4 | 2 | 3 | 2 | 1 | 4 |\nExample: Consider a market with k = 2 resources and n = 10 competing agents. Resource 1 is preferable to resource 2, i.e., all agents are more likely to value resource 1 more than resource 2. Each agent's arrival time, departure time and valuations are given in Table 2. The mechanisms described above yield the following outcomes:\n[6: CRs for APSD, SDV and eAuction are taken from [10]. We believe the CR for ORCA should be better than that of eAuction.]\n• OFFLINE-OPTIMAL: Agent 8 gets resource 1 and agent 1 gets resource 2, with optimal social welfare = 22.\n• APSD: Agent 1 gets resource 1 and agent 2 gets resource 2, with social welfare = 9.\n• SDV: SDV executes VCG at t = 2. Agent 2 gets resource 1 and agent 1 gets resource 2, and their payments are 1 and 0, respectively, while social welfare = 13.\n• eAuction: the eAuction waits for ⌊10/e⌋ = 3 agents to submit their bids. Thus, it sets the reserve price for resource 1 at 8 and for resource 2 at 5. Agent 4 gets resource 1; however, no agent gets resource 2, leading to social welfare = 15.\n• ORCA: ORCA waits for ⌊10/e⌋ = 3 agents to submit their bids for resource 1 and for 2 bids to arrive for resource 2. Thus, agent 4 obtains resource 1 by paying 8 and agent 1 obtains resource 2 by paying 3. 
Thus, ORCA achieves social welfare of 20 in this instance.\nIn the next section, we empirically evaluate all the online auctions described above for average performance, using different stochastic models for θ.\n4 Evaluating Online Auctions\nThis paper's goal is to study online auctions empirically, to evaluate how they perform in practice under various stochastic models. To do this, we define performance measures for evaluating a given online auction.\n4.1 Performance Measures for Online Auctions\nAlthough the CR captures an online auction's worst case performance, we believe that worst case performances may not be recurrent in practice. To evaluate a given online auction, we generated N different instances of θ according to a fixed stochastic model for θ. Let f_i be the probability density function (pdf) for θ_i, let S denote the set of N samples generated with f_1, f_2, ..., f_n and let f = f_1 × ... × f_n denote the joint probability distribution function. The new measures are defined as follows:\nDefinition 5 (ECR). Online auction A is said to have an empirical competitive ratio (ECR) of β_f^N if\nmin_{θ ∈ S : V*(θ) ≠ 0} { mean_{θ | v is fixed} V_A(θ) / V*(θ) } ≥ 1/β_f^N.\nThe ECR measures how far online auction A's solution is from the offline-optimal over the generated samples. Even for large N, if the ECR is good, then worst cases are rare.\nDefinition 6 (SACR). Online auction A has a sample average competitive ratio (SACR) of γ_f^N if\nmean_{θ ∈ S : V*(θ) ≠ 0} { mean_{θ | v is fixed} V_A(θ) / V*(θ) } ≥ 1/γ_f^N.\nThe SACR measures, via an analysis of average cases, how far online auction A's solution is from the offline-optimal, where the average is taken over the generated samples. 
Even for large N, if the SACR is low, then, on average, the auction performs better than one with a higher SACR.

S. Gujar and B. Faltings / Online Auctions for Dynamic Assignment: Theory and Empirical Evaluation 1039

Definition 7 (EEE) Online auction A is said to have an empirical expected efficiency (EEE) Δ_f^N, where

    Δ_f^N = (mean_{θ ∈ S} V_A(θ)) / (mean_{θ ∈ S} V*(θ))

The EEE captures the average fraction of expected offline-optimal social welfare achieved by online auction A. The closer the EEE is to 1, the closer, on average, A is to the offline-optimal.

The next section describes this empirical analysis.

4.2 Experiments

For a fixed number of resources (k), the parameters that can vary are the size of the agent pool (n), the agent arrival rate (λ), agent waiting time and agents' preferences v_ij's. First, we explain the different models of agents' preferences considered in these experiments.

4.2.1 Preference Models

The following agents' preference models were considered:

Low Competition
• Preference Model 1 (PM1): each agent's valuation for each resource is an independent and identically distributed (i.i.d.) random variable with a uniform distribution on [0, 1].
• Preference Model 2 (PM2): each agent's valuation for each resource is an i.i.d. random variable with a triangular distribution on [0, 1] with a peak at 0.5.

High Competition
• Preference Model 3 (PM3): each agent has the same valuation for every resource and these valuations have a uniform distribution on [0, 1].
• Preference Model 4 (PM4): Resources are ranked. Any agent's valuation for resource r_j is uniformly drawn from [(k−j)/k, (k−j+1)/k].

In the first two PMs, there is relatively less competition between agents for each resource.
The latter two PMs induce higher competition between agents for resources.

The next subsection explains the study's different experimental setups.

4.2.2 Experimental Setups

The following four experimental variations were analysed:

Experiment 1 (Effect of n on ECR, SACR and EEE for fixed k): This experiment fixed the number of tasks k = 5, λ = 0.5 and varied n = 8 → 20.

Experiment 2 (Effect of k on ECR, SACR and EEE for fixed n): This experiment fixed the number of agents n = 20, λ = 0.5 and varied k = 2 → 20.

Experiment 3 (Effect of λ on ECR, SACR and EEE for fixed n, k): This experiment varied the agent arrival rate (λ) on the platform for k = 5, n = 20 and the waiting period was exponentially distributed with mean μ = 0.5.

Experiment 4 (Effect of λ on ECR, SACR and EEE for fixed n, k): This experiment is the same as Experiment 3 except that the agents are impatient.

For each of the four PMs, we generated 8,000 valuation profiles for each of the four experiments described above. For each valuation profile, 120 random agent orderings were considered. First, the sample averages of the total valuation achieved by each auction mechanism for these 120 orderings were calculated. Second, ECR and SACR were measured for the 8,000 sample valuation profiles. Also, the sample averages of the total valuation of each auction mechanism and the offline-optimals were calculated over 8,000 × 120 instances, in order to measure EEE. As ECR and SACR are > 1, and indeed may take much larger values, we plotted 1/ECR and 1/SACR to view them in [0, 1].

These experiments used k ∈ [2, 20] and n ∈ [5, 50], as we believe that typical online auctions for resource assignment in new marketplaces will be of a similar size. For example, although there may be a large number of Uber drivers and passengers at the same time, a driver may only be interested in a couple of customers and there may not be many drivers nearby interested in every single customer.
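A minimal sketch of how valuations could be drawn under the four preference models above (illustrative Python; the function and parameter names are ours, not from the paper):

```python
import random

def sample_valuations(n, k, pm, seed=0):
    """Draw an n-by-k valuation matrix under preference model pm in {1, 2, 3, 4}."""
    rng = random.Random(seed)
    if pm == 1:   # PM1: i.i.d. Uniform[0, 1] per agent and resource
        return [[rng.random() for _ in range(k)] for _ in range(n)]
    if pm == 2:   # PM2: i.i.d. Triangular on [0, 1] with mode 0.5
        return [[rng.triangular(0.0, 1.0, 0.5) for _ in range(k)] for _ in range(n)]
    if pm == 3:   # PM3: one Uniform[0, 1] draw per agent, shared by all resources
        return [[rng.random()] * k for _ in range(n)]
    if pm == 4:   # PM4: resource j's value drawn from [(k-j)/k, (k-j+1)/k]
        return [[rng.uniform((k - j) / k, (k - j + 1) / k) for j in range(1, k + 1)]
                for _ in range(n)]
    raise ValueError("pm must be 1, 2, 3 or 4")
```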
If k and n are scaled proportionately, we still believe that similar results will hold true.

4.2.3 Experimental Results

The following observations were common to all the experiments.

• O1: Correlations between PM1-PM2 and PM3-PM4. Across all four experiments, the three metrics for PM2 demonstrated the same trend as under PM1, but at different scales. There was a similar correlation between PM3 and PM4. This is attributed to the fact that PM1 and PM2 encourage little competition, and PM3 and PM4 encourage high competition. Hence, below, we illustrate our results w.r.t. PMs 1 and 3 only.
• O2: Correlation Across ECR, SACR, EEE. In general, the graphs for ECR, SACR and EEE showed similar trends, though scales and rates of change could differ (Figure 1, Figure 2 and Figure 3).
• O3: ORCA under High Competition. In general, ORCA performed better when preferences induced more competition for resources (i.e., in PM3, PM4).
• O4: CR vs ECR. Worst case competitiveness across the generated samples (ECR) was much better than the theoretical CR, thus worst cases did not occur frequently. For example, CR across all the auctions, considering all the settings, is always > 7.38 and as bad as 50 in some cases. However, empirically all the auctions in our experiments were better than 5-competitive (1/ECR > 0.2).

Some of the specific observations were as follows.

• Experiment 1: Figures 1 to 3 show how ECR, SACR and EEE change w.r.t. n for the different auction mechanisms when k = 5 and λ = 0.5 for agents following PM1. In Experiment 1 and with all PMs, ORCA performed better for a larger agent pool on measures ECR and SACR. However, for EEE, SDV was the best auction mechanism across all PMs. These experiments clearly show that as competition increases further, ORCA should outperform SDV in all the PMs. Figure 4 illustrates ORCA's superiority under PM3.
Because of the correlations in O2, we drop other plots using PM3 to save space.

Figure 1. Experiment 1: ECR vs n for k = 5, λ = 0.5, PM1
Figure 2. Experiment 1: SACR vs n for k = 5, λ = 0.5, PM1
Figure 3. Experiment 1: EEE vs n for k = 5, λ = 0.5, PM1
Figure 4. Experiment 1: ECR vs n for k = 5, λ = 0.5, PM3

• Experiment 2: Figure 5 illustrates how ECR varies w.r.t. the number of resources k when n = 20, λ = 0.5 under PM1. Similar behaviour was observed under PM3, where ORCA was superior to the other mechanisms until k = 8. As competition reduces (i.e., k increases), SDV and APSD perform better (because of O1 and O2, not all measures under all PMs are displayed).
• Experiment 3: The arrival rate's effect on auction mechanism performance was also studied (Figures 6 and 7). Experiments demonstrated that for λ ≤ 1, threshold-based online auctions performed better than APSD and SDV as measured by SACR under PM3. However, for higher λ, i.e., when agents arrive in large numbers at every time slot, SDV performed better, especially as measured using EEE or ECR. APSD performed better than threshold-based online auctions, but 5%-10% below SDV.
• Experiment 4: In Experiment 4, all the auctions showed a similar performance trend to Experiment 3. The performances of APSD and e-Auction are not supposed to change significantly in the presence of impatient agents. Experiments also showed that ORCA did not change much in the presence of impatient agents. SDV performed slightly less well in Experiment 4, but was still superior at higher λ, without changing any of our conclusions.
Hence we do not display plots for Experiment 4.

4.3 Discussion

Based on these experiments, we consider two broad settings:
(1) Low competition for resources and/or a high arrival rate.
(2) High competition for resources and/or a low arrival rate.

Low Competition: If many agents log in simultaneously (high λ), SDV is superior to all the other online auctions (Figures 6 and 7). As the number of resources increases, the performances of SDV and APSD improve and they become superior to threshold-based auctions (Experiment 2).

Figure 5. Experiment 2: ECR vs k for n = 20, λ = 0.5, PM1
Figure 6. Experiment 3: ECR vs λ for k = 5, n = 20, PM3
Figure 7. Experiment 3: EEE vs λ for k = 5, n = 20, PM3

Note that the empirical superiority of SDV might be attributable to the fact that it tries to match many agents simultaneously, leading to more efficient assignment. Threshold-based online auctions typically drop a certain fraction of agents in order to learn, which asymptotically improves worst case guarantees. Hence, they perform best only in highly competitive settings, as explained below.

High Competition: If there is strong competition between agents (that is, either large n for fixed k, or k/n < 0.1), threshold-based online auctions (especially ORCA) perform better (Figures 1 to 5) for all types of PMs.

Overall, these experiments showed that ORCA outperforms the other auctions when (i) agents arrive sequentially (very low λ) and are impatient, and (ii) preference models are of the PM3 or PM4 type.

From Table 1's ranking of auction mechanism CRs, APSD ≺ SDV ≺ e-Auction ≺ ORCA. However, the experiments presented here rank the four auction mechanisms, by measure, as shown in Table 3.

Recommendations: If there are few resources and a large pool of agents, the auction platform should choose threshold-based online auctions.
If the platform expects that (i) all agents put the same valuation on each different resource or (ii) some resources are preferred over others, then the platform can implement ORCA. If the agents' valuations of resources are independent of each other, then the platform can use eAuction.

If there are large numbers of resources or large numbers of agents logging in to the system at every time period, then the platform can use SDV. However, SDV can be manipulated for arrival-departure. If the platform prefers not to charge the agents for resource assignment and/or prefers to work using no-early-arrival-no-late-departure domains, then it can use APSD. APSD is simple to implement but has a cost of a 5%-10% loss in performance compared to SDV. However, in many settings and preference models, it is better than threshold-based online auctions.

5 Summary

This paper addressed the resource assignment problem for dynamic agents and proposed a new Online Ranked Competition Auction (ORCA) mechanism to deal with this. We hypothesised that the auctions targeted for worst case guarantees perform better in practice only when there is strong competition for resources between agents, i.e., the degree of competition between agents plays an important role in the trade-off between properties such as truthfulness, expressiveness, efficiency and average case performance. Our experiments validated this hypothesis.

                               APSD   SDV   eAuction   ORCA
CR                              4      3       2        1
Low Competition and High λ
  ECR                           2      1       4        3
  SACR                          2      1       4        3
  EEE                           2      1       4        3
High Competition and Low λ
  ECR                           3      2       4        1
  SACR                          3      2       4        1
  EEE                           3      1       4        2

Table 3. Comparison of APSD, SDV, eAuction and ORCA: relative rankings using CR, ECR, SACR and EEE from empirical analysis

We studied the application of four different online auctions to the resource assignment problem, namely APSD, SDV, eAuction and ORCA. We compared their theoretical properties (Table 1).
Instead of relying exclusively on the competitive ratio to evaluate average case online auctions, we proposed three new measures, namely ECR, SACR and EEE. Furthermore, experimental worst cases generated from samples were much better than theoretical worst cases (Table 3). In the last section, we provided suggestions as to how a platform should choose its online auction mechanism based on the size of the agent pool, the size of the resource pool and how frequently agents log in to the system.

The ORCA and eAuction mechanisms were only observed to give better average-case performances in specific preference models. Otherwise, overall, SDV is a very good online auction mechanism when compared to the others studied in this paper. Future research might attempt to design a better model-free resource assignment mechanism (online auction), one that is more efficient than SDV and is truthful in no-early-arrival-no-late-departure domains for a broad class of preference models.

REFERENCES
[1] Susan Athey and Ilya Segal. An efficient dynamic mechanism. Econometrica, 81(6):2463-2485, 2013.
[2] Moshe Babaioff, Nicole Immorlica, David Kempe, and Robert Kleinberg. Online auctions and generalized secretary problems. ACM SIGecom Exchanges, 7(2):7, 2008.
[3] Moshe Babaioff, Nicole Immorlica, and Robert Kleinberg. Matroids, secretary problems, and online mechanisms. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 434-443. Society for Industrial and Applied Mathematics, 2007.
[4] Dirk Bergemann and Juuso Välimäki. The dynamic pivot mechanism. Econometrica, 78(2):771-789, 2010.
[5] Niv Buchbinder and Joseph Naor. The design of competitive online algorithms via a primal-dual approach. Foundations and Trends in Theoretical Computer Science, 3(2-3):93-263, 2009.
[6] Xi Chen, Qihang Lin, and Dengyong Zhou. Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 64-72, 2013.
[7] Eugene B. Dynkin. The optimum choice of the instant for stopping a Markov process. In Soviet Math. Dokl, volume 4, 1963.
[8] Amos Fiat. Online algorithms: The state of the art (Lecture Notes in Computer Science). 1998.
[9] Gagan Goel, Afshin Nikzad, and Adish Singla. Allocating tasks to workers with matching constraints: truthful mechanisms for crowdsourcing markets. In Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web Companion, pages 279-280, 2014.
[10] Sujit Gujar and Boi Faltings. Auction based mechanisms for dynamic task assignments in expert crowdsourcing. In Proceedings of the International Workshop on Agent Mediated E-Commerce and Trading Agent Design and Analysis (AMEC/TADA'15), 2015.
[11] Chien-Ju Ho, Shahin Jabbari, and Jennifer W. Vaughan. Adaptive task assignment for crowdsourced classification. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 534-542, 2013.
[12] Chien-Ju Ho and Jennifer Wortman Vaughan. Online task assignment in crowdsourcing markets. In AAAI, 2012.
[13] Nicole Immorlica, Robert Kleinberg, and Mohammad Mahdian. Secretary problems with competing employers. In Internet and Network Economics, pages 389-400. Springer, 2006.
[14] David R. Karger, Sewoong Oh, and Devavrat Shah. Budget-optimal crowdsourcing using low-rank matrix approximations. In Communication, Control, and Computing (Allerton), 2011 49th Annual Allerton Conference on, pages 284-291. IEEE, 2011.
[15] Anna Karlin and Eric Lei. On a competitive secretary problem. In AAAI, pages 944-950, 2015.
[16] R. M. Karp, U. V. Vazirani, and V. V. Vazirani. An optimal algorithm for on-line bipartite matching. In Proceedings of the Twenty-second Annual ACM Symposium on Theory of Computing, STOC '90, pages 352-358, New York, NY, USA, 1990. ACM.
[17] Denis V. Lindley. Dynamic programming and decision theory. Applied Statistics, pages 39-51, 1961.
[18] Noam Nisan. Introduction to mechanism design. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay Vazirani, editors, Algorithmic Game Theory, pages 209-242. Cambridge University Press, 2007.
[19] David Parkes. Online mechanisms. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay Vazirani, editors, Algorithmic Game Theory, pages 411-439. Cambridge University Press, 2007.
[20] David C. Parkes, Ruggiero Cavallo, Florin Constantin, and Satinder Singh. Dynamic incentive mechanisms. Artificial Intelligence Magazine, 31:79-94, 2010.
[21] Long Tran-Thanh, Sebastian Stein, Alex Rogers, and Nicholas R. Jennings. Efficient crowdsourcing of unknown experts using multi-armed bandits. In European Conference on Artificial Intelligence, pages 768-773, 2012.
[22] James Y. Zou, Sujit Gujar, and David C. Parkes. Tolerable manipulability in dynamic assignment without money. In AAAI, 2010.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "l50QMlxlDwc",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=l50QMlxlDwc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Multimodal Advertising Generation Dataset",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "VJyM0nAqMJ1",
"year": null,
"venue": "Bull. EATCS 2018",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/524/515",
"forum_link": "https://openreview.net/forum?id=VJyM0nAqMJ1",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Alonzo Church Award 2018 - Call for Nominations",
"authors": [
"Thomas Eiter",
"Javier Esparza",
"Catuscia Palamidessi",
"Gordon D. Plotkin",
"Natarajan Shankar"
],
"abstract": "Alonzo Church Award 2018 - Call for Nominations",
"keywords": [],
"raw_extracted_content": "Alonzo Church Award 2018\nCall for Nominations\nDeadline : March 1, 2018.\nIntroduction:\nAn annual award, called the \"Alonzo Church Award for Outstanding Contri-\nbutions to Logic and Computation\" was established in 2015 by the ACM Special\nInterest Group for Logic and Computation (SIGLOG), the European Association\nfor Theoretical Computer Science (EATCS), the European Association for Com-\nputer Science Logic (EACSL), and the Kurt Gödel Society (KGS). The award\nis for an outstanding contribution represented by a paper or by a small group of\npapers published within the past 25 years. This time span allows the lasting im-\npact and depth of the contribution to have been established. The award can be\ngiven to an individual, or to a group of individuals who have collaborated on the\nresearch. For the rules governing this award, see: http: //siglog.org /awards /alonzo-\nchurch-award. The 2017 Alonzo Church Award was given jointly to Samson\nAbramsky, Radha Jagadeesan, Pasquale Malacaria, Martin Hyland, Luke Ong,\nand Hanno Nickau for providing a fully-abstract semantics for higher-order com-\nputation through the introduction of game models, see: http: //siglog.org /winners-\nof-the-2017-alonzo-church-award\nEligibility and Nominations:\nThe contribution must have appeared in a paper or papers published within the\npast 25 years. Thus, for the 2018 award, the cut-o \u000bdate is January 1, 1993. When\na paper has appeared in a conference and then in a journal, the date of the journal\npublication will determine the cut-o \u000bdate. In addition, the contribution must not\nyet have received recognition via a major award, such as the Turing Award, the\nKanellakis Award, or the Gödel Prize. (The nominee(s) may have received such\nawards for other contributions.) While the contribution can consist of conference\nor journal papers, journal papers will be given a preference.\nNominations for the 2018 award are now being solicited. 
The nominating letter must summarise the contribution and make the case that it is fundamental and outstanding. The nominating letter can have multiple co-signers. Self-nominations are excluded. Nominations must include: a proposed citation (up to 25 words); a succinct (100-250 words) description of the contribution; and a detailed statement (not exceeding four pages) to justify the nomination. Nominations may also be accompanied by supporting letters and other evidence of worthiness.

Nominations are due by March 1, 2018, and should be submitted to [email protected]

Presentation of the Award:
The 2018 award will be presented at ICALP 2018, the International Colloquium on Automata, Languages and Programming. The award will be accompanied by an invited lecture by the award winner, or by one of the award winners. The awardee(s) will receive a certificate and a cash prize of USD 2,000. If there are multiple awardees, this amount will be shared.

Award Committee:
The 2018 Alonzo Church Award Committee consists of the following five members:
• Thomas Eiter
• Javier Esparza
• Catuscia Palamidessi (chair)
• Gordon Plotkin
• Natarajan Shankar",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rdpyfEibZb",
"year": null,
"venue": "Bull. EATCS 2016",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/419/399",
"forum_link": "https://openreview.net/forum?id=rdpyfEibZb",
"arxiv_id": null,
"doi": null
}
|
{
"title": "EATCS Fellows' Advice to the Young Theoretical Computer Scientist",
"authors": [
"Luca Aceto",
"Mariangiola Dezani-Ciancaglini",
"Yuri Gurevich",
"David Harel",
"Monika Henzinger",
"Giuseppe F. Italiano",
"Scott A. Smolka",
"Paul G. Spirakis",
"Wolfgang Thomas"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "EATCS F ellows ’ Advice to the Young\nTheoretical Computer Scientist\nLuca Aceto (Reykjavik University)\nwith contributions by Mariangiola Dezani-Ciancaglini,\nYuri Gurevich, David Harel, Monika Henzinger,\nGiuseppe F. Italiano, Scott Smolka,\nPaul G. Spirakis and Wolfgang Thomas\nI have always enjoyed reading articles, interviews, blog posts and books in\nwhich top-class scientists share their experience with, and provide advice to,\nyoung researchers. In fact, despite not being young any more, alas, I feel that\nI invariably learn something new by reading those pieces, which, at the very least,\nremind me of the things that I should be doing, and that perhaps I am notdoing,\nto uphold high standards in my job.\nBased on my partiality for scientific advice and stories, it is not overly surpris-\ning that I was struck by the thought that it would be interesting to ask the EATCS\nFellows for\n\u000fthe advice they would give to a student interested in theoretical computer\nscience (TCS),\n\u000fthe advice they would give to a young researcher in TCS and\n\u000fa short description of a research topic that excites them at this moment in\ntime (and possibly why).\nIn this article, whose title is inspired by the classic book Advice To A Young Scien-\ntistauthored by the Nobel Prize winner Sir Peter Brian Medawar, I collect the an-\nswers to the above-listed questions I have received from some of the EATCS Fel-\nlows. The real authors of this piece are Mariangiola Dezani-Ciancaglini (Univer-\nsity of Turin), Yuri Gurevich (Microsoft Research), David Harel (Weizmann Insti-\ntute of Science), Monika Henzinger (University of Vienna), Giuseppe F. Italiano\n(University of Rome Tor Vergata), Scott Smolka (Stony Brook University), Paul\nG. 
Spirakis (University of Liverpool, University of Patras and Computer Tech-\nnology Institute & Press “Diophantus”, Patras) and Wolfgang Thomas (RWTH\nAachen University), whom I thank for their willingness to share their experience\nand wisdom with all the members of the TCS community. In an accompanying\nessay, which follows this one in this issue of the Bulletin, you will find the piece\nI received from Michael Fellows (University of Bergen).\nThe EATCS Fellows are model citizens of the TCS community, have varied\nwork experiences and backgrounds, and span a wide spectrum of research areas.\nOne can learn much about our field of science and about academic life in general\nby reading their thoughts. In order to preserve the spontaneity of their contribu-\ntions, I have chosen to present them in an essentially unedited form. I hope that\nthe readers of this article will enjoy them as much as I have done.\nMariangiola Dezani-Ciancaglini\nThe advice I would give to a student interested in TCS is: Your studies will be\nsatisfactory only if understanding for you is fun, not a duty.\nTo a young researcher in TCS I would say, “Do not be afraid if you do not see\napplications of the theory you are investigating: the history of computer science\nshows that elegant theories developed with passion will have eventually long-\nlasting success.”\nA research topic that currently excites me is the study of behavioural types.\nThese types allow for fine-grained analysis of communication-centred computa-\ntions. The new generation of behavioural types should allow programmers to\nwrite the certified, self-adapting and autonomic code that the market is requiring.\nYuri Gurevich\nAdvice I would give to a student interested in TCS Attending math seminars\n(mostly in my past), I noticed a discord. 
Experts in areas like complex analysis or PDEs (partial differential equations) typically presume that everybody knows Fourier transforms, differential forms, etc., while logicians tend to remind the audience of basic definitions (like what's first-order logic) and theorems (e.g. the compactness theorem). Many talented mathematicians didn't take logic in their college years, and they need those reminders. How come? Why don't they effortlessly internalize those definitions and theorems once and for all? This is not because those definitions and theorems are particularly hard (they are not) but because they are radically different from what they know. It is easier to learn radically different things — whether it is logic or PDEs or AI — in your student years. Open your mind and use this opportunity!

Advice I would give a young researcher in TCS As the development of physics caused a parallel development of physics-applied mathematics, so the development of computer science and engineering causes a parallel development of theoretical computer science. TCS is an applied science. Applications justify it and give it value. I would counsel to take applications seriously and honestly. Not only immediate applications, but also applications down the line. Of course, like in mathematics, there are TCS issues of intrinsic value. And there were cases when the purest mathematics eventually was proven valuable and applied. But in most cases, potential applications not only justify research but also provide guidance of sorts. Almost any subject can be developed in innumerable ways. But which of those ways are valuable? The application guidance is indispensable.

I mentioned computer engineering above for a reason. Computer science is different from natural science like physics, chemistry, biology. Computers are
Computers are\nartifacts, not “naturefacts.” Hence the importance of computer science and engi-\nneering as a natural area whose integral part is computer science.\nA short description of a research topic that excites me at this moment in\ntime (and possibly why) Right now, the topics that excite me most are quantum\nmechanics and quantum computing. I wish I could say that this is the result of a\nnatural development of my research. But this isn’t so. During my long career, I\nmoved several times from one area to another. Typically it was natural; e.g. the\ntheory of abstract state machines developed in academia brought me to industry.\nBut the move to quanta was spontaneous. There was an opportunity (they started a\nnew quantum group at the Microsoft Redmond campus a couple of years ago), and\nI jumped upon it. I always wanted to understand quantum theory but occasional\nreading would not help as my physics had been poor to none and I haven’t been\nexposed much to the mathematics of quantum theory. In a sense I am back to\nbeing a student and discovering a new world of immense beauty and mystery,\nexcept that I do not have the luxury of having time to study things systematically.\nBut that is fine. Life is full of challenges. That makes it interesting.\nDavid Harel\nAdvice I would give to a student interested in TCS If you are already enrolled\nin a computer science program, then unless you feel you are of absolutely stellar\ntheoretical quality and the real world and its problems do not attract you at all,\nI’d recommend that you spend at least 2 =3 of your course e \u000borts on a variety of\ntopics related to TCS but not “theory for the sake of theory”. Take lots of courses\non languages, verification AI, databases, systems, hardware, etc. But clearly don’t\nshy away from pure mathematics. Being well-versed in a variety of topics in\nmathematics can only do you good in the future. 
If you are still able to choose a\nstudy program, go for a combination: TCS combined with software and systems\nengineering, for example, or bioinformatics /systems biology. I feel that computer\nscience (not just programming, but the deep underlying ideas of CS and systems)\nwill play a role in the science of the 21st century (which will be the century of\nthe life sciences) similar to that played by mathematics in the science of the 20th\ncentury (which was the century of the physical sciences).\nAdvice I would give a young researcher in TCS Much of the above is relevant\nto young researchers too. Here I would add the following two things. First, if you\nare doing pure theory, then spend at least 1 =3 of your time on problems that are\nsimpler than the real hard one you are trying to solve. You might indeed succeed in\nsettling the P =NP? problem, or the question of whether PTIME on general finite\nstructures is r.e., but you might not. Nevertheless, in the latter case you’ll at least\nhave all kinds of excellent, if less spectacular, results under your belt. Second,\nif you are doing research that is expected to be of some practical value, go talk\nto the actual people “out there”: engineers, programmers, system designers, etc.\nConsult for them, or just sit with them and see their problems first-hand. There is\nnothing better for good theoretical or conceptual research that may have practical\nvalue than dirtying your hands in the trenches.\nA short description of a research topic that excites me at this moment in\ntime (and possibly why) I haven’t done any pure TCS for 25 years, although\nin work my group and I do on languages and software engineering there is quite\na bit of theory too, as is the case in our work on biological modeling. However,\nfor many years, I’ve had a small but nagging itch for trying to make progress on\nthe problem of artificial olfaction — the ability to record and remotely produce\nfaithful renditions of arbitrary odors. 
This is still a far-from-solved issue, and is the holy grail of the world of olfaction. Addressing it involves chemistry, biology, psychophysics, engineering, mathematics and algorithmics (and is a great topic for young TCS researchers!). More recently, I've been thinking about the question of how to test the validity of a candidate olfactory reproduction system, so that we have an agreed-upon criterion of success for when such systems are developed. It is a kind of common-sense question, but one that appears to be very interesting, and not unlike Turing's 1950 quest for testing AI, even though such systems were nowhere in sight at the time. In the present case, trying to compare testing artificial olfaction to testing the viability of sight and sound reproduction will not work, for many reasons. After struggling with this for quite a while, I now have a proposal for such a test, which is under review.

Monika Henzinger

• Students interested in TCS should really like their classes in TCS and be good at mathematics.
• I advise young researchers in TCS to try to work on important problems that have a relationship to real life.
• Currently I am interested in understanding the exact complexity of different combinatorial problems in P (upper and lower bounds).

Giuseppe F. Italiano

The advice I would give to a student interested in TCS There's a great quote by Thomas Huxley: "Try to learn something about everything and everything about something." When working through your PhD, you might end up focusing on a narrow topic so that you will fully understand it. That's really great!
But one of the wonderful things about Theoretical Computer Science is that you will still have the opportunity to learn the big picture!\nThe advice I would give a young researcher in TCS\nKeep working on the problems you love, but don’t be afraid to learn things outside of your own area. One good way to learn things outside your area is to attend talks (and even conferences) outside your research interests. You should always do that!\nA short description of a research topic that excites me at this moment in time (and possibly why)\nI am really excited by recent results on conditional lower bounds, sparked by the work of Virginia Vassilevska Williams et al. It is fascinating to see how a computational complexity conjecture such as SETH (Strong Exponential Time Hypothesis) had such an impact on the hardness results for many well-known basic problems.\nScott Smolka\nAdvice I would give to a student interested in TCS\nNot surprisingly, it all starts with the basics: automata theory, formal languages, algorithms, complexity theory, programming languages and semantics.\nAdvice I would give a young researcher in TCS\nGo to conferences and establish connections with more established TCS researchers. Seek to work with them and see if you can arrange visits at their home institutions for a few months.\nA short description of a research topic that excites me at this moment in time (and possibly why)\nBird flocking and V-formation are topics I find very exciting. Previous approaches to this problem focused on models of dynamic behavior based on simple rules such as: Separation (avoid crowding neighbors), Alignment (steer towards average heading of neighbors), and Cohesion (steer towards average position of neighbors).
My collaborators and I are instead treating this as a problem of Optimal Control, where the fitness function takes into account Velocity Matching (alignment), Upwash Benefit (birds in a flock moving into the upwash region of the bird(s) in front of them), and Clear View (birds in the flock having unobstructed views). What’s interesting about this problem is that it is inherently distributed in nature (a bird can only communicate with its nearest neighbors), and one can argue that our approach more closely mimics the neurological process birds use to achieve these formations.\nPaul G. Spirakis\nMy advice to a student interested in TCS\nPlease be sure that you really like Theory! The competition is high, you must love mathematics, and the money prospects are usually not great. The best years of life are the student years. Theory requires dedication. Are you ready for this?\nGiven the above, try to select a good advisor (with whom you can interact well and frequently). The problem you choose to work on should psyche you and your advisor!\nIt is good to obtain a spherical and broad knowledge of the various Theory subdomains. Surprisingly, one subfield affects another in unexpected ways.\nFinally, study and work hard and be up to date with respect to results and techniques!\nMy advice to a young researcher interested in TCS\nAlmost all research problems have some difficulty. But not all of them are equally important! So, please select your problems to solve carefully! Ask yourself and others: why is this a nice problem? Why is it interesting and to which community? Be strategic!\nAlso, a problem is good if it is manageable in a finite period of time. This means that if you try to solve something open for many years, be sure that you will need great ideas, and maybe lots of time! However, be ambitious! Maybe you will get the big solution!
The issue of ambition versus reasonable progress is something that you must discuss with yourself!\nIt is always advisable to have at least two problems to work on, at any time. When you get tired from the main front, you turn your attention to the other problem.\nTry to interact and to announce results frequently, if possible in the best forums. Be visible! It is important that other good people know about you. “Speak out to survive!”\nStudy hard and read the relevant literature in depth. Try to deeply understand techniques and solution concepts and methods. Every paper you read may lead to a result of yours if you study it deeply and question every line carefully! Find quiet times to study hard. Control your time!\nA field that excites me: the discrete dynamics of probabilistic (finite) population protocols\nPopulation Protocols are a recent model of computation that captures the way in which complex behavior of systems can emerge from the underlying local interactions of agents. Agents are usually anonymous and the local interaction rules are scalable (independent of the size, n, of the population). Such protocols can model the antagonism between members of several “species” and relate to evolutionary games.\nIn the recent past I was involved in joint research studying the discrete dynamics of cases of such protocols for finite populations. Such dynamics are, usually, probabilistic in nature, either due to the protocol itself or due to the stochastic nature of scheduling local interactions. Examples are (a) the generalized Moran process (where the protocol is evolutionary because a fitness parameter is crucially involved), (b) the Discrete Lotka-Volterra Population Protocols (and associated Cyclic Games) and (c) the Majority protocols for random interactions.\nSuch protocols are usually discrete time transient Markov Chains.
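To make the flavour of such random pairwise dynamics concrete, here is a minimal simulation sketch of the well-known three-state approximate-majority protocol under a uniformly random scheduler (my own toy illustration; the function name and parameters are invented, and this is not code from the text):

```python
import random

def approximate_majority(n_a, n_b, seed=42):
    """Simulate the three-state approximate-majority population protocol
    (opinions "A", "B" and blank "_") under a uniformly random scheduler.
    Rules for an ordered interaction (initiator u, responder v):
      A,B -> A,_  and  B,A -> B,_   (a disagreeing responder is blanked)
      A,_ -> A,A  and  B,_ -> B,B   (a blank responder is recruited)
    Returns the consensus opinion and the number of interactions used."""
    rng = random.Random(seed)
    pop = ["A"] * n_a + ["B"] * n_b
    steps = 0
    while len(set(pop)) > 1:                   # loop until consensus
        i, j = rng.sample(range(len(pop)), 2)  # random ordered pair (i != j)
        u, v = pop[i], pop[j]
        if u != "_" and v != "_" and u != v:
            pop[j] = "_"                       # blank the disagreeing responder
        elif u != "_" and v == "_":
            pop[j] = u                         # recruit the blank responder
        steps += 1
    return pop[0], steps

opinion, steps = approximate_majority(60, 40)
```

Runs like this make the convergence questions concrete: one can measure empirically how fast the dynamics reach consensus and which opinion is the most probable eventual state for a given initial gap.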
However the detailed states description of such chains is exponential in size and the state equations do not facilitate a rigorous approach. Instead, ideas related to filtering, stochastic domination and Potentials (leading to Martingales) may help in understanding the dynamics of the protocols.\nSome such dynamics can describe strategic situations (games): Examples include Best-Response Dynamics, Peer-to-Peer Market dynamics, fictitious play etc.\nSuch dynamics need rigorous approaches and new concepts and techniques. The ‘traditional’ approach with differential equations (found in e.g. evolutionary game theory books) is not enough to explain what happens when such dynamics take place (for example) in finite graphs with the players in the nodes and with interactions among neighbours. Some main questions are: How fast do such dynamics converge? What is a ‘most probable’ eventual state of the protocols (and the computation of the probability of such states)? In case of game dynamics, what is the kind of ‘equilibria’ to which they converge? Can we design ‘good’ discrete dynamics (that converge fast and go to desirable stable states)? What is the complexity of predicting most probable or eventual behaviour in such dynamics?\nSeveral aspects of such discrete dynamics are wide open and it seems that the algorithmic thought can contribute to the understanding of this emerging subfield of science.\nWolfgang Thomas, “Views on work in TCS”\nAs one of the EATCS fellows I have been asked to contribute some personal words of advice for younger people and on my research interests.
Well, I try my best.\nRegarding advice to a student and young researcher interested in TCS, I start with two short sentences:\n• Read the great masters (even when their h-index is low).\n• Don’t try to write ten times as many papers as a great master did.\nAnd then I add some words on what influenced me when I started research — you may judge whether my own experiences that go back to “historical” times would still help you.\nBy the way, advice from historical times, where blackboards and no projectors were used, posed in an entertaining but clearly wise way, is Gian-Carlo Rota’s paper “Ten Lessons I Wish I Had Been Taught” (http://www.ams.org/notices/199701/comm-rota.pdf). This is a view of a mathematician but still worth reading and delightful for EATCS members. People like me (68 years old) are also addressed — in the last lesson “Be Prepared for Old Age”...\nBack in the 1970’s when I started I wanted to do something relevant. For me this meant that there should be some deeper problems involved, and that the subject of study is of long-term interest. I was attracted by the works of Büchi and Rabin just because of this: That was demanding, and it treated structures that will be important also in a hundred years: the natural numbers with successor, and the tree of all words (over some alphabet) with successor functions that represent the attachment of letters.\nThe next point is a variation of this. It is a motto I learnt from Büchi, and it is a warning not to join too small communities where the members just cite each other. In 1977, when he had seen my dissertation work, Büchi encouraged me to continue but also said: Beware of becoming member of an MAS, and he explained that this means “mutual admiration society”. I think that his advice was good.\nI am also asked to say something about principles for the postdoctoral phase. It takes determination and devotion to enter it.
I can say just two things, from my own experience as a young person and from later times. First, as it happens with many postdocs, in my case it was unclear up to the very last moment whether I would get a permanent position. In the end I was lucky. But it was a strain. I had already prepared for a gymnasium teacher’s career. And when at a scientific party I spoke to Saharon Shelah (one of the giants of model theory) about my worries, he said “well, there is competition”. How true. So here I just say: Don’t give away your hopes — and good luck. The other point is an observation from my time as a faculty member, and it means that good luck may be actively supported. When a position is open the people in the respective department do not just want a brilliant researcher and teacher but also a colleague. So it is an important advantage when one can prove that one has more than just one field where one can actively participate, that one can enter new topics (which anyway is necessary in a job which lasts for decades), and that one can cooperate (beyond an MAS). So for the postdoc phase this means to look for a balance between work on your own and work together with others, and if possible in different teams of cooperation.\nFinally, a comment on a research topic that excites me at this moment. I find it interesting to extend more chapters of finite automata theory to the infinite. This has been done intensively in two ways already — we know automata with infinite “state space” (e.g., pushdown automata where “states” are combined from control states and stack contents), and we know automata over infinite words (infinite sequences of symbols from a finite alphabet). Presently I am interested in words (or trees or other objects) where the alphabet is infinite, for example where a letter is a natural number, and in general where the alphabet is given by an infinite model-theoretic structure.
Infinite words over the alphabet N have been well known in mathematics for a hundred years (they are called points of the Baire space there). In computer science, one is interested in algorithmic results which have not been the focus in classical set theory and mathematics, so much is to be done here.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kpcz34Px6h",
"year": null,
"venue": "Bull. EATCS 2006",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=kpcz34Px6h",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Eight Open Problems in Distributed Computing",
"authors": [
"James Aspnes",
"Costas Busch",
"Shlomi Dolev",
"Panagiota Fatourou",
"Chryssis Georgiou",
"Alexander A. Shvartsman",
"Paul G. Spirakis",
"Roger Wattenhofer"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "qX0hZyJ9M9l",
"year": null,
"venue": "Bull. EATCS 2016",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=qX0hZyJ9M9l",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Viewpoints on \"Logic activities in Europe\", twenty years later",
"authors": [
"Luca Aceto",
"Thomas A. Henzinger",
"Joost-Pieter Katoen",
"Wolfgang Thomas",
"Moshe Y. Vardi"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "pw1dcNPcS5zR",
"year": null,
"venue": "Bull. EATCS 2018",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=pw1dcNPcS5zR",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Report on SEA 2018",
"authors": [
"Gianlorenzo D'Angelo",
"Mattia D'Emidio"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KieghpToLy",
"year": null,
"venue": "Bull. EATCS 2018",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=KieghpToLy",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks",
"authors": [
"Emilio Cruciani",
"Emanuele Natale",
"André Nusser",
"Giacomo Scornavacca"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LAryfwb0NF",
"year": null,
"venue": "Bull. EATCS 2022",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/735/777",
"forum_link": "https://openreview.net/forum?id=LAryfwb0NF",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Interviews with the 2022 CONCUR Test-of-Time Award Recipients",
"authors": [
"Luca Aceto",
"Orna Kupferman",
"Mickael Randour",
"Davide Sangiorgi"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Interviews with the 2022 CONCUR Test-of-Time Award Recipients\nLuca Aceto\nICE-TCS, Department of Computer Science, Reykjavik University\nGran Sasso Science Institute, L’Aquila\[email protected], [email protected]\nOrna Kupferman\nSchool of Computer Science and Engineering, Hebrew University, Jerusalem\[email protected]\nMickael Randour\nFaculty of Science, Mathematics Department, Université de Mons\[email protected]\nDavide Sangiorgi\nDepartment of Computer Science, University of Bologna\[email protected]\nIn 2020, the CONCUR conference series instituted its Test-of-Time Award, whose purpose is to recognise important achievements in Concurrency Theory that were published at the CONCUR conference and have stood the test of time. This year, the following four papers were chosen to receive the CONCUR Test-of-Time Awards for the periods 1998–2001 and 2000–2003 by a jury consisting of Ilaria Castellani (chair), Paul Gastin, Orna Kupferman, Mickael Randour and Davide Sangiorgi. (The papers are listed in chronological order.)\n• Christel Baier, Joost-Pieter Katoen and Holger Hermanns. Approximate symbolic model checking of continuous-time Markov chains. CONCUR 1999.\n• Franck Cassez and Kim Guldstrand Larsen. The Impressive Power of Stopwatches. CONCUR 2000.\n• James J. Leifer and Robin Milner. Deriving Bisimulation Congruences for Reactive Systems. CONCUR 2000.\n• Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga. The Element of Surprise in Timed Games.
CONCUR 2003.\nThis article is devoted to interviews with the recipients of the Test-of-Time Award. More precisely,\n• Orna Kupferman interviewed Christel Baier, Joost-Pieter Katoen and Holger Hermanns;\n• Luca Aceto interviewed Franck Cassez and Kim Guldstrand Larsen;\n• Davide Sangiorgi interviewed James Leifer; and\n• Luca Aceto and Mickael Randour jointly interviewed Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga.\nWe are very grateful to the awardees for their willingness to answer our questions and hope that the readers of this article will enjoy reading the interviews as much as we did.\nInterview with C. Baier, J.-P. Katoen and H. Hermanns\nIn what follows, BHK refers to Baier, Katoen and Hermanns.\nOrna: You receive the CONCUR Test-of-Time Award 2022 for your paper “Approximate symbolic model checking of continuous-time Markov chains,” which appeared at CONCUR 1999 [1]. In that article, you combine three different challenges: symbolic algorithms, real-time systems, and probabilistic systems. Could you briefly explain to our readers what the main challenge in such a combination is?\n[1] See https://link.springer.com/content/pdf/10.1007/3-540-48320-9_12.pdf\nBHK: The main challenge is to provide a fixed-point characterization of time-bounded reachability probabilities: the probability to reach a given target state within a given deadline. Almost all works in the field up to 1999 treated discrete-time probabilistic models and focused on “just” reachability probabilities: what is the probability to eventually end up in a given target state? This can be characterized as a unique solution of a linear equation system. The question at stake was: how to incorporate a real-valued deadline d? The main insight was to split the problem into staying a certain amount of time, x say, in the current state and using the remaining d − x time to reach the target from its successor state.
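In symbols (my rendering of the standard characterisation, with S the state space, R(s, s′) the rate from s to s′, and E(s) the exit rate of s; this is the textbook form of the argument, not copied verbatim from the paper):

```latex
% Time-bounded reachability: the probability of reaching the target set B
% from state s of a CTMC within deadline d.
\[
\mathrm{Prob}\bigl(s,\Diamond^{\leq d} B\bigr) =
\begin{cases}
1 & \text{if } s \in B,\\[4pt]
\displaystyle\int_{0}^{d} \sum_{s' \in S} R(s,s')\, e^{-E(s)\,x}\,
\mathrm{Prob}\bigl(s',\Diamond^{\leq d-x} B\bigr)\, dx & \text{otherwise:}
\end{cases}
\]
% stay x time units in s (exponentially distributed with rate E(s), split
% over the successors s'), then reach B from s' within the remaining d - x.
```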
This yields a Volterra integral equation system; indeed time-bounded reachability probabilities are unique solutions of such equation systems. In the CONCUR 1999 paper we suggested to use symbolic data structures to do the numerical integration; later we found out that much more efficient techniques can be applied.\nOrna: Could you tell us how you started your collaboration on the award-winning paper? In particular, as the paper combines three different challenges, is it the case that each of you has brought to the research different expertise?\nBHK: Christel and Joost-Pieter were both in Birmingham, where a meeting of a collaboration project between German and British research groups on stochastic systems and process algebra took place. There the first ideas of model checking continuous-time Markov chains arose, especially for time-bounded reachability: with stochastic process algebras there were means to model CTMCs in a compositional manner, but verification was lacking. Back in Germany, Holger suggested to include a steady-state operator, the counterpart of transient properties that can be expressed using timed reachability probabilities. We then also developed the symbolic data structure to support the verification of the entire logic.\nOrna: Your contribution included a generalization of BDDs (binary decision diagrams) to MTDDs (multi-terminal decision diagrams), which allow both Boolean and real-valued variables. What do you think about the current state of symbolic algorithms, in particular the choice between SAT-based methods and methods that are based on decision diagrams?\nBHK: BDD-based techniques entered probabilistic model checking in the mid-1990s for discrete-time models such as Markov chains. Our paper was one of the first, perhaps even the first, that proposed to use BDD structures for real-time stochastic processes.
Nowadays, SAT and in particular SMT-based techniques belong to the standard machinery in probabilistic model checking. SMT techniques are, e.g., used in bisimulation minimization at the language level, counterexample generation, and parameter synthesis. This includes both linear as well as non-linear theories. BDD techniques are still used, mostly in combination with sparse representations, but it is fair to say that SMT is becoming more and more relevant.\nOrna: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you’d like to see solved?\nBHK: This depends a bit on whom you ask! Christel’s recent work is about cause-effect reasoning and notions of responsibility in the verification context. This ties into the research interest of Holger who looks at the foundations of perspicuous software systems. This research is rooted in the observation that the explosion of opportunities for software-driven innovations comes with an implosion of human opportunities and capabilities to understand and control these innovations. Joost-Pieter focuses on pushing the borders of automation in weakest-precondition reasoning of probabilistic programs. This involves loop invariant synthesis, probabilistic termination proofs, the development of deductive verifiers, and so forth. Challenges are to come up with good techniques for synthesizing quantitative loop invariants, or even complete probabilistic programs.\nOrna: What advice would you give to a young researcher who is keen to start working on topics related to symbolic algorithms, real-time systems, and probabilistic systems?\nBHK: Try to keep it smart and simple.\nInterview with Franck Cassez and Kim Guldstrand Larsen\nLuca: You receive the CONCUR Test-of-Time Award 2022 for your paper “The Impressive Power of Stopwatches” [2], which appeared at CONCUR 2000.
In that article, you showed that timed automata enriched with stopwatches and unobservable time delays have the same expressive power as linear hybrid automata. Could you briefly explain to our readers what timed automata with stopwatches are? Could you also tell us how you came to study the question addressed in your award-winning article? Which of the results in your paper did you find most surprising or challenging?\n[2] See https://link.springer.com/content/pdf/10.1007/3-540-44618-4_12.pdf\nKim: Well, in timed automata all clocks grow with rate 1 in all locations of the automata. Thus you can tell the amount of time that has elapsed since a particular clock was last reset, e.g., due to an external event of interest. A stopwatch is a real-valued variable similar to a regular clock. In contrast to a clock, a stopwatch will in certain locations grow with rate 1 and in other locations grow with rate 0, i.e., it is stopped. As such, a stopwatch gives you information about the accumulated time spent in certain parts of the automata.\nIn modelling schedulability problems for real-time systems, the use of stopwatches is crucial in order to adequately capture preemption. I definitely believe that it was our shared interest in schedulability that brought us to study timed automata with stopwatches. We knew from earlier results by Alur et al. that properties such as reachability were undecidable. But what could we do about this? And how much expressive power would the addition of stopwatches provide?\nIn the paper we certainly put the most emphasis on the latter question, in that we showed that stopwatch automata and linear hybrid automata accept the same class of timed languages, and this was at least for me the most surprising and challenging result.
However, focusing on impact, I think the approximate zone-based method that we apply in the paper has been extremely important from the point of view of having our verification tool UPPAAL being taken up at large by the embedded systems community. It has been really interesting to see how well the over-approximation method actually works.\nLuca: In your article, you showed that linear hybrid automata and stopwatch automata accept the same class of timed languages. Would this result still hold if all delays were observable? Do the two models have the same expressive power with respect to finer notions of equivalence such as timed bisimilarity, say? Did you, or any other colleague, study that problem, assuming that it is an interesting one?\nKim: These are definitely very interesting questions, and should be studied. As for finer notions of equivalences, e.g., timed bisimilarity, I believe that our translation could be shown to be correct up to some timed variant of chunk-by-chunk simulation introduced by Anders Gammelgaard in his Licentiat Thesis from Aarhus University in 1991 [3]. That could be a good starting point.\nLuca: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? Which of your subsequent results on timed and hybrid automata do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising?\nKim: Looking up in DBLP, I see that I have some 28 papers containing the word “scheduling”. For sure stopwatches will have been used in one way or another in these. One thing that we never really examined thoroughly is to investigate how well the approximate zone-based technique will work when applied to the translation of linear hybrid automata into stopwatch automata. This would definitely be interesting to find out.\nThis was the first joint publication between me and Franck.
I enjoyed fully the collaboration on all the next 10 joint papers. Here the most significant ones are probably the paper at CONCUR 2005, where we presented the symbolic on-the-fly algorithms for synthesis for timed games and the branch UPPAAL TIGA. And later in a European project GASICS with Jean-Francois Raskin, we used TIGA in the synthesis of optimal and robust control of a hydraulic system.\n[3] See https://tidsskrift.dk/daimipb/article/view/6611/5733\nFranck: Using the result in our paper, we can analyse scheduling problems where tasks can be stopped and restarted, using real-time model-checking and a tool like UPPAAL.\nTo do so, we build a network of stopwatch automata modelling the set of tasks and a scheduling policy, and reduce schedulability to a safety verification problem: avoid reaching states where tasks do not meet their deadlines. Because we over-approximate the state space, our analysis may yield some false positives and may wrongly declare a set of tasks non-schedulable because the over-approximation is too coarse.\nIn the period 2003–2005, in cooperation with Francois Laroussinie we tried to identify some classes of stopwatch automata for which the over-approximation does not generate false positives. We never managed to find an interesting subclass.\nThis may look like a serious problem in terms of applicability of our result, but in practice, it does not matter too much. Most of the time, we are interested in the schedulability of a specific set of tasks (e.g., controlling a plant, a car, etc.) and for these instances, we can use our result: if we have false positives, we can refine the model tasks and scheduler and rule them out.
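The reduction Franck describes (schedulability as avoidance of bad states) can be caricatured in ordinary code. The sketch below is my own, not the stopwatch-automata model itself, and the task parameters are invented: it simulates preemptive fixed-priority scheduling of periodic tasks and reports a bad state as soon as a job misses its (implicit) deadline.

```python
def schedulable(tasks, horizon):
    """tasks: list of (period, wcet) pairs in priority order (index 0 =
    highest priority); deadlines are implicit (= period).  Simulate one
    time unit per step with preemption; return False as soon as a 'bad
    state' is reached, i.e. a job is still unfinished when its deadline
    (the next release of the same task) arrives."""
    remaining = [0] * len(tasks)              # unfinished work per task
    for t in range(horizon):
        for i, (period, wcet) in enumerate(tasks):
            if t % period == 0:               # new job of task i released
                if remaining[i] > 0:
                    return False              # deadline miss: bad state
                remaining[i] = wcet
        for i in range(len(tasks)):           # run highest-priority pending task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return True                               # no bad state within the horizon

# An under-loaded task set fits; an overloaded one reaches the bad state.
assert schedulable([(4, 1), (6, 2)], 24)
assert not schedulable([(4, 2), (6, 3)], 24)
```

In the UPPAAL setting the same safety question is asked of a network of stopwatch automata, with zone-based over-approximation in place of this exact step-by-step simulation.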
Hopefully after a few iterations of refinement, we can prove that the set of tasks is schedulable.\nThe subsequent result on timed and hybrid automata of mine that I probably like best is the one we obtained on solving optimal reachability in timed automata. We had a paper at FSTTCS in 2004 [4] presenting the theoretical results, and a companion paper at GDV 2004 [5] with an implementation using HyTech, a tool for analysing hybrid automata.\n[4] See https://doi.org/10.1007/978-3-540-30538-5_13\n[5] See https://doi.org/10.1016/j.entcs.2004.07.006\nI like these results because we ended up with a rather simple proof, after 3–4 years working on this hard problem.\nLuca: Could you tell us how you started your collaboration on the award-winning paper? I recall that Franck was a regular visitor to our department at Aalborg University for some time, but I can’t recall how his collaboration with the UPPAAL group started.\nKim: I am not quite sure I remember how and when I first met Franck. For some time we already worked substantially with French researchers, in particular from LSV Cachan (Francois Laroussinie and Patricia Bouyer). I have the feeling that there were quite some strong links between Nantes (where Franck was) and LSV on timed systems in those days. Also Nantes was the organizer of the PhD school MOVEP five times in the period 1994–2002, and I was lecturing there in one of the years, meeting Olivier Roux and Franck who were the organizers. Funny enough, this year we are organizing MOVEP in Aalborg. Anyway, at some point Franck became a regular visitor to Aalborg, often for long periods of time — playing on the squash team of the city when he was not working.\nFranck: As Kim mentioned, I was in Nantes at that time, but I was working with Francois Laroussinie who was in Cachan. Francois had spent some time in Aalborg working with Kim and his group and he helped organise a mini workshop with Kim in 1999, in Nantes.
That’s when Kim invited me to spend some time in Aalborg, and I visited Aalborg University for the first time from October 1999 until December 1999. This is when we worked on the stopwatch automata paper. We wanted to use UPPAAL to verify systems beyond timed automata.\nI visited Kim and his group almost every year from 1999 until 2007, when I moved to Australia. There were always lots of visitors at Aalborg University and I was very fortunate to be there and learn from the Masters.\nI always felt at home at Aalborg University, and loved all my visits there. The only downside was that I never managed to defeat Kim at badminton. I thought it was a gear issue, but Kim gave me his racket (I still have it) and the score did not change much.\nLuca: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you’d like to see solved?\nKim: Currently I am spending quite some time on marrying symbolic synthesis with reinforcement learning for Timed Markov Decision Processes in order to achieve optimal as well as safe strategies for Cyber-Physical Systems.\nLuca: Both Franck and you have a very strong track record in developing theoretical results and in applying them to real-life problems. In my, admittedly biased, opinion, your work exemplifies Ben Shneiderman’s Twin-Win Model [6], which propounds the pursuit of “the dual goals of breakthrough theories in published papers and validated solutions that are ready for widespread dissemination.” Could you say a few words on your research philosophy?\nKim: I completely subscribe to this.
Several early theoretical findings, such as the paper on stopwatch automata, have been key in our sustainable transfer to industry.\nFranck: Kim has been a mentor to me for a number of years now, and I certainly learned this approach/philosophy from him and his group.\n[6] See https://www.pnas.org/doi/pdf/10.1073/pnas.1802918115\nWe always started from a concrete problem, e.g., scheduling tasks/checking schedulability, and to validate the solutions, building a tool to demonstrate applicability. The next step was to improve the tool to solve larger and larger problems.\nUPPAAL is a fantastic example of this philosophy: the reachability problem for timed automata is PSPACE-complete. That would deter a number of people from trying to build tools to solve this problem. But with smart abstractions, algorithms and data-structures, and constant improvement over a number of years, UPPAAL can analyse very large and complex systems. It is amazing to see how UPPAAL is used in several areas from traffic control to planning and to precisely guiding a needle for an injection.\nLuca: What advice would you give to a young researcher who is keen to start working on topics related to formal methods?\nKim: Come to Aalborg, and participate in next year’s MOVEP.\nInterview with James Leifer\nDavide: How did the work presented in your CONCUR Test-of-Time paper come about?\nJames: I was introduced to Robin Milner by my undergraduate advisor Bernard Sufrin around 1994. Thanks to that meeting, I started with Robin at Cambridge in 1995 as a fresh Ph.D. student. Robin had recently moved from Edinburgh and had a wonderful research group, including, at various times, Peter Sewell, Adriana Compagnoni, Benjamin Pierce, and Philippa Gardner. There were also many colleagues working or visiting Cambridge interested in process calculi: Davide Sangiorgi, Andy Gordon, Luca Cardelli, Martín Abadi, ... It was an exciting atmosphere!
I was particularly close to Peter Sewell, with whom I discussed the ideas here extensively and who was generous with his guidance.

There was a trend in the community at the time of building complex process calculi (for encryption, Ambients, etc.) where the free syntax would be quotiented by a structural congruence to “stir the soup” and allow different parts of a tree to float together; reaction rules (unlabelled transitions) then would permit those agglomerated bits to react, to transform into something new.

Robin wanted to come up with a generalised framework, which he called Action Calculi, for modelling this style of process calculi. His framework would describe graph-like “soups” of atoms linked together by arcs representing binding and sharing; moreover the atoms could contain subgraphs inside of them for freezing activity (as in prefixing in the π-calculus), with the possibility of boundary-crossing arcs (similarly to how ν-bound names in the π-calculus can be used in deeply nested subterms).

Robin had an amazing talent for drawing beautiful graphs! He would “move” the nodes around on the chalkboard and reveal how a subgraph was in fact a redex (the left-hand side of an unlabelled transition). In the initial phases of my Ph.D. I just tried to understand these graphs: they were so natural to draw on the blackboard! And yet, they were also so uncomfortable to use when written out in linear tree- and list-like syntax, with so many distinct concrete representations for the same graph.

Putting aside the beauty of these graphs, what was the benefit of this framework? If one could manage to embed a process calculus in Action Calculi, using the graph structure and fancy binding and nesting to represent the quotiented syntax, what then?
We dreamt about a proposition along the following lines: if you represent your syntax (quotiented by your structural congruence) in Action Calculi graphs, and you represent your reaction rules as Action Calculi graph rewrites, then we will give you a congruential bisimulation for free!

Compared to CCS for example, many of the rich new process calculi lacked labelled transition systems. In CCS, there was a clean, simple notion of labelled transitions and, moreover, bisimulation over those labelled transitions yielded a congruence: for all processes P and Q, and all process contexts C[−], if P ∼ Q, then C[P] ∼ C[Q]. This is a key quality for a bisimulation to possess, since it allows modular reasoning about pieces of a process, something that’s so much harder in a concurrent world than in a sequential one.

Returning to Action Calculi, we set out to make good on the dream that everyone gets a congruential bisimulation for free! Our idea was to find a general method to derive labelled transition systems from the unlabelled transitions and then to prove that bisimulation built from those labelled transitions would be a congruence.

The idea was often discussed at that time that there was a duality whereby a process undergoing a labelled transition could be thought of as the environment providing a complementary context inducing the process to react. In the early labelled transition system of the π-calculus for example, I recall hearing that P undergoing the input labelled transition xy could be thought of as the environment outputting payload y on channel x to enable a τ-transition with P.

So I tried to formalise this notion that labelled transitions are environmental contexts enabling reaction, i.e. defining P −C[−]→ P′ to mean C[P] → P′ provided that C[−] was somehow “minimal”, i.e., contained nothing superfluous beyond what was necessary to trigger the reaction. We wanted to get a rigorous definition of that intuitive idea.
There was a long and difficult period (about 12 months) wandering through the weeds trying to define minimal contexts for Action Calculi graphs (in terms of minimal nodes and minimal arcs), but it was hugely complex, frustrating, and ugly, and we seemed no closer to the original goal of achieving congruential bisimulation with these labelled transition systems.

Eventually I stepped back from Action Calculi and started to work on a more theoretical definition of “minimal context”, and we took inspiration from category theory. Robin had always viewed Action Calculi graphs as categorical arrows between objects (where the objects represented interfaces for plugging together arcs). At the time, there was much discussion of category theory in the air (for game theory); I certainly didn’t understand most of it but found it interesting and inspiring.

If we imagine that processes and process-contexts are just categorical arrows (where the objects are arities) then context composition is arrow composition. Now, assuming we have a reaction rule R → R′, we can define labelled transitions P −C[−]→ P′ as follows: there exists a context D such that C[P] = D[R] and P′ = D[R′]. The first equality is a commuting diagram, and Robin and I thought that we could formalise minimality by something like a categorical pushout! But that wasn’t quite right, as C and D are not the minimum pair (compared to all other candidates), but a minimal pair: there may be many incomparable minimal pairs, all of which are witnesses of legitimate labelled transitions. There was again a long period of frustration, eventually resolved when I reinvented “relative pushouts” (in place of pushouts). They are a simple notion in slice categories, but I didn’t know that until later...
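In symbols, the derived-transition idea described above can be sketched as follows; this is a paraphrase of the interview’s informal description, not the exact categorical formulation of the paper:

```latex
% A reaction rule R -> R' induces labelled transitions whose labels
% are contexts: C[-] is an environment that lets P react.
P \xrightarrow{\;C[-]\;} P'
  \quad\stackrel{\mathrm{def}}{\Longleftrightarrow}\quad
  \exists D.\;\; C[P] = D[R] \ \text{ and } \ P' = D[R'],
% where the pair (C, D) must be "minimal": it forms a relative
% pushout rather than a pushout, since several incomparable minimal
% pairs may exist, each witnessing a legitimate labelled transition.
```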
Having found a reasonable definition of “minimal”, I worked excitedly on bisimulation, trying to get a proof of congruence: P ∼ Q implies E[P] ∼ E[Q]. For weeks, I was considering the labelled transitions of E[P] −F[−]→ and all the ways that could arise. The most interesting case is when a part of P, a part of E, and F all “conspire” together to generate a reaction. From that I was able to derive a labelled transition of P by manipulating relative pushouts, which by hypothesis yielded a labelled transition of Q, and then, via a sort of “pushout pasting”, a labelled transition E[Q] −F[−]→. It was a wonderful moment of elation when I pasted all the diagrams together on Robin’s board and we realised that we had the congruence property for our synthesised labels!

We looked back again at Action Calculi, using the notion of relative pushouts to guide us (instead of the arbitrary approach we had considered before), and we further looked at other kinds of process calculi syntax to see how relative pushouts could work there... Returning to the original motivation to make Action Calculi a universal framework with congruential bisimulation for free, I’m not convinced of its utility. But it was the challenge that led us to the journey of the relative pushout work, which I think is beautiful.

Davide: What influence did this work have on the rest of your career? How much of your subsequent work built on it?

James: It was thanks to this work that I visited INRIA Rocquencourt to discuss process calculi with Jean-Jacques Lévy and Georges Gonthier. They kindly invited me to spend a year as a postdoc in 2001 after I finished my thesis with Robin, and I ended up staying at INRIA ever since. I didn’t work on bisimulation again as a research topic, but stayed interested in concurrency and distribution for a long time, working with Peter Sewell et al. on distributed language design with module migration and rebinding, and with Cédric Fournet et al.
on compiler design for automatically synthesising cryptographic protocols for high-level session specifications.

Davide: Could you tell us about your interactions with Robin Milner? What was it like to work with him? What lessons did you learn from him?

James: I was tremendously inspired by Robin.

He would stand at his huge blackboard, his large hands covered in chalk, his bicycle clips glinting on his trousers, and he would stalk up and down the blackboard—thinking and moving. There was something theatrical and artistic about it: his thinking was done in physical movement and his drawings were dynamic as the representations of his ideas evolved across the board.

I loved his drawings. They would start simple, a circle for a node, a box for a subgraph, etc., and then develop more and more detail corresponding to his intuition. (It reminded me of descriptions I had read of Richard Feynman drawing quantum interactions.)

Sometimes I recall being frustrated because I couldn’t read into his formulas everything that he wanted to convey (and we would then switch back to drawings), or I would be worried that there was an inconsistency creeping in, or I just couldn’t keep up, so the board sessions could be a roller coaster ride at times!

Robin worked tremendously hard and consistently. He would write out and rewrite his ideas, regularly circulating handwritten documents. He would refine over and over his diagrams. Behind his achievements there was an impressive consistency of effort.

He had a lot of confidence to carry on when the sledding was hard. He had such a strong intuition of what ought to be possible that he was able to sustain years of effort to get there.

He was generous with praise, with credit, with acknowledgement of others’ ideas. He was generous in sharing his own ideas and seemed delighted when others would pick them up and carry them forward.
I’ve always admired his openness and lack of jealousy in sharing ideas.

In his personal life, he seemed to have real compatibility with Lucy (his wife), who also kept him grounded. I still laugh when I remember once working with him at his dining room table and Lucy announcing, “Robin, enough of the mathematics. It’s time to mow the lawn!”

I visited Oxford for Lucy’s funeral and recall Robin putting a brave face on his future plans; I returned a few weeks later when Robin passed away himself. I miss him greatly.

Davide: What research topics are you most interested in right now? How do you see your work develop in the future?

James: I’ve been interested in a totally different area, namely healthcare, for many years. I’m fascinated by how patients, and information about them, flow through the complex human and machine interactions in hospital. When looking at how these flows work, and how they don’t, it’s possible to see where errors arise, where blockages happen, where there are informational and visual deficits that make the job of doctors and nurses difficult. I like to think visually in terms of graphs (incrementally adding detail) and physically moving through the space where the action happens—all inspired by Robin!

Interview with Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga

In what follows, “Luca A.” refers to Luca Aceto, whereas “Luca” is Luca de Alfaro.

Luca A. and Mickael: You receive the CONCUR Test-of-Time Award 2022 for your paper “The Element of Surprise in Timed Games,” which appeared at CONCUR 2003⁷. In that article, you studied concurrent, two-player timed games.
A key contribution of your paper is the definition of an elegant timed game model, allowing both the representation of moves that can take the opponent by surprise, as they are played “faster,” and the definition of natural concepts of winning conditions for the two players—ensuring that players can win only by playing according to a physically meaningful strategy. In our opinion, this is a great example of how novel concepts and definitions can advance a research field. Could you tell us more about the origin of your model?

All authors: Mariëlle and Marco were postdocs with Luca at the University of California, Santa Cruz, in that period; Rupak was a student of Tom’s, and we were all in close touch, meeting very often to work together. We all had worked much on games, and an extension to timed games was natural for us to consider.

⁷ See https://pub.ist.ac.at/~tah/Publications/the_element_of_surprise_in_timed_games.pdf.

In untimed games, players propose a move, and the moves jointly determine the next game state. In these games there is no notion of real time. We wanted to study games in which players could decide not only the moves, but also the instant in time when to play them.

In timed automata, there is only one “player” (the automaton), which can take either a transition, or a time step. The natural generalization would be a game in which players could propose either a move, or a time step.

Yet, we were unsatisfied with this model. It seemed to us that it was different to say “Let me wait 14 seconds and reconvene. Then, let me play my King of Spades” or “Let me play my King of Spades in 14 seconds.” In the first, by stopping after 14 seconds, the player is providing a warning that the card might be played. In the second, there is no such warning. In other words, if players propose either a move or a time step, they cannot take the adversary by surprise with a move at an unanticipated instant.
We wanted a model that could capture this element of surprise.

To capture the element of surprise, we came up with a model in which players propose both a move and the delay with which it is played. After this natural insight, the difficulty was to find the appropriate winning condition, so that a player could not win by stopping time.

Tom: Besides the infinite state space (region construction etc.), a second issue that is specific to timed systems is the divergence of time. Technically, divergence is a built-in Büchi condition (“there are infinitely many clock ticks”), so all safety and reachability questions about timed systems are really co-Büchi and Büchi questions, respectively. This observation had been part of my work on timed systems since the early 1990s, but it has particularly subtle consequences for timed games, where no player (and no collaboration of players) should have the power to prevent time from diverging. This had to be kept in mind during the exploration of the modeling space.

All authors: We came up with many possible winning conditions, and for each we identified some undesirable property, except for the one that we published. This is in fact an aspect that did not receive enough attention in the paper; we presented the chosen winning condition, but we did not discuss in full detail why several other conditions that might have seemed plausible did not work.

In the process of analyzing the winning conditions, we came up with many interesting games, which form the basis of many results, such as the result on the lack of determinization, on the need for memory in reachability games (even when clock values are part of the state), and, most famously, as it gave the title to the paper, on the power of surprise.

After this fun ride came the hard work, where we had to figure out how to solve these games.
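Schematically, the “move plus delay” idea the authors describe can be pictured as follows; this is an illustrative paraphrase, and the paper’s actual definitions (in particular the winning condition and the treatment of ties and time divergence) are considerably more subtle:

```latex
% Each player i simultaneously proposes a timed move: a delay and an action,
%   m_i = \langle \Delta_i, a_i \rangle, \qquad \Delta_i \in \mathbb{R}_{\ge 0}.
% The shorter delay wins the round, which is what lets a player take the
% opponent by surprise at an unanticipated instant:
\mathit{outcome}(m_1, m_2) =
\begin{cases}
  \langle \Delta_1, a_1 \rangle & \text{if } \Delta_1 < \Delta_2, \\
  \langle \Delta_2, a_2 \rangle & \text{if } \Delta_2 < \Delta_1, \\
  \text{either move, nondeterministically} & \text{if } \Delta_1 = \Delta_2.
\end{cases}
```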
We had worked at symbolic approaches to games before, and we followed the approach here, but there were many complex technical adaptations required. When we look at the paper in the distance of time, it has this combination of a natural game model, but also of a fairly sophisticated solution algorithm.

Luca A. and Mickael: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? If so, which of your subsequent results on (timed) games do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising?

Luca: Marco and I built Ticc, which was meant to be a tool for timed interface theories, based largely on the insights in this paper. The idea was to be able to check the compatibility of real-time systems, and automatically infer the requirements that enable two system components to work well together—to be compatible in time. We thought this would be useful for hardware or embedded systems, and especially for control systems, and in fact the application is important: there is now much successful work on the compositionality of Stateflow/Simulink models.

We used MTBDDs as the symbolic engine, and Marco and I invented a language for describing the components, and we wrote by pair-programming some absolutely beautiful OCaml code that compiled real-time component models into MTBDDs (perhaps the nicest code I have ever written).
The problem was that we were too optimistic in our approach to state explosion, and we were never able to study any system of realistic size.

After this, I became interested in games more in an economic setting, and from there I veered into incentive systems, and from there to reputation systems and to a three-year period in which I applied reputation systems in practice in industry, thus somewhat losing touch with formal methods work.

Marco: I’ve kept working on games since the award-winning paper, in one way or another. The closest I’ve come to the timed game setting has been with controller synthesis games for hybrid automata. In a series of papers, we had fun designing and implementing symbolic algorithms that manipulate polyhedra to compute the winning region of a linear hybrid game. The experience gained on timed games helped me recognize the many subtleties arising in games played in real time on a continuous state space.

Mariëlle: I have been working on games for test case generation: one player represents the tester, which chooses inputs to test; the other player represents the System-under-Test, and chooses the outputs of the system. Strategy synthesis algorithms can then compute strategies for the tester that maximize all kinds of objectives, e.g., reaching certain states, test coverage, etc.

A result that I really like is that we were able to show a very close correspondence between the existing testing frameworks and game-theoretic frameworks: specifications act as game arenas; test cases are exactly game strategies; and the conformance relation used in testing (namely ioco) coincides with game refinement (i.e., alternating refinement).

Rupak: In an interesting way, the first paper on games I read was the one by Maler, Pnueli and Sifakis (STACS 1995)⁸ that had both fixpoint algorithms and timed games (without “surprise”).
So the problem of symbolic solutions to games and their applications in synthesis followed me throughout my career. I moved to finding controllers for games with more general (non-linear) dynamics, where we worked on abstraction techniques. We also realized some new ways to look at restricted classes of adversaries. I was always fortunate to have very good collaborators who kept my interest alive with new insights. Very recently, I have gotten interested in games from a more economic perspective, where players can try to signal each other or persuade each other about private information, but it’s too early to tell where this will lead.

Luca A. and Mickael: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you’d like to see solved?

Mariëlle: Throughout my academic life, I have been working on stochastic analysis; with Luca and Marco, we worked on stochastic games a lot. First only on theory, but later also on industrial applications, especially in the railroad and high-tech domain. At some point in time, I realized that my work was actually centred around analysing failure probabilities and risk. That is how I moved into risk analysis; the official title of the chair I hold is Risk Management for High Tech Systems.

The nice thing is: this sells much better than Formal Methods! Almost nobody knows what Formal Methods are, and if they know, people think “yes, those difficult people who urge us to specify everything mathematically.” For risk management, this is completely different: everybody understands that this is an important area.

Luca: I am currently working on computational ecology, on machine learning (ML) for networks, and on fairness in data and ML. In computational ecology, we are working on the role of habitat and territory for species viability.
We use ML techniques to write “differentiable algorithms,” where we can compute the effect of each input, such as the kind of vegetation in each square kilometer of territory, on the output. If all goes well, this will enable us to efficiently compute which regions should be prioritized for protection and habitat conservation.

⁸ See https://www-verimag.imag.fr/~sifakis/RECH/Synth-MalerPnueli.pdf.

In networks, we have been able to show that reinforcement learning can yield tremendous throughput gains in wireless protocols, and we are now starting to work on routing and congestion control.

And in fairness and ML, we have worked on the automatic detection of anomalous data subgroups (something that can be useful in model diagnostics), and we are now working on the spontaneous inception of discriminatory behavior in agent systems.

While these do not really constitute a coherent research effort, I can certainly say that I am having a grand tour of computer science—the kind of joy ride one can afford with tenure!

Rupak: I have veered between practical and theoretical problems. I am working on charting the decidability frontier for infinite-state model checking problems (most recently, for asynchronous programs and context-bounded reachability). I am also working on applying formal methods to the world of cyber-physical systems—mostly games and synthesis. Finally, I have become very interested in applying formal methods to large-scale industrial systems through a collaboration with Amazon Web Services. There is still a large gap between what is theoretically understood and what is practically applicable to these systems; and the problems are a mix of technical and social.

Luca A. and Mickael: You have a very strong track record in developing theoretical results and in applying them to real-life problems.
In our, admittedly biased, opinion, your work exemplifies Ben Shneiderman’s Twin-Win Model, which propounds the pursuit of “the dual goals of breakthrough theories in published papers and validated solutions that are ready for widespread dissemination.” Could you say a few words on your research philosophy? How do you see the interplay between basic and applied research?

Luca: This is very kind of you to say, and a bit funny to hear, because certainly when I was young I had a particular talent for getting lost in useless theoretical problems.

I think two things played in my favor. One is that I am curious. The other is that I have a practical streak: I still love writing code and tinkering with “things,” from IoT to biology to the web and more. This tinkering was at the basis of many of the works I did. My work on reputation systems started when I created a wiki on cooking; people were vandalizing it, and I started to think about game theory and incentives for collaboration, which led to my writing much of the code for Wikipedia analysis, and at Google, for Maps edits analysis. My work on networks started with me tinkering with simple reinforcement-learning schemes that might work, and writing the actual code. On the flip side, my curiosity too often had the better of me, so that I have been unable to pay continuous and devoted attention to a single research field. I am not a specialist in any single thing I do or have done. I am always learning the ropes of something I don’t quite know yet how to do.

My applied streak probably gave me some insight on which problems might be of more practical relevance, and my frequent field changes have allowed me to bring new perspectives to old problems.
There were not many people using reinforcement learning for wireless networks, and there are not many who write ML and GPU code and also avidly read about conservation biology.

Rupak: I must say that Tom and Luca were very strong influences on me in my research: both in problem selection and in appreciating the joy of research. I remember one comment of Tom’s, paraphrased as “Life is short. We should write papers that get read.” I spent countless hours in Luca’s office and learnt a lot of things about research, coffee, the ideal way to make pasta, and so on.

Marco: It was an absolute privilege to be part of the group that wrote that paper (my 4th overall, according to DBLP). I’d like to thank my coauthors, and Luca in particular, for guiding me during those crucially formative years.

Mariëlle: I fully agree!

Luca A. and Mickael: Several of you have high-profile leadership roles at your institutions. What advice would you give to a colleague who is about to take up the role of department chair, director of a research centre, dean or president of a university? How can one build a strong research culture, stay research active and live to tell the tale?

Luca: My colleagues may have better advice; my productivity certainly decreased when I was department chair, and is lower even now that I am the vice-chair. When I was young, I was ambitious enough to think that my scientific work would have the largest impact among the things I was doing. But I soon realized that some of the greatest impact was on others: on my collaborators, on the students I advised, who went on to build great careers and stayed friends, and on all the students I was teaching. This awareness serves to motivate and guide me in my administrative work.
The Computer Science department at the University of California, Santa Cruz, is one of the ten largest in the number of students we graduate, and the time I spend on improving its organization and the quality of the education it delivers is surely very impactful. My advice to colleagues is to consider their service not as an impediment to research, but as one of the most impactful things they do.

My way of staying alive is to fence off some days that I dedicate only to research (aside from some unavoidable emergency), and also to have collaborators that give me such joy in working together that they brighten and energize my whole day.

Luca A. and Mickael: Finally, what advice would you give to a young researcher who is keen to start working on topics related to concurrency theory today?

Luca: Oh, that sounds very interesting! And, may I show you this very interesting thing we are doing in JAX to model bird dispersal? We feed in this climate and vegetation data, and then we...

Just kidding. Just kidding. If I come to CONCUR I promise not to lead any of the concurrency yearlings astray. At least I will try.

My main advice would be this: work on principles that allow correct-by-design development. If you look at programming languages and software engineering, the progress in software productivity has not happened because people have become better at writing and debugging code written in machine language or C. It has happened because of the development of languages and software principles that make it easier to build large systems that are correct by construction. We need the same kind of principles, (modeling) languages, and ideas to build correct concurrent systems. Verification alone is not enough.
Work on design tools, ideas to guide design, and design languages.

Tom: In concurrency theory we define formalisms and study their properties. Most papers do the studying, not the defining: they take a formalism that was defined previously, by themselves or by someone else, and study a property of that formalism, usually to answer a question that is inspired by some practical motivation. To me, this omits the most fun part of the exercise, the defining part. The point I am trying to make is not that we need more formalisms, but that, if one wishes to study a specific question, it is best to study the question on the simplest possible formalism that exhibits exactly the features that make the question meaningful. To do this, one often has to define that formalism. In other words, the formalism should follow the question, not the other way around. This principle has served me well again and again and led to formalisms such as timed games, which try to capture the essence needed to study the power of timing in strategic games played on graphs. So my advice to a young researcher in concurrency theory is: choose your formalism wisely and don’t be afraid to define it.

Rupak: Problems have different measures. Some are practically justified (“Is this practically relevant in the near future?”) and some are justified by the foundations they build (“Does this avenue provide new insights and tools?”). Different communities place different values on the two. But both kinds of work are important, and one should recognize that one set of values is not universally better than the other.

Mariëlle: As Michael Jordan puts it: “Just play. Have fun. Enjoy the game.”",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "q67drSyqNf",
"year": null,
"venue": "Bull. EATCS 2020",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/629/644",
"forum_link": "https://openreview.net/forum?id=q67drSyqNf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Money Transfer Made Simple: a Specification, a Generic Algorithm, and its Proof",
"authors": [
"Alex Auvolat",
"Davide Frey",
"Michel Raynal",
"François Taïani"
],
"abstract": "It has recently been shown that, contrarily to a common belief, money transfer in the presence of faulty (Byzantine) processes does not require strong agreement such as consensus. This article goes one step further: namely, it first proposes a non-sequential specification of the money-transfer object, and then presents a generic algorithm based on a simple FIFO order between each pair of processes that implements it. The genericity dimension lies in the underlying reliable broadcast abstraction which must be suited to the appropriate failure model. Interestingly, whatever the failure model, the money transfer algorithm only requires adding a single sequence number to its messages as control information. Moreover, as a side effect of the proposed algorithm, it follows that money transfer is a weaker problem than the construction of a safe/regular/atomic read/write register in the asynchronous message-passing crash-prone model.",
"keywords": [],
"raw_extracted_content": "The Distributed Computing Column
by
Stefan Schmid
University of Vienna
Währinger Strasse 29, AT - 1090 Vienna, Austria
[email protected]
In this issue of the distributed computing column, Alex Auvolat, Davide Frey, Michel Raynal, and François Taïani revisit the basic problem of how to reliably transfer money. Interestingly, the authors show that a simple algorithm is sufficient to solve this problem, even in the presence of Byzantine processes.
I would like to point out that this issue of the EATCS Bulletin (but in a different section) further includes a summary of the PODC/DISC conference models proposed by the task force commissioned at the PODC 2020 business meeting, and presents and discusses the survey results. I hope it will be helpful and can serve as a basis for further discussions on this topic. Note that this second article appears in a dedicated section of the Bulletin, together with related articles.
I would like to thank Alex and his co-authors as well as the PODC/DISC task force for their contribution to the EATCS Bulletin. Special thanks go to everyone who contributed to the conference model survey, also at the PODC business meeting and via Zulip.
Enjoy the new distributed computing column!

Money Transfer Made Simple:
a Specification,
a Generic Algorithm, and its Proof

Alex Auvolat*,†, Davide Frey†, Michel Raynal†,‡, François Taïani†
* École Normale Supérieure, Paris, France
† Univ Rennes, Inria, CNRS, IRISA, 35000 Rennes, France
‡ Department of Computing, Polytechnic University, Hong Kong

Abstract
It has recently been shown that, contrarily to a common belief, money transfer in the presence of faulty (Byzantine) processes does not require strong agreement such as consensus.
This article goes one step further:\nnamely, it first proposes a non-sequential specification of the money-transfer\nobject, and then presents a generic algorithm based on a simple FIFO order\nbetween each pair of processes that implements it. The genericity dimension\nlies in the underlying reliable broadcast abstraction which must be suited to\nthe appropriate failure model. Interestingly, whatever the failure model, the\nmoney transfer algorithm only requires adding a single sequence number to\nits messages as control information. Moreover, as a side effect of the pro-\nposed algorithm, it follows that money transfer is a weaker problem than the\nconstruction of a safe/regular/atomic read/write register in the asynchronous\nmessage-passing crash-prone model.\nKeywords : Asynchronous message-passing system, Byzantine process, Dis-\ntributed computing, Efficiency, Fault tolerance, FIFO message order, Mod-\nularity, Money transfer, Process crash, Reliable broadcast, Simplicity.\n1 Introduction\nShort historical perspective Like field-area or interest-rate computations, money\ntransfers have had a long history (see e.g., [21, 27]). Roughly speaking, when\nlooking at money transfer in today’s digital era, the issue consists in building a\nsoftware object that associates an account with each user and provides two oper-\nations, one that allows a process to transfer money from one account to another\nand one that allows a process to read the current value of an account.\nThe main issue of money transfer lies in the fact that the transfer of an amount\nof money vby a user to another user is conditioned to the current value of the\nformer user’s account being at least v. A violation of this condition can lead\nto the problem of double spending (i.e., the use of the same money more than\nonce), which occurs in the presence of dishonest processes. 
Another important\nissue of money transfer resides in the privacy associated with money accounts.\nThis means that a full solution to money transfer must address two orthogonal\nissues: synchronization (to guarantee the consistency of the money accounts) and\nconfidentiality/security (usually solved with cryptography techniques). Here, like\nin closely related work [14], we focus on synchronization.\nFully decentralized electronic money transfer was introduced in [25] with the\nBitcoin cryptocurrency in which there is no central authority that controls the\nmoney exchanges issued by users. From a software point of view, Bitcoin adopts\na peer-to-peer approach, while from an application point of view it seems to have\nbeen motivated by the 2008 subprime crisis [32].\nTo attain its goal Bitcoin introduced a specific underlying distributed software\ntechnology called blockchain , which can be seen as a specific distributed state-\nmachine-replication technique, the aim of which is to provide its users with an\nobject known as a concurrent ledger . Such an object is defined by two operations,\none that appends a new item in such a way that, once added, the item cannot be re-\nmoved, and a second operation that atomically reads the full list of items currently\nappended. Hence, a ledger builds a total order on the invocations of its operations.\nWhen looking at the synchronization power provided by a ledger in the presence\nof failures, measured with the consensus-number lens, it has been shown that the\nsynchronization power of a ledger is +1[13, 30]. In a very interesting way, re-\ncent work [14] has shown that, in a context where each account has a single owner\nwho can spend the money currently in his/her account, the consensus number of\nthemoney-transfer concurrent object is 1. 
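The ledger object described above (an append operation that adds an item irrevocably, and a read operation that atomically returns the full list of items appended so far) can be sketched in a few lines of Python. This is only an illustration of the object's interface, not Bitcoin's implementation; all names are ours:

```python
class Ledger:
    """Sketch of the concurrent ledger object: items can be appended but
    never removed, and a read atomically returns the full list so far."""

    def __init__(self):
        self._items = []

    def append(self, item):
        # Once added, an item cannot be removed.
        self._items.append(item)

    def read(self):
        # Atomic read of the full list of items appended so far.
        # A copy is returned so callers cannot mutate the ledger.
        return list(self._items)
```

In a distributed setting the difficulty is of course not this data structure itself but making all processes agree on a single total order of appends, which is where the consensus-number-infinity cost comes from.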
An owner is represented by a process\nin the following.\nThis is an important result, as it shows that the power of blockchain tech-\nnology is much stronger (and consequently more costly) than necessary to im-\nplement money transfer1. To illustrate this discrepancy, considering a sequential\nspecification of the money transfer object, the authors of [14] show first that, in\na failure-prone shared-memory system, money transfer can be implemented on\ntop of a snapshot object [1] (whose consensus number is 1, and consequently\n1As far as we know, the fact that consensus is not necessary to implement money transfer was\nstated for the first time in [15].\ncan be implemented on top of read/write atomic registers). Then, they appropri-\nately modify their shared-memory algorithm to obtain an algorithm that works\nin asynchronous failure-prone message-passing systems. To allow the processes\nto correctly validate the money transfers, the resulting algorithm demands them\nto capture the causality relation linking money transfers and requires each mes-\nsage to carry control information encoding the causal past of the money transfer\nit carries.\nContent of the article The present article goes even further. It first presents a\nnon-sequential specification of the money transfer object2, and then shows that,\ncontrarily to what is currently accepted, the implementation of a money transfer\nobject does not require the explicit capture of the causality relation linking individ-\nual money transfers. To this end, we present a surprisingly simple yet efficient and\ngeneric money-transfer algorithm that relies on an underlying reliable-broadcast\nabstraction. It is efficient as it only requires a very small amount of meta-data in\nits messages: in addition to money-transfer data, the only control information car-\nried by the messages of our algorithm is reduced to a single sequence number. 
It is\ngeneric in the sense that it can accommodate different failure models with no mod-\nification . More precisely, our algorithm inherits the fault-tolerance properties of\nits underlying reliable broadcast: it tolerates crashes if used with a crash-tolerant\nreliable broadcast, and Byzantine faults if used with a Byzantine-tolerant reliable\nbroadcast.\nGiven an n-process system where at most tprocesses can be faulty, the pro-\nposed algorithm works for t<nin the crash failure model, and t<n=3in the\nByzantine failure model. This has an interesting side effect on the distributed\ncomputability side. Namely, in the crash failure model, money transfer consti-\ntutes a weaker problem than the construction of a safe/regular/atomic read/write\nregister (where “weaker” means that—unlike a read/write register—it does not\nrequire the “majority of non-faulty processes” assumption).\nRoadmap The article consists of 7 sections. First, Section 2 introduces the dis-\ntributed failure-prone computing models in which we are interested, and Section 3\nprovides a definition of money transfer suited to these computing models. Then,\nSection 4 presents a very simple generic money-transfer algorithm. Its instanti-\nations and the associated proofs are presented in Section 5 for the crash failure\nmodel and in Section 6 for the Byzantine failure model. Finally, Section 7 con-\ncludes the article.3\n2To our knowledge, this is the first non-sequential specification of the money transfer object\nproposed so far.\n3Let us note that similar ideas have been developed concomitantly and independently in [10],\nwhich presents a money transfer system and its experimental evaluation.\n2 Distributed Computing Models\n2.1 Process failure model\nProcess model The system comprises a set of nsequential asynchronous pro-\ncesses, denoted p1, ...,pn4. 
Sequential means that a process invokes one operation\nat a time, and asynchronous means that each process proceeds at its own speed,\nwhich can vary arbitrarily and always remains unknown to the other processes.\nTwo process failure models are considered. The model parameter tdenotes\nan upper bound on the number of processes that can be faulty in the considered\nmodel. Given an execution r(run) a process that commits failures in ris said to\nbe faulty in r, otherwise it is non-faulty (or correct) in r.\nCrash failure model In this model, processes may crash. A crash is a premature\ndefinitive halt. This means that, in the crash failure model, a process behaves\ncorrectly (i.e., executes its algorithm) until it possibly crashes. This model is\ndenoted CAMP n;t[;](Crash Asynchronous Message Passing ). When tis restricted\nnot to bypass a bound f(n), the corresponding restricted failure model is denoted\nCAMP n;t[t\u0014f(n)].\nByzantine failure model In this model, processes can commit Byzantine fail-\nures [23, 28], and those that do so are said to be Byzantine. A Byzantine failure\noccurs when a process does not follow its algorithm. Hence a Byzantine process\ncan stop prematurely, send erroneous messages, send different messages to dis-\ntinct processes when it is assumed to send the same message, etc. Let us also\nobserve that, while a Byzantine process can invoke an operation which generates\napplication messages5it can also “simulate” this operation by sending fake im-\nplementation messages that give their receivers the illusion that they have been\ngenerated by a correct sender. 
However, we assume that there is no Sybil attack, like most previous work on Byzantine fault tolerance, including [14].6
As previously, the notations BAMP_{n,t}[∅] and BAMP_{n,t}[t ≤ f(n)] (Byzantine Asynchronous Message Passing) are used to refer to the corresponding Byzantine failure models.
4 Hence the system we consider is static (according to the distributed computing community parlance) or permissioned (according to the blockchain community parlance).
5 An application message is a message sent at the application level, while an implementation message is a low-level message used to ensure the correct delivery of an application message.
6 As an example, a Byzantine process can neither spawn new identities nor assume the identity of existing processes.
2.2 Underlying complete point-to-point network
The processes communicate through an underlying message-passing point-to-point network in which there exists a bidirectional channel between any pair of processes. Hence, when a process receives a message, it knows which process sent this message. For simplicity, in writing the algorithms, we assume that a process can send messages to itself.
Each channel is reliable and asynchronous. Reliable means that a channel does not lose, duplicate, or corrupt messages. Asynchronous means that the transit delay of each message is finite but arbitrary. Moreover, in the case of the Byzantine failure model, a Byzantine process can read the content of the messages exchanged through the channels, but cannot modify their content.
To make our algorithm as generic and simple as possible, Section 4 does not present it in terms of low-level send/receive operations7 but in terms of a high-level communication abstraction, called reliable broadcast (e.g., [7, 9, 16, 19, 30]). The definition of this communication abstraction appears in Section 5 for the crash failure model and Section 6 for the Byzantine failure model. 
It is important\nto note that the previously cited reliable broadcast algorithms do not use sequence\nnumbers. They only use different types of implementation messages which can\nbe encoded with two bits.\n3 Money Transfer: a Formal Definition\nMoney transfer: operations From an abstract point of view, a money-transfer\nobject can be seen as an abstract array ACCOUNT [1::n]where ACCOUNT [i]rep-\nresents the current value of pi’s account. This object provides the processes with\ntwo operations denoted balance ()and transfer (), whose semantics are defined\nbelow. The transfer by a process of the amount of money vto a process pjis\nrepresented by the pair hj;vi. Without loss of generality, we assume that a process\ndoes not transfer money to itself. It is assumed that each ACCOUNT [i]is initial-\nized to a non-negative value denoted init [i]. It is assumed the array init [1::n]\nis initially known by all the processes.8\nInformally, when piinvokes balance (j)it obtains a value (as defined below)\nofACCOUNT [j], and when it invokes the transfer hj;vi, the amount of money\nvis moved from ACCOUNT [i]toACCOUNT [j]. If the transfer succeeds, the\noperation returns commit , if it fails it returns abort .\n7Actually the send and receive operations can be seen as “machine-level” instructions provided\nby the network.\n8It is possible to initialize some accounts to negative values. In this case, we must assume\npos>neg, where pos(resp., neg) is the sum of all the positive (resp., negative) initial values.\nHistories The following notations and definitions are inspired from [2].\n\u000fA local execution history (or local history) of a process pi, denoted Li, is a\nsequence of operations balance ()andtransfer ()issued by pi. 
If an operation op1 precedes an operation op2 in Li, we say that “op1 precedes op2 in process order”, which is denoted op1 →i op2.
• An execution history (or history) H is a set of n local histories, one per process, H = (L1, ..., Ln).
• A serialization S of a history H is a sequence that contains all the operations of H and respects the process order →i of each process pi.
• Given a history H and a process pi, let Ai,T(H) denote the history (L′1, ..., L′n) such that
– L′i = Li, and
– for any j ≠ i: L′j contains only the transfer operations of pj.
Notations
• An operation transfer(j, v) invoked by pi is denoted trfi(j, v).
• An invocation of balance(j) that returns the value v is denoted blc(j)=v.
• Let H be a set of operations.
– plus(j, H) = Σ_{trfk(j,v) ∈ H} v (total of the money given to pj in H).
– minus(j, H) = Σ_{trfj(k,v) ∈ H} v (total of the money given by pj in H).
– acc(j, H) = init[j] + plus(j, H) − minus(j, H) (value of ACCOUNT[j] according to H).
• Given a history H and a process pi, let Si be a serialization of Ai,T(H) (hence, Si respects the n process orders defined by H). Let →Si denote the total order defined by Si.
Money-transfer-compliant serialization A serialization Si of Ai,T(H) is money-transfer compliant (MT-compliant) if:
• For any operation trfj(k, v) ∈ Si, we have v ≤ acc(j, {op ∈ Si | op →Si trfj(k, v)}), and
• For any operation blc(j)=v ∈ Si, we have v = acc(j, {op ∈ Si | op →Si blc(j)=v}).
MT-compliance is the key concept at the basis of the definition of a money-transfer object. 
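The helpers plus, minus and acc defined above can be sketched in Python. We assume a hypothetical encoding in which a set of transfer operations is given as a list of tuples (j, k, v), meaning process j transfers amount v to process k; the function names mirror the paper's notation but the encoding is ours:

```python
# Sketch of the account-value helpers. A history H is encoded as a list of
# transfer operations trf_j(k, v) = (j, k, v): process j sends v to process k.

def plus(j, H):
    # Total money given *to* p_j in H.
    return sum(v for (_, k, v) in H if k == j)

def minus(j, H):
    # Total money given *by* p_j in H.
    return sum(v for (s, _, v) in H if s == j)

def acc(j, H, init):
    # Value of ACCOUNT[j] according to H: init[j] + plus(j, H) - minus(j, H).
    return init[j] + plus(j, H) - minus(j, H)
```

For instance, with init = {1: 10, 2: 0} and H = [(1, 2, 4), (2, 1, 1)], acc gives 7 for p1 and 3 for p2.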
It states that it is possible to associate each process piwith a total order Si\nin which (a) each of its invocations of balance (j)returns a value vequal to pj’s\naccount’s current value according to Si, and (b) processes transfer only money\nthat they have.\nLet us observe that the common point among the serializations S1, ...,Snlies\nin the fact that each process sees all the transfer operations of any other process pj\nin the order they have been produced (as defined by Lj), and sees its own transfer\nand balance operations in the order it produced them (as defined by Li).\nMoney transfer in CAMP n;t[;]Considering the CAMP n;t[;]model, a money-\ntransfer object is an object that provides the processes with balance ()andtransfer ()\noperations and is such that, for each of its executions, represented by the corre-\nsponding history H, we have:\n\u000fAll the operations invoked by correct processes terminate.\n\u000fFor any correct process pi, there is an MT-compliant serialization Siof\nAi;T(H), and\n\u000fFor any faulty process pi, there is a history H0=(L0\n1;:::;L0\nn)where (a) L0\njis\na prefix of Ljfor any j,i, and (b) L0\ni=Li, and there is an MT-compliant\nserialization of Ai;T(H0).\nAn algorithm implementing a money transfer object is correct in CAMP n;t[;]if\nit produces only executions as defined above. We then say that the algorithm is\nMT-compliant.\nMoney transfer in BAMP n;t[;]The main differences between money transfer in\nCAMP n;t[;]andBAMP n;t[;]lies in the fact that a faulty process can try to transfer\nmoney it does not have, and try to present different behaviors with respect to\ndifferent correct processes. This means that, while the notion of a local history Li\nis still meaningful for a non-Byzantine process, it is not for a Byzantine process.\nFor a Byzantine process, we therefore define a mock local history for a process pi\nas any sequence of transfer operations from pi’s account9. 
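The MT-compliance condition defined above can be checked mechanically on a given serialization: replay the sequence and verify that every transfer spends only money its issuer currently holds, and that every balance read returns the account value determined by the prefix so far. The sketch below uses an illustrative encoding of our own, where a transfer is ('trf', j, k, v) and a balance event is ('blc', j, v):

```python
def mt_compliant(S, init):
    # Replay serialization S against initial accounts `init`, checking the
    # two MT-compliance conditions from the specification.
    account = dict(init)
    for op in S:
        if op[0] == 'trf':
            _, j, k, v = op
            if v > account[j]:
                return False       # p_j would spend money it does not have
            account[j] -= v
            account[k] += v
        else:                      # ('blc', j, v)
            _, j, v = op
            if v != account[j]:
                return False       # balance read disagrees with its prefix
    return True
```

This checker validates one serialization at a time; MT-compliance of an execution asks that such a serialization exists for each correct process.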
In this definition, the\nmock local history Liassociated with a Byzantine process piis not necessarily the\nlocal history it produced, it is only a history that it could have produced from the\npoint of view of the correct processes. The correct processes implement a money-\ntransfer object if they all behave in a manner consistent with the same set of mock\nlocal histories for the Byzantine processes. More precisely, we define a mock\nhistory associated with an execution on a money transfer object in BAMP n;t[;]as\n˜H=(˜L1;:::;˜Ln)where:\n˜Lj=8>><>>:Ljifpjis correct,\namock local history ifpjis Byzantine.\n9Let us remind that the operations balance ()issued by a Byzantine can return any value. So\nthey are not considered in the mock histories associated with Byzantine processes.\nConsidering the BAMP n;t[;]model, a money transfer object is such that, for each\nof its executions, there exists a mock history ˜Hsuch that for any correct process pi,\nthere is an MT-compliant serialization SiofAi;T(˜H). An algorithm implementing\nsuch executions is said to be MT-compliant.\nConcurrent vs sequential specification Let us notice that the previous spec-\nification considers money transfer as a concurrent object. More precisely and\ndifferently from previous specifications of the money transfer object, it does not\nconsider it as a sequential object for which processes must agree on the very\nsame total order on the operations they issue [17]. As a simple example, let us\nconsider two processes piandpjthat independently issue the transfers trfi(k;v)\nandtrfj(k;v0)respectively. The proposed specification allows these transfers (and\nmany others) to be seen in different order by different processes. 
As far as we\nknow, this is the first specification of money transfer as a non-sequential object.\n4 A Simple Generic Money Transfer Algorithm\nThis section presents a generic algorithm implementing a money transfer object.\nAs already said, its generic dimension lies in the underlying reliable broadcast\nabstraction used to disseminate money transfers to the processes, which depends\non the failure model.\n4.1 Reliable broadcast\nReliable broadcast provides two operations denoted r_broadcast ()andr_deliver ().\nBecause a process is assumed to invoke reliable broadcast each time it issues a\nmoney transfer, we use a multi-shot reliable broadcast, that relies on explicit se-\nquence numbers to distinguish between its different instances (more on this be-\nlow). Following the parlance of [16] we use the following terminology: when a\nprocess invokes r_broadcast (sn;m), we say it “r-broadcasts the message mwith\nsequence number sn”, and when its invocation of r_deliver ()returns it a pair\n(sn;m), we say it “r-delivers mwith sequence number sn”. While definitions of re-\nliable broadcast suited to the crash failure model and the Byzantine failure model\nwill be given in Section 5 and Section 6, respectively, we state their common\nproperties below.\n\u000fValidity. This property states that there is no message creation. To this end,\nit relates the outputs (r-deliveries) to the inputs (r-broadcasts). Excluding\nmalicious behaviors, a message that is r-delivered has been r-broadcast.\n\u000fIntegrity. This property states that there is no message duplication.\n\u000fTermination-1. This property states that correct processes r-deliver what\nthey broadcast.\n\u000fTermination-2. 
This property relates the sets of messages r-delivered by\ndifferent processes.\nThe Termination properties ensure that all the correct processes r-deliver the same\nset of messages, and that this set includes at least all the messages that they r-\nbroadcast.\nAs mentioned above, sequence numbers are used to identify different instances\nof the reliable broadcast. Instead of using an underlying FIFO-reliable broadcast\nin which sequence numbers would be hidden, we expose them in the input/output\nparameters of the r_broadcast ()and r_deliver ()operations, and handle their up-\ndates explicitly in our generic algorithm. This reification10allows us to capture\nexplicitly the complete control related to message r-deliveries required by our al-\ngorithm. As we will see, it follows that the instantiations of the previous Integrity\nproperty (crash and Byzantine models) will explicitly refer to “upper layer” se-\nquence numbers.\nWe insist on the fact that the reliable broadcast abstraction that the proposed\nalgorithm depends on does not itself provide the FIFO ordering guarantee. It only\nuses sequence numbers to identify the different messages sent by a process. 
As\nexplained in the next section, the proposed generic algorithm implements itself\nthe required FIFO ordering property.\n4.2 Generic money transfer algorithm: local data structures\nAs said in the previous section, init [1::n]is an array of constants, known by all\nthe processes, such that init [k]is the initial value of pk’s account, and a transfer\nof the quantity vfrom a process pito a process pkis represented by the pair hk;vi.\nEach process pimanages the following local variables:\n\u000fsni: integer variable, initialized to 0, used to generate the sequence numbers\nassociated with the transfers issued by pi(it is important to notice that the\npoint-to-point FIFO order realized with the sequence numbers is the only\n“causality-related” control information used in the algorithm).\n\u000fdeli[1::n]: array initialized to [0;\u0001\u0001\u0001;0]such that deli[j]is the sequence\nnumber of the last transfer issued by pjand locally processed by pi.\n\u000faccount i[1::n]: array, initialized to init [1::n], that is a local approximate\nrepresentation of the abstract array ACCOUNT [1::n], i.e., account i[j]is the\nvalue of pj’s account, as known by pi.\n10Reification is the process by which an implicit, hidden or internal information is explicitly\nexposed to a programmer.\nWhile other local variables containing bookkeeping information can be added\naccording to the application’s needs, it is important to insist on the fact that the\nproposed algorithm needs only the three previous local variables (i.e., (2n+1)local\nregisters) to solve the synchronization issues that arise in fault-tolerant money\ntransfer.\n4.3 Generic money transfer algorithm: behavior of a process pi\nAlgorithm 1 describes the behavior of a process pi. 
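The three local variables listed above (sn_i, del_i[1..n] and account_i[1..n], i.e., the 2n+1 registers) can be sketched as a small Python state object. The class and field names are illustrative; indices run from 1 to n for readability:

```python
class ProcessState:
    """Local state of process p_i: exactly the 2n+1 registers the
    generic algorithm needs (sketch)."""

    def __init__(self, i, n, init):
        self.i = i
        self.sn = 0                                           # sn_i
        self.delivered = {j: 0 for j in range(1, n + 1)}      # del_i[1..n]
        self.account = {j: init[j] for j in range(1, n + 1)}  # account_i[1..n]
```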
When it invokes balance(j), pi returns the current value of account_i[j] (line 1).
init: account_i[1..n] ← init[1..n]; sn_i ← 0; del_i[1..n] ← [0, ..., 0].
operation balance(j) is
(1) return(account_i[j]).
operation transfer(j, v) is
(2) if (v ≤ account_i[i])
(3) then sn_i ← sn_i + 1; r_broadcast(sn_i, TRANSFER⟨j, v⟩);
(4) wait(del_i[i] = sn_i); return(commit)
(5) else return(abort)
(6) end if.
when (sn, TRANSFER⟨k, v⟩) is r_delivered from pj do
(7) wait((sn = del_i[j] + 1) ∧ (account_i[j] ≥ v));
(8) account_i[j] ← account_i[j] − v; account_i[k] ← account_i[k] + v;
(9) del_i[j] ← sn.
Algorithm 1: Generic broadcast-based money transfer algorithm (code for pi)
When it invokes transfer(j, v), pi first checks if it has enough money in its account (line 2) and returns abort if it does not (line 5). If process pi has enough money, it computes the next sequence number sn_i and r-broadcasts the pair (sn_i, TRANSFER⟨j, v⟩) (line 3). Then pi waits until it has locally processed this transfer (lines 7-9), and finally returns commit. Let us notice that the predicate at line 7 is always satisfied when pi r-delivers a transfer message it has r-broadcast.
When pi r-delivers a pair (sn, TRANSFER⟨k, v⟩) from a process pj, it does not process it immediately. Instead, pi waits until (i) this is the next message it has to process from pj (to implement FIFO ordering) and (ii) its local view of the money transfers to and from pj (namely the current value of account_i[j]) allows this money transfer to occur (line 7). When this happens, pi locally registers the transfer by moving the quantity v from account_i[j] to account_i[k] (line 8) and increases del_i[j] (line 9).
5 Crash Failure Model: Instantiation and Proof
This section presents first the crash-tolerant reliable broadcast abstraction whose operations instantiate the r_broadcast() and r_deliver() operations used in the generic algorithm. 
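Algorithm 1 can be sketched as a single-threaded Python simulation. This is not the paper's code: the reliable broadcast is replaced by an in-memory network that synchronously hands every message to every process, and the wait of line 7 becomes a retry loop over pending messages; all class and method names are ours:

```python
class Process:
    def __init__(self, i, n, init):
        self.i = i
        self.sn = 0                     # sn_i
        self.deliv = [0] * (n + 1)      # del_i[1..n] (index 0 unused)
        self.account = dict(init)       # account_i[1..n]
        self.pending = []               # r-delivered, not yet processed

    def balance(self, j):
        return self.account[j]          # line 1

    def transfer(self, j, v, network):
        if v > self.account[self.i]:    # line 2
            return 'abort'              # line 5
        self.sn += 1                    # line 3
        network.r_broadcast(self.i, self.sn, (j, v))
        # Delivery is synchronous in this sketch, so the wait of line 4
        # (del_i[i] = sn_i) already holds here.
        return 'commit'                 # line 4

    def r_deliver(self, sender, sn, msg):
        self.pending.append((sender, sn, msg))
        self._process_pending()

    def _process_pending(self):
        # Retry loop standing in for the wait of line 7.
        progress = True
        while progress:
            progress = False
            for m in list(self.pending):
                sender, sn, (k, v) = m
                if sn == self.deliv[sender] + 1 and self.account[sender] >= v:
                    self.account[sender] -= v       # line 8
                    self.account[k] += v
                    self.deliv[sender] = sn         # line 9
                    self.pending.remove(m)
                    progress = True

class Network:
    """Trivially reliable broadcast: every message reaches every process."""
    def __init__(self, procs):
        self.procs = procs
    def r_broadcast(self, sender, sn, msg):
        for p in self.procs:
            p.r_deliver(sender, sn, msg)
```

With two processes and init = {1: 10, 2: 0}, a transfer of 4 from p1 to p2 commits and leaves both local views agreeing (account[1] = 6, account[2] = 4), while a transfer of 100 aborts.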
Then, using the MT-compliance notion, it proves that Algo-\nrithm 1 combined with a crash-tolerant reliable broadcast implements a money\ntransfer object in CAMP n;t[;]. It also shows that, in this model, money transfer is\nweaker than the construction of an atomic read/write register. Finally, it presents\na simple weakening of the FIFO requirement that works in the CAMP n;t[;]model.\n5.1 Multi-shot reliable broadcast abstraction in CAMP n;t[;]\nThis communication abstraction, named CR-Broadcast, is defined by the two op-\nerations cr_broadcast ()andcr_deliver (). Hence, we use the terminology “to cr-\nbroadcast a message”, and “to cr-deliver a message”.\n\u000fCRB-Validity. If a process picr-delivers a message with sequence number\nsnfrom a process pj, then pjcr-broadcast it with sequence number sn.\n\u000fCRB-Integrity. For each sequence number snand sender pja process pi\ncr-delivers at most one message with sequence number snfrom pj.\n\u000fCRB-Termination-1. If a correct process cr-broadcasts a message, it cr-\ndelivers it.\n\u000fCRB-Termination-2. If a process cr-delivers a message from a (correct or\nfaulty) process pj, then all correct processes cr-deliver it.\nCRB-Termination-1 and CRB-Termination-2 capture the “strong” reliability prop-\nerty of CR-Broadcast, namely: all the correct processes cr-deliver the same set S\nof messages, and this set includes at least the messages they cr-broadcast. More-\nover, a faulty process cr-delivers a subset of S. Algorithms implementing the\nCR-Broadcast abstraction in CAMP n;t[;]are described in [16, 30].\n5.2 Proof of the algorithm in CAMP n;t[;]\nLemma 1. 
Any invocation of balance ()ortransfer ()issued by a correct process\nterminates.\nProof The fact that any invocation of balance ()terminates follows immediately\nfrom the code of the operation.\nWhen a process piinvokes transfer (j;v), it r-broadcasts a message and, due\nto the CRB-Termination properties, pireceives its own transfer message and the\npredicate (line 7) is necessarily satisfied. This is because (i) only pican transfer\nits own money, (ii) the wait statement of line 4 ensures the current invocation\noftransfer (j;v)does not return until the corresponding TRANSFER message is\nprocessed at lines 8-9, and (iii) the fact that account i[i]cannot decrease between\nthe execution of line 3 and the one of line 7. It follows that piterminates its\ninvocation of transfer (j;v). \u0003Lemma 1\nThe safety proof is more involved. It consists in showing that any execution satis-\nfies MT-compliance as defined in Section 3.\nNotation and definition\n\u000fLettrfsn\nj(k;v)denote the operation trf(k;v)issued by pjwith sequence num-\nbersn.\n\u000fWe say a process piprocesses the transfer trfsn\nj(k;v)if, after it cr-delivered\nthe associated message TRANSFERhk;viwith sequence number sn,pjex-\nits the wait statement at line 7 and executes the associated statements at\nlines 8-9. The moment at which these lines are executed is referred to as the\nmoment when the transfer is processed bypi. (These notions are related to\nthe progress of processes.)\n\u000fIf the message TRANSFER cr-broadcast by a process is cr-delivered by a\ncorrect process, we say that the transfer is successful . (Let us notice that a\nmessage cr-broadcast by a correct process is always successful.)\nLemma 2. If a process piprocesses trfsn\n`(k;v), then any correct process pro-\ncesses it.\nProof Letm1;m2;:::be the sequence of transfers processed by piand let pjbe\na correct process. We show by induction on zthat, for all z,pjprocesses all the\nmessages m1;m2;:::;mz.\nBase case z=0. 
As the sequence of transfers is empty, the proposition is\ntrivially satisfied.\nInduction. Taking z\u00150, suppose pjprocessed all the transfers m1;m2;:::;mz.\nWe have to show that pjprocesses mz+1. Note that m1;m2;:::;mzdo not typically\noriginate from the same sender, and are therefore normally processed by pjin a\ndifferent order than pi, possibly mixed with other messages. This also applies to\nmz+1. Ifmz+1was processed by pjbefore mz, we are done. Otherwise there is a\ntime\u001cat which pjprocessed all the transfers m1;m2;:::;mz(case assumption),\ncr-delivered mz+1(CBR-Termination-2 property), but has not yet processed mz+1.\nLetmz+1=trfsn\n`(k;v). At time\u001c, we have the following.\n\u000fOn one side, del j[`]\u0014sn\u00001since messages are processed in FIFO order\nandmz+1has not yet been processed. On the other side, del j[`]\u0015sn\u00001\nbecause either sn=1ortrfsn\u00001\n`(\u0000;\u0000)2m1;:::;mz, where trfsn\u00001\n`(\u0000;\u0000)is the\ntransfer issued by p`just before mz+1=trfsn\n`(k;v)(otherwise piwould not\nhave processed mz+1just after m1;:::;mz). Thus del j[`]=sn\u00001.\n\u000fLet us now shown that, at time \u001c,account j[`]\u0015v. To this end let plusz+1\ni(`)\ndenote the money transferred to p`as seen by pijust before piprocesses\nmz+1, and minusz+1\ni(`)denote the money transferred from p`as seen by pi\njust before piprocesses mz+1. Similarly, let plusz+1\nj(`)denote the money\ntransferred to p`as seen by pjat time\u001candminusz+1\nj(`)denote the money\ntransferred from p`as seen by pjat time\u001c. Let us consider the following\nsums:\n–On the side of the money transferred to p`as seen by pj. 
Due to induc-\ntion, all the transfers to p`included in m1;m2;:::; mz(and possibly\nmore transfers to p`) have been processed by pj, thus plusz+1\nj(`)\u0015\n\u0006trfk0(`;w)2fm1;m2;:::;mzgwand, as piprocessed the messages in the order\nm1;:::;mz;mz+1(assumption), we have plusz+1\ni(`)= \u0006 trfk0(`;w)2fm1;m2;:::;mzgw.\nHence, plusz+1\nj(`)\u0015plusz+1\ni(`).\n–On the side of the money transferred from p`as seen by pj. Let\nus observe that pjhas processed all the transfers from p`with a se-\nquence number smaller than snand no transfer from p`with a se-\nquence number greater than or equal to sn, thus we have minusz+1\nj(`)=\n\u0006trf`(k0;w)2fm1;m2;:::;mzgw=minusz+1\ni(`).\nLetaccountz+1\ni[`]be the value of account i[`]just before piprocesses mz+1,\nandaccountz+1\nj[`]be the value of account j[`]at time\u001c. Asaccountz+1\nj[`]=\ninit [`]+plusz+1\nj(`)\u0000minusz+1\nj(`)andaccountz+1\ni[`]=init [`]+plusz+1\ni(`)\u0000\nminusz+1\ni(`), it follows that account j[`]is greater than or equal to the value\nofaccount i[`]just before piprocesses mz+1, which was itself greater than\nor equal to v(otherwise piwould not have processed mz+1at that time). It\nfollows that account j[`]\u0015v.\nThe two predicates of line 7 are therefore satisfied, and will remain so until mz+1\nis processed (due to the FIFO order on transfers issued by p`), thus ensuring that\nprocess pjprocesses the transfer mz+1. \u0003Lemma 2\nLemma 3. If a process piissues a successful money transfer trfsn\ni(k;v)(i.e., it cr-\nbroadcasts it in line 3), any correct process eventually cr-delivers and processes it.\nProof When process picr-broadcast money transfer trfsn\ni(k;v), the local predicate\n(sn=deli[i]+1)^(account i[i]\u0015v)was true at pi. 
When picr-delivers its own\ntransfer message, the predicate is still true at line 7 and piprocesses its transfer\n(ifpicrashes after having cr-broadcast the transfer and before processing it, we\nextend its execution—without loss of correctness—by assuming it crashed just\nafter processing the transfer). It follows from Lemma 2 that any correct process\nprocesses trfsn\ni(k;v). \u0003Lemma 3\nTheorem 1. Algorithm 1instantiated with CR-Broadcast implements a money\ntransfer object in the CAMP n;t[;]system model, and ensures that all operations\nby correct processes terminate.\nProof Lemma 1 proved that the invocations of the operations balance ()and\ntransfer ()by the correct processes terminate. Let us now consider MT-compliance.\nConsidering any execution of the algorithm, captured as history H=(L1;:::;Ln),\nlet us first consider a correct process pi. Let Sibe the sequence of the following\nevents happening at pi(these events are “instantaneous” in the sense piis not\ninterrupted when it produces each of them):\n\u000fthe event blc(j)=voccurs when piinvokes balance (j)and obtains v(line 1),\n\u000fand the event trfsn\nj(k;v)occurs when piprocesses the corresponding transfer\n(lines 8-9 executed without interruption).\nWe show that Siis an MT-compliant serialization of Ai;T(H). When considering\nthe construction of Si, we have the following:\n\u000fFor all trfsn\nj(k;v)2Ljwe have that pjcr-broadcast this transfer and that\n(sn;TRANSFERhk;vi)was received by pjand was therefore successful : it\nfollows from Lemma 3 that piprocesses this money transfer, and conse-\nquently we have trfsn\nj(k;v)2Si.\n\u000fFor all op1=trfsn\nj(k;v)andop2=trfsn0\nj(k0;v0)inSi(two transfers issued by\npj) such that op1!jop2, we have sn<sn0. Consequently piprocesses\nop1before op2, and we have op1!Siop2.\n\u000fFor all pairs op1and op2belonging to Li, their serialization order is the\nsame in LiandSi.\nIt follows that Siis a serialization of Ai;T(H). 
Let us now show that S_i is MT-compliant.
• Case where the event in S_i is trf_j^{sn}(k,v). In this case we have v ≤ acc(j, {op ∈ S_i | op →_{S_i} trf_j(k,v)}) because this condition is directly encoded at p_i in the waiting predicate that precedes the processing of op.
• Case where the event in S_i is blc(j)=v. In this case we have v = acc(j, {op ∈ S_i | op →_{S_i} blc(j)=v}), because this is exactly the way the returned value v is computed in the algorithm.
This terminates the proof for the correct processes.

For a process p_i that crashes, the sequence of money transfers from a process p_j that is processed by p_i is a prefix of the sequence of money transfers issued by p_j (this follows from the FIFO processing order, line 7). Hence, for each process p_i that crashes there is a history H' = (L'_1,...,L'_n), where L'_j is a prefix of L_j for each j ≠ i and L'_i = L_i, such that, following the same reasoning, the construction S_i given above is an MT-compliant serialization of A_{i,T}(H'), which concludes the proof of the theorem. □ (Theorem 1)

5.3 Money transfer vs read/write registers in CAMP_{n,t}[∅]

It is shown in [5] that it is impossible to implement an atomic read/write register in the distributed system model CAMP_{n,t}[∅], i.e., when, in addition to asynchrony, any number of processes may crash. On the positive side, several algorithms implementing such a register in CAMP_{n,t}[t<n/2] have been proposed, each with its own features (see for example [4, 5, 24] to cite a few). An atomic read/write register can be built from safe or regular registers^11 [22, 29, 33]. Hence, as atomic registers, safe and regular registers cannot be built in CAMP_{n,t}[∅] (although they can in CAMP_{n,t}[t<n/2]).
As CAMP_{n,t}[t<n/2] is a more constrained model than CAMP_{n,t}[∅], it follows that, from a CAMP_{n,t} computability point of view, the construction of a safe/regular/atomic read/write register is a stronger problem than money transfer.

5.4 Replacing FIFO by a weaker ordering in CAMP_{n,t}[∅]

An interesting question is the following one: is FIFO ordering necessary to implement money transfer in the CAMP_{n,t}[∅] model? While we conjecture it is, it appears that a small change in the specification of money transfer allows us to use a weakened FIFO order, as shown below.

Weakened money transfer specification The change in the specification presented in Section 3 concerns the definition of the serialization S_i associated with each process p_i. In this modified version the serialization S_i associated with each process p_i is no longer required to respect the process order on the operations issued by p_j, j ≠ i. This means that two different processes p_i and p_k may observe the transfer() operations issued by a process p_j in different orders (which captures the fact that some transfer operations by a process p_j are commutative with respect to its current account).

^11 Safe and regular registers were introduced in [22]. They have weaker specifications than atomic registers.

Modification of the algorithm Let k be a constant integer ≥ 1. Let sn_i(j) be the highest sequence number such that all the transfer messages from p_j whose sequence numbers belong to {1,...,sn_i(j)} have been cr-delivered and processed by a certain process p_i (i.e., lines 8-9 have been executed for these messages). Initially we have sn_i(j) = 0.

Let sn be the sequence number of a message cr-delivered by p_i from p_j. At line 7 the predicate sn = del_i[j]+1 can be replaced by the predicate sn ∈ {sn_i(j)+1,...,sn_i(j)+k}. Let us notice that this predicate boils down to sn = del_i[j]+1 when k = 1.
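For illustration, the window predicate just described can be sketched in a few lines of Python (a minimal sketch; the function names `contiguous_prefix` and `may_process` are ours, not from the article):

```python
def contiguous_prefix(processed):
    """Largest s such that all sequence numbers 1..s are in `processed`.
    This is sn_i(j) in the text; initially 0."""
    s = 0
    while s + 1 in processed:
        s += 1
    return s

def may_process(sn, processed, k):
    """Window predicate replacing 'sn = del_i[j] + 1' at line 7:
    a message from p_j with sequence number sn may be processed iff
    sn is in {sn_i(j)+1, ..., sn_i(j)+k}."""
    s = contiguous_prefix(processed)
    return s + 1 <= sn <= s + k
```

With k = 1 this reduces to strict FIFO; with k = 2, for instance, the second message from p_j may be processed before the first one.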
More generally the set of sequence numbers {sn_i(j)+1,...,sn_i(j)+k} defines a sliding window of sequence numbers which allows the corresponding messages to be processed.

The important point here is the fact that messages can be processed in an order that does not respect their sending order as long as all the messages are processed, which is not guaranteed when k = +∞. Assuming p_j issues an infinite number of transfers, if k = +∞ it is possible that, while all these messages are cr-delivered by p_i, some of them are never processed at lines 8-9 (their processing being always delayed by other messages that arrive after them). The finiteness of the value k prevents this unfair message-processing order from occurring.

The proof of Section 5.2 must be appropriately adapted to show that this modification implements the weakened money-transfer specification.

6 Byzantine Failure Model: Instantiation and Proof

This section first presents the reliable broadcast abstraction whose operations instantiate the r_broadcast() and r_deliver() operations used in the generic algorithm. Then, it proves that the resulting algorithm correctly implements a money transfer object in BAMP_{n,t}[t<n/3].

6.1 Reliable broadcast abstraction in BAMP_{n,t}[t<n/3]

The communication abstraction, denoted BR-Broadcast, was introduced in [7]. It is defined by two operations denoted br_broadcast() and br_deliver() (hence we use the terminology “br-broadcast a message” and “br-deliver a message”). The difference between this communication abstraction and CR-Broadcast lies in the nature of failures. Namely, as a Byzantine process can behave arbitrarily, CRB-Validity, CRB-Integrity, and CRB-Termination-2 cannot be ensured. As an example, it is not possible to ensure that if a Byzantine process br-delivers a message, all correct processes br-deliver it. BR-Broadcast is consequently defined by the following properties.
Termination-1 is the same in both communication abstractions, while Integrity, Validity and Termination-2 consider only correct processes (the difference lies in the added constraint written in italics).
• BRB-Validity. If a correct process p_i br-delivers a message from a correct process p_j with sequence number sn, then p_j br-broadcast it with sequence number sn.
• BRB-Integrity. For each sequence number sn and sender p_j, a correct process p_i br-delivers at most one message with sequence number sn from sender p_j.
• BRB-Termination-1. If a correct process br-broadcasts a message, it br-delivers it.
• BRB-Termination-2. If a correct process br-delivers a message from a (correct or faulty) process p_j, then all correct processes br-deliver it.

It is shown in [8, 30] that t < n/3 is a necessary requirement to implement BR-Broadcast. Several algorithms implementing this abstraction have been proposed. Among them, the one presented in [7] is the most famous. It works in the BAMP_{n,t}[t<n/3] model, and requires three consecutive communication steps. The one presented in [19] works in the more constrained BAMP_{n,t}[t<n/5] model, but needs only two consecutive communication steps. These algorithms show a trade-off between optimal t-resilience and time-efficiency.

6.2 Proof of the algorithm in BAMP_{n,t}[t<n/3]

The proof has the same structure, and is nearly the same, as the one for the process-crash model presented in Section 5.2.

Notation and high-level intuition trf_j^{sn}(k,v) now denotes a money transfer (or the associated processing event by a process) that correct processes br-deliver from p_j with sequence number sn. If p_j is a correct process, this definition is the same as the one used in the model CAMP_{n,t}[∅].
If p_j is Byzantine, TRANSFER messages from p_j do not necessarily correspond to actual transfer() invocations by p_j, but the BRB-Termination-2 property guarantees that all correct processes br-deliver the same set of TRANSFER messages (with the same sequence numbers), and therefore agree on how p_j's behavior should be interpreted. The reliable broadcast thus ensures a form of weak agreement among correct processes in spite of Byzantine failures. This weak agreement is what allows us to move almost seamlessly from a crash-failure model to a Byzantine model, with no change to the algorithm, and only a limited adaptation of its proof.

More concretely, Lemma 2 (for crash failures) becomes the next lemma, whose proof is the same as for Lemma 2 in which the reference to the CRB-Termination-2 property is replaced by a reference to its BRB counterpart.

Lemma 4. If a correct process p_i processes trf_j^{sn}(k,v), then any correct process processes it.

Similarly, Lemma 3 turns into its Byzantine counterpart, Lemma 5.

Lemma 5. If a correct process p_i br-broadcasts a money transfer trf_i^{sn}(k,v) (line 3), any correct process eventually br-delivers and processes it.

Proof When a correct process p_i br-broadcasts a money transfer trf_i^{sn}(k,v), we have (sn = del_i[i]+1) ∧ (account_i[i] ≥ v), thus when it br-delivers it the predicate of line 7 is satisfied. By Lemma 4, all the correct processes process this money transfer. □ (Lemma 5)

Theorem 2. Algorithm 1 instantiated with BR-Broadcast implements a money transfer object in the system model BAMP_{n,t}[t<n/3], and ensures that all operations by correct processes terminate.

The model constraint t < n/3 is due only to the fact that Algorithm 1 uses BR-broadcast (for which t < n/3 is both necessary and sufficient).
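To give some intuition for this t < n/3 bound, the quorum sizes used in a Bracha-style reliable broadcast [7] can be sketched as follows (a sketch following the usual presentation of the protocol; the function name and the explicit liveness check are ours, not from the article):

```python
def bracha_thresholds(n, t):
    """Quorum sizes in a Bracha-style reliable broadcast:
    - a process sends READY after receiving ECHOs from more than
      (n+t)/2 processes, or after t+1 READYs (at least one of which
      comes from a correct process);
    - it br-delivers after 2t+1 READYs (at least t+1 of which come
      from correct processes)."""
    echo_quorum = (n + t) // 2 + 1
    ready_amplify = t + 1
    ready_deliver = 2 * t + 1
    # Liveness: the n - t correct processes must be able to fill both
    # quorums on their own, which holds exactly when n >= 3t + 1.
    assert n - t >= echo_quorum and n - t >= ready_deliver
    return echo_quorum, ready_amplify, ready_deliver
```

For example, with n = 4 and t = 1 the thresholds are 3, 2 and 3, while with n = 3 and t = 1 (violating t < n/3) the liveness check fails.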
As the invocations of balance() by Byzantine processes may return arbitrary values and do not impact the correct processes, they are not required to appear in their local histories.

Proof The proof that the operations issued by the correct processes terminate is the same as in Lemma 1, where the CRB-Termination properties are replaced by their BRB-Termination counterparts.

To prove MT-compliance, let us first construct mock local histories for Byzantine processes: the mock local history L_j associated with a Byzantine process p_j is the sequence of money transfers from p_j that the correct processes br-deliver from p_j and that they process. (By Lemma 4 all correct processes process the same set of money transfers from p_j.)

Let p_i be a correct process and S_i be the sequence of operations occurring at p_i defined in the same way as in the crash failure model. In this construction, the following properties are respected:
• For all trf_j^{sn}(k,v) ∈ L_j:
  – if p_j is correct, it br-broadcast this money transfer and, due to Lemma 5, p_i processes it, hence trf_j^{sn}(k,v) ∈ S_i;
  – if p_j is Byzantine, due to the definition of L_j (sequence of money transfers that correct processes br-deliver from p_j and process), we have trf_j^{sn}(k,v) ∈ S_i.
• For all op1 = trf_j^{sn}(k,v) and op2 = trf_j^{sn'}(k',v') (two transfers in L_j ⊆ S_i) such that op1 →_j op2, we have sn < sn'; consequently p_i processes op1 before op2, and we have op1 →_{S_i} op2.
• For all pairs op1 and op2 belonging to L_i, their serialization order is the same in L_i as in S_i (same as for the crash case).

It follows that S_i is a serialization of A_{i,T}(H̃) where H̃ = (L_1,...,L_n), L_i being the sequence of its operations if p_i is correct, and a mock sequence of money transfers if it is Byzantine. The same arguments that were used in the crash failure model can be used here to prove that S_i is MT-compliant.
Since all correct processes observe the same mock sequence of operations L_j for any given Byzantine process p_j, it follows that the algorithm implements an MT-compliant money transfer object in BAMP_{n,t}[t<n/3]. □ (Theorem 2)

6.3 Extending to incomplete Byzantine networks

An algorithm is described in [31] which simulates a fully connected (point-to-point) network on top of an asynchronous Byzantine message-passing system in which, while the underlying communication network is incomplete (not all the pairs of processes are connected by a channel), it is (2t+1)-connected (i.e., any pair of processes is connected by (2t+1) disjoint paths^12). Moreover, it is shown that this connectivity requirement is both necessary and sufficient.^13

Hence, denoting BAMP_{n,t}[t<n/3, (2t+1)-connected] such a system model, this algorithm builds BAMP_{n,t}[t<n/3] on top of BAMP_{n,t}[t<n/3, (2t+1)-connected] (both models have the same computability power). It follows that the previous money-transfer algorithm works in incomplete (2t+1)-connected asynchronous Byzantine systems where t < n/3.

7 Conclusion

The article has revisited the synchronization side of the money-transfer problem in failure-prone asynchronous message-passing systems. It has presented a generic algorithm that solves money transfer in asynchronous message-passing systems where processes may experience failures. This algorithm uses an underlying reliable broadcast communication abstraction, which differs according to the type of failures (process crashes or Byzantine behaviors) that processes can experience.

^12 “Disjoint” means that, given any pair of processes p and q, any two paths connecting p and q share no process other than p and q.
Actually, the (2t+1)-connectivity is required only for any pair of correct processes (which are not known in advance).
^13 This algorithm is a simple extension to asynchronous systems of a result first established in [11] in the context of synchronous Byzantine systems.

In addition to its genericity (and modularity), the proposed algorithm is surprisingly simple^14 and particularly efficient (in addition to money-transfer data, each message generated by the algorithm only carries one sequence number). As a side effect, this algorithm has shown that, in the crash failure model, money transfer is a weaker problem than the construction of a read/write register. As far as the Byzantine failure model is concerned, we conjecture that t < n/3 is a necessary requirement for money transfer (as it is for the construction of a read/write register [18]).

Finally, it is worth noticing that this article adds one more member to the family of algorithms that strive to “unify” the crash failure model and the Byzantine failure model as studied in [6, 12, 20, 26].

^14 Let us recall that, in sciences, simplicity is a first class property [3]. As stated by A. Perlis — recipient of the first Turing Award — “Simplicity does not precede complexity, but follows it”.

Acknowledgments

This work was partially supported by the French ANR projects 16-CE40-0023-03 DESCARTES, devoted to layered and modular structures in distributed computing, and ANR-16-CE25-0005-03 O'Browser, devoted to decentralized applications on browsers.

References

[1] Afek Y., Attiya H., Dolev D., Gafni E., Merritt M., and Shavit N., Atomic snapshots of shared memory. Journal of the ACM, 40(4):873-890 (1993)
[2] Ahamad M., Neiger G., Burns J.E., Hutto P.W., and Kohli P., Causal memory: definitions, implementation and programming. Distributed Computing, 9:37-49 (1995)
[3] Aigner M. and Ziegler G., Proofs from THE BOOK (4th edition). Springer, 274 pages, ISBN 978-3-642-00856-6 (2010)
[4] Attiya H., Efficient and robust sharing of memory in message-passing systems. Journal of Algorithms, 34(1):109-127 (2000)
[5] Attiya H., Bar-Noy A., and Dolev D., Sharing memory robustly in message-passing systems. Journal of the ACM, 42(1):121-132 (1995)
[6] Bazzi R. and Neiger G., Optimally simulating crash failures in a Byzantine environment. Proc. 6th Workshop on Distributed Algorithms (WDAG'91), Springer LNCS 579, pp. 108-128 (1991)
[7] Bracha G., Asynchronous Byzantine agreement protocols. Information & Computation, 75(2):130-143 (1987)
[8] Bracha G. and Toueg S., Asynchronous consensus and broadcast protocols. Journal of the ACM, 32(4):824-840 (1985)
[9] Cachin Ch., Guerraoui R., and Rodrigues L., Reliable and secure distributed programming. Springer, 367 pages, ISBN 978-3-642-15259-7 (2011)
[10] Collins D., Guerraoui R., Komatovic J., Monti M., Xygkis A., Pavlovic M., Kuznetsov P., Pignolet Y.-A., Seredinschi D.A., and Tonkikh A., Online payments by merely broadcasting messages. Proc. 50th IEEE/IFIP Int'l Conference on Dependable Systems and Networks (DSN'20), 10 pages (2020)
[11] Dolev D., The Byzantine generals strike again. Journal of Algorithms, 3:14-30 (1982)
[12] Dolev D. and Gafni E., Some garbage in - some garbage out: asynchronous t-Byzantine as asynchronous benign t-resilient system with fixed t-Trojan horse inputs. Tech Report, arXiv:1607.01210, 14 pages (2016)
[13] Fernández Anta A., Konwar M.K., Georgiou Ch., and Nicolaou N.C., Formalizing and implementing distributed ledger objects. SIGACT News, 49(2):58-76 (2018)
[14] Guerraoui R., Kuznetsov P., Monti M., Pavlovic M., and Seredinschi D.A., The consensus number of a cryptocurrency. Proc. 38th ACM Symposium on Principles of Distributed Computing (PODC'19), ACM Press, pp. 307-316 (2019)
[15] Gupta S., A non-consensus based decentralized financial transaction processing model with support for efficient auditing. Master Thesis, Arizona State University, 83 pages (2016)
[16] Hadzilacos V. and Toueg S., A modular approach to fault-tolerant broadcasts and related problems. Tech Report 94-1425, 83 pages, Cornell University (1994)
[17] Herlihy M.P. and Wing J.M., Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems, 12(3):463-492 (1990)
[18] Imbs D., Rajsbaum S., Raynal M., and Stainer J., Read/write shared memory abstraction on top of an asynchronous Byzantine message-passing system. Journal of Parallel and Distributed Computing, 93-94:1-9 (2016)
[19] Imbs D. and Raynal M., Trading t-resilience for efficiency in asynchronous Byzantine reliable broadcast. Parallel Processing Letters, Vol. 26(4), 8 pages (2016)
[20] Imbs D., Raynal M., and Stainer J., Are Byzantine failures really different from crash failures? Proc. 30th Symposium on Distributed Computing (DISC'16), Springer LNCS 9888, pp. 215-229 (2016)
[21] Knuth D.E., Ancient Babylonian algorithms. Communications of the ACM, 15(7):671-677 (1972)
[22] Lamport L., On interprocess communication, Part I: basic formalism; Part II: algorithms. Distributed Computing, 1(2):77-101 (1986)
[23] Lamport L., Shostak R., and Pease M., The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3):382-401 (1982)
[24] Mostéfaoui A. and Raynal M., Two-bit messages are sufficient to implement atomic read/write registers in crash-prone systems. Proc. 35th ACM Symposium on Principles of Distributed Computing (PODC'16), ACM Press, pp. 381-390 (2016)
[25] Nakamoto S., Bitcoin: a peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf (2008) [last accessed March 31, 2020]
[26] Neiger G. and Toueg S., Automatically increasing the fault-tolerance of distributed algorithms. Journal of Algorithms, 11(3):374-419 (1990)
[27] Neugebauer O., The exact sciences in antiquity. Brown University Press, 240 pages (1957)
[28] Pease M., Shostak R., and Lamport L., Reaching agreement in the presence of faults. Journal of the ACM, 27:228-234 (1980)
[29] Raynal M., Concurrent programming: algorithms, principles and foundations. Springer, 515 pages, ISBN 978-3-642-32026-2 (2013)
[30] Raynal M., Fault-tolerant message-passing distributed systems: an algorithmic approach. Springer, 550 pages, ISBN 978-3-319-94140-0 (2018)
[31] Raynal M., From incomplete to complete networks in asynchronous Byzantine systems. Tech report, 10 pages (2020)
[32] Riesen A., Satoshi Nakamoto and the financial crisis of 2008. https://andrewriesen.me/2017/12/18/2017-12-18-satoshi-nakamoto-and-the-financial-crisis-of-2008/ [last accessed April 22, 2020]
[33] Taubenfeld G., Synchronization algorithms and concurrent programming. Pearson Education/Prentice Hall, 423 pages, ISBN 0-131-97259-6 (2006).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "qJXXA0F5aj3",
"year": null,
"venue": "Bull. EATCS 2016",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/439/419",
"forum_link": "https://openreview.net/forum?id=qJXXA0F5aj3",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Presburger Award for Young Scientists 2017 - Call for Nominations",
"authors": [
"Marta Kwiatkowska"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "The Presburger Award for Young Scientists 2017
Call for Nominations
Deadline: 31 December 2016

Starting in 2010, the European Association for Theoretical Computer Science (EATCS) established the Presburger Award. The Award is conferred annually at the International Colloquium on Automata, Languages and Programming (ICALP) to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers.

The Award is named after Mojżesz Presburger who accomplished his path-breaking work on decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929.

Nominations for the Presburger Award can be submitted by any member or group of members of the theoretical computer science community except the nominee and his/her advisors for the master thesis and the doctoral dissertation. Nominated scientists have to be at most 35 years at the time of the deadline of nomination (i.e., for the Presburger Award of 2017 the date of birth should be in 1981 or later). The Presburger Award Committee of 2017 consists of Stephan Kreutzer (TU Berlin), Marta Kwiatkowska (Oxford, chair) and Jukka Suomela (Aalto).
Nominations, consisting of a two page justification and (links to) the respective papers, as well as additional supporting letters, should be sent by e-mail to:

Marta Kwiatkowska
[email protected]

The subject line of every nomination should start with Presburger Award 2017, and the message must be received before December 31st, 2016.

The award includes an amount of 1000 Euro and an invitation to ICALP 2017 for a lecture.

Previous Winners:
Mikołaj Bojańczyk, 2010; Patricia Bouyer-Decitre, 2011; Venkatesan Guruswami, 2012; Mihai Pătrașcu, 2012; Erik Demaine, 2013; David Woodruff, 2014; Xi Chen, 2015; Mark Braverman, 2016

Official website: http://www.eatcs.org/index.php/presburger",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xUsw0uhpuNI",
"year": null,
"venue": "Bull. EATCS 2021",
"pdf_link": "http://bulletin.eatcs.org/index.php/beatcs/article/download/657/712",
"forum_link": "https://openreview.net/forum?id=xUsw0uhpuNI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The EATCS Award 2021 - Laudatio for Toniann (Toni) Pitassi",
"authors": [
"Marta Kwiatkowska"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "The EATCS Award 2021
Laudatio for Toniann (Toni) Pitassi

The EATCS Award 2021 is awarded to Toniann (Toni) Pitassi, University of Toronto, as the recipient of the 2021 EATCS Award for her fundamental and wide-ranging contributions to computational complexity, which includes proving long-standing open problems, introducing new fundamental models, developing novel techniques and establishing new connections between different areas. Her work is very broad and has relevance in computational learning and optimisation, verification and SAT-solving, circuit complexity and communication complexity, and their applications.

The first notable contribution by Toni Pitassi was to develop lifting theorems: a way to transfer lower bounds from the (much simpler) decision tree model, for any function f, to lower bounds in the much harder communication complexity model, for a simply related (2-party) function f'. This has completely transformed our state of knowledge regarding two fundamental computational models, query algorithms (decision trees) and communication complexity, as well as their relationship and applicability to other areas of theoretical computer science. These powerful and flexible techniques resolved numerous open problems (e.g., the super-quadratic gap between probabilistic and quantum communication complexity), many of which were central challenges for decades.

Toni Pitassi has also had a remarkable impact in proof complexity. She introduced the fundamental algebraic Nullstellensatz and Ideal proof systems, and the geometric Stabbing Planes system. She gave the first nontrivial lower bounds on such long-standing problems as the weak pigeon-hole principle and models like constant-depth Frege proof systems. She has developed new proof techniques for virtually all proof systems, and new SAT algorithms. She found novel connections
She found novel connections\nof proof complexity, computational learning theory, communication complexity,\ncircuit complexity, LP hierarchies, graph theory and more.\nIn the past few years Toni Pitassi has turned her attention to the field of algo-\nrithmic fairness, whose social importance is rapidly growing, in particular provid-\ning novel concepts and solutions based on causal modelling.\nSummarising, Toni Pitassi’s contributions have transformed the field of com-\nputational complexity and neighbouring areas of theoretical computer science,\nand will continue to have a lasting impact. Furthermore, she is an outstanding\nmentor, great teacher and a dedicated TCS community member.\nThe EATCS Award Committee 2021\nJohan Håstad\nMarta Kwiatkowska (chair)\nÉva Tardos\nThe EATCS Award is given to acknowledge extensive and widely recognized\ncontributions to theoretical computer science over a life-long scientific career.\nThe Award will be assigned during a ceremony that will take place during\nICALP 2021, where the recipient will give an invited presentation during the\nAward Ceremony.\nThe following is the list of the previous recipients of the EATCS Awards:\n2020 Mihalis Yannakakis 2009 Gérard Huet\n2019 Thomas Henzinger 2008 Leslie G. Valiant\n2018 Noam Nisan 2007 Dana S. Scott\n2017 Éva Tardos 2006 Mike Paterson\n2016 Dexter Kozen 2005 Robin Milner\n2015 Christos Papadimitriou 2004 Arto Salomaa\n2014 Gordon Plotkin 2003 Grzegorz Rozenberg\n2013 Martin Dyer 2002 Maurice Nivat\n2012 Moshe Y . Vardi 2001 Corrado Böhm\n2011 Boris (Boaz) Trakhtenbrot 2000 Richard Karp\n2010 Kurt Mehlhorn",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "mbgA0UkyMSN",
"year": null,
"venue": "Bull. EATCS 2019",
"pdf_link": "http://bulletin.eatcs.org/index.php/beatcs/article/download/597/606",
"forum_link": "https://openreview.net/forum?id=mbgA0UkyMSN",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The EATCS Award 2020 - Call for Nominations",
"authors": [
"Artur Czumaj",
"Marta Kwiatkowska",
"Éva Tardos"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "The EATCS Award 2020
Call for Nominations
Deadline: December 31, 2019

The European Association for Theoretical Computer Science (EATCS) annually honours a respected scientist from our community with the prestigious EATCS Distinguished Achievement Award. The award is given to acknowledge extensive and widely recognized contributions to theoretical computer science over a life-long scientific career. For the EATCS Award 2020, candidates may be nominated to the Award Committee consisting of
• Artur Czumaj (Chair)
• Marta Kwiatkowska and
• Éva Tardos

Nominations will be kept strictly confidential. They should include supporting justification and be sent by e-mail to the chair of the EATCS Award Committee:

Artur Czumaj
[email protected]

Previous recipients of the EATCS Award are:
R.M. Karp (2000), C. Böhm (2001), M. Nivat (2002), G. Rozenberg (2003), A. Salomaa (2004), R. Milner (2005), M. Paterson (2006), D.S. Scott (2007), L.G. Valiant (2008), G. Huet (2009), K. Mehlhorn (2010), B. Trakhtenbrot (2011), M.Y. Vardi (2012), M.E. Dyer (2013), G.D. Plotkin (2014), C. Papadimitriou (2015), D. Kozen (2016), É. Tardos (2017), N. Nisan (2018), T. Henzinger (2019)

The next award will be presented during ICALP 2020 in Beijing, China.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Ab76rwJqv16",
"year": null,
"venue": "Bull. EATCS 2020",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/637/652",
"forum_link": "https://openreview.net/forum?id=Ab76rwJqv16",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The EATCS Award 2021 - Call for Nominations",
"authors": [
"Marta Kwiatkowska",
"Éva Tardos",
"Johan Håstad"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "The EATCS Award 2021
Call for Nominations
Deadline: December 31, 2020

The European Association for Theoretical Computer Science (EATCS) annually honours a respected scientist from our community with the prestigious EATCS Distinguished Achievement Award. The award is given to acknowledge extensive and widely recognized contributions to theoretical computer science over a life-long scientific career. For the EATCS Award 2021, candidates may be nominated to the Award Committee consisting of
• Marta Kwiatkowska (Chair)
• Éva Tardos and
• Johan Håstad

Nominations will be kept strictly confidential. They should include supporting justification and be sent by e-mail to the chair of the EATCS Award Committee:

Marta Kwiatkowska
[email protected]

Previous recipients of the EATCS Award are:
R.M. Karp (2000), C. Böhm (2001), M. Nivat (2002), G. Rozenberg (2003), A. Salomaa (2004), R. Milner (2005), M. Paterson (2006), D.S. Scott (2007), L.G. Valiant (2008), G. Huet (2009), K. Mehlhorn (2010), B. Trakhtenbrot (2011), M.Y. Vardi (2012), M.E. Dyer (2013), G.D. Plotkin (2014), C. Papadimitriou (2015), D. Kozen (2016), É. Tardos (2017), N. Nisan (2018), T. Henzinger (2019), Mihalis Yannakakis (2020)

The next award will be presented during ICALP 2021 in Glasgow, Scotland.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "9-sCbQnv7Z",
"year": null,
"venue": "Bull. EATCS 2012",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/47/43",
"forum_link": "https://openreview.net/forum?id=9-sCbQnv7Z",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Presburger Award 2013: Call for Nominations",
"authors": [
"Monika Henzinger"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "The Presburger Award for Young Scientists 2013
Call for Nominations
Deadline: December 31st, 2012

Starting in 2010, the European Association of Theoretical Computer Science established the Presburger Award, conferred annually at the International Colloquium on Automata, Languages and Programming (ICALP) to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers.

The Award is named after Mojżesz Presburger who accomplished his path-breaking work on decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929.

Nominations for the Presburger Award can be submitted by any member or group of members of the theoretical computer science community except the nominee and his/her advisors for the master thesis and the doctoral dissertation. Nominated scientists have to be at most 35 years at the time of the deadline of nomination (i.e., for the Presburger Award of 2013 the date of birth should be in 1977 or later).

The Presburger Award Committee of 2013 consists of Monika Henzinger (Vienna, chair), Antonin Kucera (Brno) and Peter Widmayer (Zurich).

Nominations, consisting of a two page justification and (links to) the respective papers, as well as additional supporting letters (if any), should be sent to:

Monika Henzinger
Universität Wien
Research Group Theory and Applications of Algorithms
Universitätsstraße 10/9, 1090 Wien
or: [email protected]

by 31st December 2012.

Previous recipients of the Presburger Award are
• Venkatesan Guruswami and Mihai Pătrașcu, 2012.
• Patricia Bouyer-Decitre, 2011.
• Mikołaj Bojańczyk, 2010.

The award includes an amount of 1000 € and an invitation to ICALP 2013 for a lecture.

The Presburger Award is sponsored by BiCi, the Bertinoro international Center for informatics.

Official website: http://www.eatcs.org/index.php/presburger",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "QC9XOjsKph-",
"year": null,
"venue": "Bull. EATCS 2017",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/496/484",
"forum_link": "https://openreview.net/forum?id=QC9XOjsKph-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Algorithmic Foundations of Programmable Matter Dagstuhl Seminar 16271",
"authors": [
"Sándor P. Fekete",
"Andréa W. Richa",
"Kay Römer",
"Christian Scheideler"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Algorithmic Foundations of Programmable Matter\nDagstuhl Seminar 16271\nSándor P. Fekete, Andréa Richa, Kay Römer, and Christian Scheideler\nAbstract\nIn July 2016, the Dagstuhl Seminar 16271 considered “Algorithmic Foundations of\nProgrammable Matter”, a new and emerging field that combines theoretical work on algo-\nrithms with a wide spectrum of practical applications that reach all the way from small-scale\nembedded systems to cyber-physical structures at nano-scale.\nThe aim of the Dagstuhl seminar was to bring together researchers from the algorithms\ncommunity with selected experts from robotics and distributed systems in order to set a solid\nbase for the development of models, technical solutions, and algorithms that can control\nprogrammable matter. Both communities benefited from such a meeting for the following\nreasons.\n\u000fMeeting experts from other fields provided additional insights, challenges and focus\nwhen considering work on programmable matter.\n\u000fInteracting with colleagues in a close and social manner gave many starting points for\ncontinuing collaboration.\n\u000fGetting together in a strong, large and enthusiastic group provided the opportunity to\nplan a number of followup activities.\n1 Summary\nProgrammable matter refers to a substance that has the ability to change its physical properties \n(shape, density, moduli, conductivit y, optical properties, etc.) in a programmable fashion, based \nupon user input or autonomous sensing. The potential applications are endless, e.g., smart \nmaterials, autonomous monitoring and repair, or minimal invasive surger y. Thus, there is a \nhigh relevance of this topic to industry and society in general, and much research has been \ninvested in the past decade to fabricate programmable matter. 
However, fabrication is only part of the story: without a proper understanding of how to program that matter, complex tasks such as minimally invasive surgery will be out of reach. Unfortunately, only very few people in the algorithms community have worked on programmable matter so far, so programmable matter has not received the attention it deserves given the importance of that topic.\nThe Dagstuhl seminar “Algorithmic Foundations of Programmable Matter” aimed at resolving that problem by getting together a critical mass of people from algorithms with a selection of experts from distributed systems and robotics in order to discuss and develop models, algorithms, and technical solutions for programmable matter.\nThe aim of the seminar was to bring together researchers from the algorithms community with selected experts from robotics and distributed systems in order to set a solid base for the development of models, technical solutions, and algorithms that can control programmable matter. The overall mix worked quite well: researchers from the more practical side (such as Julien Bourgeois, Nikolaus Correll, Ted Pavlic, Kay Römer, among others) interacted well with participants from the theoretical side (e.g., Jennifer Welch, Andréa Richa, Christian Scheideler, Sándor Fekete, and many others). Particularly interesting to see were well-developed but still expanding areas, such as tile self-assembly that already combines theory and practice (with visible and well-connected scientists such as Damien Woods, Matt Patitz, David Doty, Andrew Winslow, Robert Schweller) or multi-robot systems (Julien Bourgeois, Nikolaus Correll, Matteo Lasagni, André Naz, Benoît Piranda, Kay Römer).\nThe seminar program started with a set of four tutorial talks given by representatives from the different sets of participants to establish a common ground for discussion. 
From the robotics and distributed systems side, Nikolaus Correll and Julien Bourgeois gave tutorials on smart programmable materials and on the claytronics programmable matter framework, respectively. From the bioengineering side, Ted Pavlic gave a tutorial on natural systems that may inspire programmable matter. From the algorithmic side, Jacob Hendricks gave a tutorial on algorithmic self-assembly. In the mornings of the remaining four days, selected participants offered shorter presentations with a special focus on experience from past work and especially also open problems and challenges. Two of the afternoons were devoted to discussions in breakout groups. Four breakout groups were formed, each with fewer than 10 participants to allow for intense interaction. Inspired by a classification of research questions in biology into “why?” and “how?” questions presented in Ted Pavlic’s tutorial, the first breakout session was devoted to the “why?” questions underpinning programmable matter, especially also appropriate models of programmable matter systems (biological or engineered) suitable for algorithmic research. The second breakout session towards the end of the seminar was devoted to a set of specific questions given by the organizers that resulted from the discussions among the participants; they included both research questions and organizational questions (e.g., how to proceed after the Dagstuhl seminar). After each of the two breakout sessions, one participant of each of the four breakout groups reported back the main findings of the discussions to the plenum, leading to further discussion among all participants. One of the afternoons was devoted to a hike to a nearby village, where the participants also visited a small museum devoted to programmable mechanical musical devices.\nThe seminar was an overwhelming success. 
In particular, bringing together participants from a number of different but partially overlapping areas, in order to exchange problems and challenges on a newly developing field, turned out to be excellent for the setting of Dagstuhl, and the opportunities provided at Dagstuhl are perfect for starting a new community.\nParticipants were enthusiastic on a number of different levels.\n• Meeting experts from other fields provided additional insights, challenges and focus when considering work on programmable matter.\n• Interacting with colleagues in a close and social manner gave many starting points for continuing collaboration.\n• Getting together in a strong, large and enthusiastic group provided the opportunity to plan a number of follow-up activities.\nThe latter include connecting participants via a mailing list, the planning and writing of survey articles in highly visible publication outlets, and a starting point for specific scientific workshops and conferences.\nParticipants were highly enthusiastic about the possibility of another Dagstuhl seminar in the future; organizers are keeping the ball rolling on this, so it is quite possible that there will be more to come.\n2 Overview of Talks\nAbstracts of all talks are available at http://drops.dagstuhl.de/opus/volltexte/2016/6759/.\nClaytronics: an Instance of Programmable Matter\nJulien Bourgeois (FEMTO-ST Institute - Montbéliard, FR)\nA Markov Chain Algorithm for Compression in Self-Organizing Particle Systems\nSarah Cannon (Georgia Institute of Technology - Atlanta, US)\nAlgorithm design for swarm robotics and smart materials\nNikolaus Correll (University of Colorado - Boulder, US)\nDynamic Networks of Computationally Challenged Devices: the Passive Case\nYuval Emek (Technion - Haifa, IL)\nAlgorithms for robot navigation: From optimizing individual robots to particle swarms\nSándor Fekete (TU Braunschweig, DE)\nThe Amoebot Model\nRobert Gmyr (Universität Paderborn, DE)\nDances with Plants: Robot-supported Programmable Living Matter\nHeiko Hamann (Universität Paderborn, DE)\nIntroduction to Modeling Algorithmic Self-Assembling Systems\nJacob Hendricks (University of Wisconsin - River Falls, US)\nAdvantages, Limitations, Challenges of Tendon-Driven Programmable Chains\nMatteo Lasagni (TU Graz, AT)\nProgrammable Matter for Dynamic Environments\nOthon Michail (CTI - Rion, GR)\nEnergy Harvesting in-vivo Nano-Robots in Caterpillar Swarm\nVenkateswarlu Muni (Ben Gurion University of the Negev - Beer Sheva, IL)\nAlgorithmic design of complex 3D DNA origami structures\nPekka Orponen (Aalto University, FI)\nAlgorithmic Foundations of Biological Matter: Faster, Cheaper, and More Out of Control\nTheodore P. Pavlic (Arizona State University - Tempe, US)\nVisibleSim: Your simulator for Programmable Matter\nBenoît Piranda (FEMTO-ST Institute - Montbéliard, FR)\nOn obliviousness\nNicola Santoro (Carleton University - Ottawa, CA)\nTheory and practice of large scale molecular-robotic reconfiguration\nDamien Woods (California Institute of Technology - Pasadena, US)\nDistributed coordination of mobile robots in 3D-space\nYukiko Yamauchi (Kyushu University - Fukuoka, JP)",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "c60AyeVUggh",
"year": null,
"venue": "Bull. EATCS 2017",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/469/455",
"forum_link": "https://openreview.net/forum?id=c60AyeVUggh",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The EATCS Award 2017 - Laudatio for Eva Tardos",
"authors": [
"Fedor V. Fomin",
"Christos H. Papadimitriou",
"Jean-Eric Pin"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "TheEATCS A ward 2017\nLaudatio for ÈvaTardos\nThe EATCS Award 2017 is awarded to\nÈva Tardos\nfor her seminal contributions to many areas of Theoretical Computer Science.\nÈva Tardos is a leading researcher in the theory of computing whose deep con-\ntributions shaped the field of algorithms across three decades and through several\nof its major developments as a discipline. Her work is characterized by deep the-\noretical advances, the resolution of major challenges in the field, the shaping of\nnew research areas, and a track record of significant impact on key application\nareas.\nÈva Tardos is a leading researcher in the theory of computing whose deep con-\ntributions shaped the field of algorithms across three decades and through several\nof its major developments as a discipline. Her work is characterized by deep the-\noretical advances, the resolution of major challenges in the field, the shaping of\nnew research areas, and a track record of significant impact on key application\nareas.\nIn the early phase of Èva’s career, she emerged as the leading pioneer in bring-\ning techniques from discrete optimization to bear on the design of e \u000ecient algo-\nrithms. Her solo work on strongly polynomial algorithms was a breakthrough.\nFirst, it resolved a major open problem in the field, showing that minimum-cost\nflow problems - one of the basic problems in network flow - could be solved in\nstrongly polynomial time, with a running time depending only on the number of\nnodes and edges of the network, not on the magnitudes of its capacities or costs.\nShortly afterward, she went on to prove (with Andras Frank) an unanticipated ma-\njor generalization of this result, showing that all \"combinatorial\" linear programs\nhave strongly polynomial time algorithms. 
The arguments used to establish these results were profound and aesthetically elegant, a mixture of novel combinatorial techniques and an unexpected connection to the theory of Diophantine approximation.\nOver the next decade, Èva played a pivotal role in establishing the modern use of linear programming in algorithm design, and in shaping the basic architecture of the area of approximation algorithms, one of the most influential research themes in the field. For example, her work with Lenstra and Shmoys provided one of the first examples of how a sophisticated rounding scheme for a linear programming relaxation could produce strong approximation guarantees; the road map laid out in this paper has been followed literally hundreds of times in the three decades since it appeared. In subsequent work, Èva developed approximation algorithms for fundamental problems in a very wide range of areas, including many practical problems in communication network design, facility location, routing, clustering, classification, and social network analysis. The extensive follow-up work generated in each of these areas shows how her approaches have catalyzed new research directions and the development of novel techniques.\nThis is already a track record of enormous impact, but it is still only part of the picture: beginning in the late 1990s, Èva emerged as one of the leaders in shaping a completely new and broadly influential subfield of algorithms, the study of algorithmic game theory. Her results with Tim Roughgarden on the game-theoretic analysis of network traffic laid the foundation of algorithmic game theory. 
In this area, Èva has gone on to establish some of the field’s fundamental results in additional directions, including algorithmic mechanism design, game-theoretic network design, and sponsored search market design.\nIn addition to the profound and influential research that has established Èva as a central figure in computer science research, she has also contributed enormously to the field as an educator. She has mentored a long sequence of students, co-authored a widely-used textbook on algorithms, and was co-editor of the central handbook in algorithmic game theory. She has also been a leader through her service to the community including roles as editor-in-chief of major journals, program chair of major conferences, and membership on national advisory boards.\nÈva has long been one of the central figures setting the directions for theoretical computer science. Her combination of long-term vision, creativity, and sheer technical strength has reshaped and rebuilt the foundations of algorithm design. For all these reasons, the EATCS wants to celebrate Èva Tardos and her influential work, and is honored to award her with its most prestigious prize.\nThe EATCS Award Committee 2017\n• Fedor V. Fomin (chair)\n• Christos Papadimitriou\n• Jean-Eric Pin\nThe EATCS Award is given to acknowledge extensive and widely recognized contributions to theoretical computer science over a life-long scientific career. The list of the previous recipients of the EATCS Award is available at http://eatcs.org/index.php/eatcs-award.\nThe EATCS Award carries prize money of 1000 Euros and will be presented at ICALP 2017, which will take place in Warsaw (Poland) from the 10th till the 14th of July 2017.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "TNXwxENXJuc",
"year": null,
"venue": "Bull. EATCS 2018",
"pdf_link": "http://bulletin.eatcs.org/index.php/beatcs/article/download/558/555",
"forum_link": "https://openreview.net/forum?id=TNXwxENXJuc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The EATCS Award 2019 - Call for Nominations",
"authors": [
"Christos H. Papadimitriou"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "TheEATCS A ward 2019\nCall for Nominations\nDeadline : December 31, 2018\nThe European Association for Theoretical Computer Science (EATCS) an-\nnually honours a respected scientist from our community with the prestigious\nEATCS Distinguished Achievement Award. The award is given to acknowledge\nextensive and widely recognized contributions to theoretical computer science\nover a life long scientific career. For the EATCS Award 2019, candidates may\nbe nominated to the Award Committee consisting of\n\u000fChristos Papadimitriou (Chair),\n\u000fArtur Czumaj and\n\u000fMarta Kwiatkowska\nNominations will be kept strictly confidential. They should include supporting\njustification and be sent by e-mail to the chair of the EATCS Award Committee:\nChristos Papadimitriou\[email protected]\nPrevious recipients of the EATCS Award are:\nR.M. Karp (2000) C. Böhm (2001) M. Nivat (2002)\nG. Rozenberg (2003) A. Salomaa (2004) R. Milner (2005)\nM. Paterson (2006) D.S. Scott (2007) L.G. Valiant (2008)\nG. Huet (2009) K. Mehlhorn (2010) B. Trakhtenbrot (2011)\nM.Y . Vardi (2012) M.E. Dyer (2013) G.D. Plotkin (2014)\nC. Papadimitriou (2015) Dexter Kozen(2016) Éva Tardos(2017)\nN. Nisan (2018)\nThe next award will be presented during ICALP 2019 in Patras, Greece.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "WZwxIMUFU5v",
"year": null,
"venue": "Bull. EATCS 2017",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/486/474",
"forum_link": "https://openreview.net/forum?id=WZwxIMUFU5v",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Twenty Lectures on Algorithmic Game Theory",
"authors": [
"Tim Roughgarden",
"Kazuo Iwama"
],
"abstract": "Computer science and economics have engaged in a productive conversation over the past 15 years, resulting in a new field called algorithmic game theory or alternatively economics and computation. Many problems central to modern computer science, ranging from resource allocation in large networks to online advertising, fundamentally involve interactions between multiple self-interested parties. Economics and game theory oer a host of useful models and definitions to reason about such problems. The flow of ideas also travels in the other direction, as recent research in computer science complements the traditional economic literature in several ways. For example, computer science oers a focus on and a language to discuss computational complexity; has popularized the widespread use of approximation bounds to reason about models where exact solutions are unrealistic or unknowable; and proposes several alternatives to Bayesian or average-case analysis that encourage robust solutions to economic design problems. The standard reference in the field [3] is aimed at researchers rather than students and autodidacts, and it predates the many important results that have appeared over the past ten years",
"keywords": [],
"raw_extracted_content": "BookIntroduction by the Authors\nInvited by\nKazuoIwama\[email protected]\nBulletin Editor\nKyoto University, Japan\nTwenty Lectures on Algorithmic Game\nTheory\nTim Roughgarden\u0003\n1 Introduction\nComputer science and economics have engaged in a productive conversation over\nthe past 15 years, resulting in a new field called algorithmic game theory or alter-\nnatively economics and computation . Many problems central to modern computer\nscience, ranging from resource allocation in large networks to online advertising,\nfundamentally involve interactions between multiple self-interested parties. Eco-\nnomics and game theory o \u000ber a host of useful models and definitions to reason\nabout such problems. The flow of ideas also travels in the other direction, as recent\nresearch in computer science complements the traditional economic literature in\nseveral ways. For example, computer science o \u000bers a focus on and a language to\ndiscuss computational complexity; has popularized the widespread use of approx-\nimation bounds to reason about models where exact solutions are unrealistic or\nunknowable; and proposes several alternatives to Bayesian or average-case anal-\nysis that encourage robust solutions to economic design problems. The standard\nreference in the field [3] is aimed at researchers rather than students and autodi-\ndacts, and it predates the many important results that have appeared over the past\nten years.\nMy book Twenty Lectures on Algorithmic Game Theory [5] grew out of my\nlecture notes for a course that I taught at Stanford five times between 2004 and\n2013.1The course aims to give students a quick and accessible introduction to\nmany of the most important concepts in the field, with representative models and\nresults chosen to illustrate broader themes. This book has the same goal, and I\nhave stayed close to the structure and spirit of my classroom lectures. 
I assume no background in game theory or economics, nor can the book substitute for a traditional book on these subjects. My book is far from encyclopedic, but fortunately there are excellent existing books and books in preparation on many of the omitted topics [1, 2, 3, 4, 6].\n*Department of Computer Science, Stanford University, 474 Gates Building, 353 Serra Mall, Stanford, CA 94305. Email: [email protected]. Research supported in part by NSF award CCF-1524062.\n¹See http://timroughgarden.org/notes.html for lecture notes for this and many other courses.\n2 Brief Overview\nAfter the introductory lecture, the book is loosely organized into three parts. Lectures 2–10 cover several aspects of “mechanism design”—the science of rule-making. These lectures cover the Vickrey auction and the VCG mechanism, algorithmic mechanism design, Myerson’s theory of revenue-maximizing auctions, and case studies in online advertising, wireless spectrum auctions, and kidney exchange. Lectures 11–15 outline the theory of the “price of anarchy”—approximation guarantees for equilibria of games found “in the wild,” such as large networks with competing users. Specific topics include selfish routing, network cost-sharing games and the price of stability, potential games, and smoothness arguments. Finally, Lectures 16–20 describe positive and negative results for the computation of equilibria, both by distributed learning algorithms and by computationally efficient centralized algorithms. These lectures discuss best-response dynamics, no-regret algorithms, and PLS- and PPAD-completeness.\n3 Top 10 List\nThe following “top 10 list” provides additional details about the book’s contents.\n1. The second-price single-item auction (Lecture 2). Our first example of an “ideal” auction, which is dominant-strategy incentive compatible (DSIC), welfare maximizing, and computationally efficient. 
Single-item auctions already show how small design changes, such as a first-price vs. a second-price payment rule, can have major ramifications for participant behavior.\n2. Myerson’s lemma (Lectures 3–5). For single-parameter problems, DSIC mechanism design reduces to monotone allocation rule design. Applications include ideal sponsored search auctions, polynomial-time approximately optimal knapsack auctions, and the reduction of expected revenue maximization with respect to a valuation distribution to expected virtual welfare maximization.\n3. The Bulow-Klemperer theorem (Lecture 6). In a single-item auction, adding an extra bidder is as good as knowing the underlying distribution and running an optimal auction. This result, along with the prophet inequality, is an important clue that simple and prior-independent auctions can be almost as good as optimal ones.\n4. The VCG mechanism (Lectures 7–8). Charging participants their externalities yields a DSIC welfare-maximizing mechanism, even in very general settings. The VCG mechanism is impractical in many real-world applications, including wireless spectrum auctions, which motivates simpler and indirect auction formats like simultaneous ascending auctions.\n5. Mechanism design without money (Lectures 9–10). Many of the most elegant and widely deployed mechanisms do not use payments. Examples include the Top Trading Cycle mechanism, mechanisms for kidney exchange, and the Gale-Shapley stable matching mechanism.\n6. Selfish routing (Lectures 11–12). Worst-case selfish routing networks are always simple, with Pigou-like networks maximizing the price of anarchy (POA). The POA of selfish routing is therefore large only when network cost functions are highly nonlinear, corroborating empirical evidence that network over-provisioning leads to good network performance.\n7. Robust POA Bounds (Lecture 14). 
All of the proofs of POA bounds in these lectures are “smoothness arguments.” As such, they apply to relatively permissive and tractable equilibrium concepts like correlated and coarse correlated equilibria.\n8. Potential games (Lectures 13 and 16). In many classes of games, including routing, location, and network cost-sharing games, players are inadvertently striving to optimize a potential function. Every potential game has at least one pure Nash equilibrium and best-response dynamics always converges. Potential functions are also useful for proving POA-type bounds.\n9. No-regret algorithms (Lectures 17–18). No-regret algorithms exist, including simple ones with optimal regret bounds, like the multiplicative weights algorithm. If each agent of a repeatedly played game uses a no-regret or no-swap-regret algorithm to choose her mixed strategies, then the time-averaged history of joint play converges to the sets of coarse correlated equilibria or correlated equilibria, respectively. These two equilibrium concepts are computationally tractable, as are mixed Nash equilibria in two-player zero-sum games.\n10. Complexity of equilibrium computation (Lectures 19–20). The problem of computing a Nash equilibrium appears computationally intractable in general. PLS-completeness and PPAD-completeness are analogs of NP-completeness tailored to provide evidence of intractability for pure and mixed equilibrium computation problems, respectively.\nReferences\n[1] F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia, editors. Handbook of Computational Social Choice. Cambridge University Press, 2016.\n[2] J. D. Hartline. Mechanism design and approximation. Book in preparation, 2017.\n[3] N. Nisan, T. Roughgarden, É. Tardos, and V. Vazirani, editors. Algorithmic Game Theory. Cambridge University Press, 2007.\n[4] D. C. Parkes and S. Seuken. Economics and computation. Book in preparation, 2017.\n[5] T. Roughgarden. 
Twenty Lectures on Algorithmic Game Theory. Cambridge University Press, 2016.\n[6] Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, 2009.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "hiL-efPew1u",
"year": null,
"venue": "Bull. EATCS 2014",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/237/228",
"forum_link": "https://openreview.net/forum?id=hiL-efPew1u",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Structure vs Combinatorics in Computational Complexity",
"authors": [
"Boaz Barak"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Structure vs Combinatorics in Computational\nComplexity\u0003\nBoaz Barak\nMicrosoft Research\nAbstract\nSome computational problems seem to have a certain “structure” that is manifested\nin non-trivial algorithmic properties, while others are more “unstructured” in the sense\nthat they are either “very easy” or “very hard”. I survey some of the known results\nand open questions about this classification and its connections to phase transitions,\naverage-case complexity, quantum computing and cryptography.\nComputational problems come in all di \u000berent types and from all kinds of applications,\narising from engineering as well the mathematical, natural, and social sciences, and in-\nvolving abstractions such as graphs, strings, numbers, and more. The universe of potential\nalgorithms is just as rich, and so a priori one would expect that the best algorithms for\ndi\u000berent problems would have all kinds of flavors and running times. However natural\ncomputational problems “observed in the wild” often display a curious dichotomy — either\nthe running time of the fastest algorithm for the problem is some small polynomial in the\ninput length (e.g., O(n) orO(n2)) or it is exponential (i.e., 2\u000fnfor some constant \u000f > 0).\nMoreover, while indeed there is a great variety of e \u000ecient algorithms for those problems\nthat admit them, there are some general principles such as convexity (i.e., the ability to\nmake local improvements to suboptimal solutions or local extensions to partial ones) that\nseem to underly a large number of these algorithms.1This phenomenon is also related to\nthe “unreasonable e \u000bectiveness” of the notion of NP-completeness in classifying the com-\nplexity of thousands of problems arising from dozens of fields. 
While a priori you would expect problems in the class NP (i.e., those whose solution can be efficiently certified) to have all types of complexities, for natural problems it is often the case that they are either in P (i.e., efficiently solvable) or are NP-hard (i.e., as hard as any other problem in NP, which often means complexity of 2^(εn), or at least 2^(n^ε) for some constant ε > 0).\n*This is an adaptation of the blog post http://windowsontheory.org/2013/10/07/structure-vs-combinatorics-in-computational-complexity/\n¹The standard definition of “convexity” of the solution space of some problem only applies to continuous problems and means that any weighted average of two solutions is also a solution. However, I use “convexity” here in a broad sense meaning having some non-trivial ways to combine several (full or partial) solutions to create another solution; for example having a matroid structure, or what’s known as “polymorphisms” in the constraint-satisfaction literature [5, 25, 31].\nTo be sure, none of these observations are universal laws. In fact there are theorems showing exceptions to such dichotomies: the Time Hierarchy Theorem [20] says that for essentially any time-complexity function T(·) there is a problem whose fastest algorithm runs in time (essentially) T(n). Also, Ladner’s Theorem [27] says that, assuming P ≠ NP, there are problems that are neither in P nor are NP-complete. Moreover, there are some natural problems with apparent “intermediate complexity”. Perhaps the most well-known example is the Integer Factoring problem mentioned below. 
Nevertheless, the phenomenon of dichotomy, and the related phenomenon of recurring algorithmic principles across many problems, seem far too prevalent to be just an accident, and it is these phenomena that are the topic of this essay.\nI believe that one reason underlying this pattern is that many computational problems, in particular those arising from combinatorial optimization, are unstructured. The lack of structure means that there is not much for an algorithm to exploit and so the problem is either “very easy” — e.g., the solution space is simple enough so that the problem can be solved by local search or convex optimization² — or it is “very hard” — e.g., it is NP-hard and one can’t do much better than exhaustive search. On the other hand there are some problems that possess a certain (often algebraic) structure, which typically is exploitable in some non-trivial algorithmic way. These structured problems are hence never “extremely hard”, but they are also typically not “extremely easy” since the algorithms solving them tend to be more specialized, taking advantage of their unique properties. In particular, it is harder to understand the complexity of these algebraic problems, and they are more likely to yield algorithmic surprises.\nI do not know of a good way to formally classify computational tasks into combinatorial/unstructured vs. algebraic/structured ones, but in the rest of this essay I try to use some examples to get a better sense of the two sides of this divide. The observations below are not novel, though I am not aware of explicit expositions of such a classification (and would appreciate any pointers, as well as any other questions or critique). 
As argued below, more study into these questions would be of significant interest, in particular for cryptography and average-case complexity.

1 Combinatorial/Unstructured problems

The canonical example of an unstructured combinatorial problem is SAT—the task of determining, given a Boolean formula φ in variables x_1, ..., x_n with the operators ¬, ∧, ∨, whether there exists an assignment x to the variables that makes φ(x) true. SAT is an NP-complete problem, which means it cannot be solved efficiently unless P = NP.

[Footnote 2: Of course, even if the algorithm is simple, analyzing it can be quite challenging, and actually obtaining the fastest algorithm, as opposed to simply one that runs in polynomial time, often requires additional highly non-trivial ideas.]

[Figure 1: An illustration of the solution space geometry of a random SAT formula, where each point corresponds to an assignment, with height being the number of constraints violated by the assignment. The left figure depicts the “ball” regime, where a satisfying assignment can be found at the bottom of a smooth “valley” and hence local algorithms will quickly converge to it. The right figure depicts the “shattered” regime, where the surface is very ragged, with an exponential number of crevices and local optima, and thus local algorithms (and as far as we know any algorithm) will likely fail to find a satisfying assignment. Figures courtesy of Amin Coja-Oghlan.]

In fact, the Exponential Time Hypothesis [21] posits that every algorithm solving SAT must take at least 2^{εn} time for some ε > 0. SAT illustrates the above dichotomy in the sense that its natural restrictions are either as hard as the general problem, or become easily solvable, as in the case of the 2SAT problem (where the formula is in conjunctive normal form with each clause of arity 2), which can be solved efficiently via a simple propagation algorithm. This observation applies much more generally than SAT. 
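The tractability of 2SAT can be made concrete. The sketch below is a minimal hypothetical implementation (the names are mine, not from the essay); rather than the propagation algorithm mentioned in the text, it uses the equivalent implication-graph method of Aspvall, Plass, and Tarjan: a 2-CNF formula is unsatisfiable exactly when some variable lies in the same strongly connected component as its negation.

```python
# Decide 2SAT via the implication graph: a clause (a OR b) yields the
# implications (not a -> b) and (not b -> a); the formula is unsatisfiable
# iff some variable x shares a strongly connected component with (not x).
def two_sat(n, clauses):
    def idx(lit):
        # Literal +i / -i for variable i in 1..n, indexed 0 .. 2n-1.
        return 2 * (abs(lit) - 1) + (lit < 0)

    g = [[] for _ in range(2 * n)]   # implication graph
    rg = [[] for _ in range(2 * n)]  # its reverse, for Kosaraju's algorithm
    for a, b in clauses:
        for u, v in ((idx(-a), idx(b)), (idx(-b), idx(a))):
            g[u].append(v)
            rg[v].append(u)

    # First pass: iterative DFS on g, recording vertices by finish time.
    order, seen = [], [False] * (2 * n)
    for s in range(2 * n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(g[v]):
                stack.append((v, i + 1))
                w = g[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)

    # Second pass: sweep the reverse graph in decreasing finish time,
    # labeling strongly connected components.
    comp, c = [-1] * (2 * n), 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in rg[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))
```

For instance, the formula containing all four 2-clauses over two variables is reported unsatisfiable, while dropping any clause makes it satisfiable.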
In particular, the widely believed Feder–Vardi dichotomy conjecture [17] states that every constraint satisfaction problem (CSP) is either NP-hard or in P. In fact, researchers conjecture [5] (and have partially confirmed) the stronger statement that every CSP can either be solved by some specific algorithm of low polynomial time (such as propagation or generalizations of Gaussian elimination) or is NP-hard via a linear-blowup reduction from SAT, and hence (under the Exponential Time Hypothesis) cannot be solved faster than 2^{εn} time for some ε > 0.

Random SAT formulas also display a similar type of dichotomy. Recent research into random k-SAT (based also on tools from statistical physics) suggests that it has multiple thresholds where the problem changes its nature (see, e.g., [12, 16, 14] and the references within). When the density α (i.e., the ratio of constraints to variables) of the formula is larger than some number α_s (equal roughly to 2^k ln 2), then with high probability the formula is “overconstrained” and no satisfying assignment exists. There is some number α_d < α_s (equal roughly to 2^k ln k / k) such that for α < α_d, the space of satisfying assignments

[Footnote 3: The main stumbling block for completing the proof is dealing with those CSPs that require a Gaussian-elimination-type algorithm to solve; one can make the argument that those CSPs actually belong to the algebraic side of our classification, further demonstrating that obtaining precise definitions of these notions is still a work in progress. Depending on how it is resolved, the Unique Games Conjecture, discussed in [3], might also give rise to CSPs with “intermediate complexity” in the realm of approximation algorithms.] 
[Footnote 3, continued: Interestingly, both these issues go away when considering random, noisy CSPs, as in this case solving linear equations becomes hard, and solving Unique Games becomes easy.]

for a random formula looks roughly like a discrete ball, and, due to this relatively simple geometry, some local-search-type algorithms can succeed in finding satisfying assignments. However, for α ∈ (α_d, α_s), satisfying assignments still exist, but the geometry of the solution space becomes vastly different, as it shatters into exponentially many clusters, each such cluster separated from the others by a sea of assignments that violate a large number of the constraints; see Figure 1. In this regime no efficient algorithm is known to find a satisfying assignment, and it is possible that this is inherently hard [2, 37] (see Footnote 4).

Dichotomy means that when combinatorial problems are hard, they are typically very hard, not just in the sense of not having a subexponential algorithm, but also in that they can’t be solved non-trivially in some intermediate computational models that are stronger than P but cannot solve all of NP, such as quantum computers, statistical zero knowledge, and others. In particular, it has been observed by several people that for combinatorial problems the existence of a good characterization (i.e., the ability to efficiently verify both the existence and non-existence of a solution) goes hand-in-hand with the existence of a good algorithm. Using complexity jargon, in the realm of combinatorial optimization it seems to hold that P = NP ∩ coNP, even though we believe this is false in general. Indeed, for many combinatorial problems such as matching, max-flow, planarity, etc., demonstrating a good characterization is an important step toward finding an efficient algorithm. 
This is related to the notion of duality in convex programming, which is often the method of choice for solving such problems.

Combinatorial problems can be quite useful for cryptography. It is possible to obtain one-way functions from random instances of combinatorial problems such as SAT and Clique [2, 24]. Moreover, the problem of attacking a cryptographic primitive such as a block cipher or a hash function can itself be considered a combinatorial problem (and indeed this connection was used for cryptanalysis [29]). However, these are all private-key cryptographic schemes, and do not allow two parties to communicate securely without first exchanging a secret key. For the latter task we need public-key cryptography, and as we discuss below, the currently known and well-studied public-key encryption schemes all rely on algebraic computational problems.

2 Algebraic/Structured problems

Factoring is a great example of an algebraic problem; this is the task of finding, given an n-bit integer N, the prime numbers p_1, ..., p_k such that N = p_1 · · · p_k. No polynomial-time algorithm is known for Factoring, but it has seen some non-trivial algorithmic advances.

[Footnote 4: The Survey Propagation Algorithm [10] is a very interesting algorithm that arose from statistical-physics intuition, and is experimentally better than other algorithms at solving random k-SAT formulas for small k, such as k = 3, 4. However, it is believed that, at least for larger k, it too cannot succeed in the regime where the solution space geometry shatters [34, 15]. The current best known algorithm for random k-SAT for large k is given in [13].]

While the natural trial-division algorithm takes roughly 2^{n/2} steps to solve Factoring, the Number Field Sieve algorithm, which is the current best, takes roughly 2^{n^{1/3} polylog(n)} steps (see [30]). Factoring can also be solved in polynomial time on quantum computers using Shor’s Algorithm [35]. 
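For concreteness, the 2^{n/2}-step baseline can be sketched in a few lines. This is a minimal illustrative version (not from the essay); the Number Field Sieve, by contrast, is a far more elaborate algorithm.

```python
# Trial division: the loop variable runs up to sqrt(N), which for an n-bit
# integer N is about 2^{n/2} -- hence the 2^{n/2}-step cost quoted in the text.
def trial_division(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # the remaining cofactor is prime
    return factors
```

For example, `trial_division(336)` returns the prime factorization [2, 2, 2, 2, 3, 7].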
Finally, Factoring (or more accurately, the decision problem obtained by looking at individual bits of the output) is also in the class NP ∩ coNP, which means that one can efficiently verify the value of a particular bit of the answer, regardless of whether this value is zero or one. These results almost certainly mean that Factoring is not NP-complete.

There is another, more subjective, sense in which I find Factoring different from SAT. I personally would be much more shocked by a 2^{√n}-time algorithm for SAT than by a 2^{n^{1/6}}-time algorithm for Factoring. The reason is that, while people have found clever ways to speed up the 2^n-time exhaustive-search algorithm for SAT (especially on certain types of instances), these approaches all seem to inherently require exponential time, and are not as qualitatively different from exhaustive search in the way that the Number Field Sieve is different from trial division. In contrast, Factoring clearly has strong algebraic structure that we do not completely understand, and perhaps we have not reached the limit of its exploitation by algorithms. To see that this is not completely implausible, consider the problem of computing the discrete logarithm in fields of small characteristic. This problem shares many properties with Factoring, and it also shared the property of having a best-known running time of 2^{n^{1/3} polylog(n)}, until this was recently improved to 2^{n^{1/4} polylog(n)} and then to 2^{polylog(n)} [23, 4].

Not all algebraic problems are hard. Factoring univariate polynomials over finite fields can be solved efficiently using the Berlekamp or Cantor–Zassenhaus algorithms (see, e.g., [36, Chapter 21]). These algorithms also exemplify the statement above that algorithms for algebraic problems are often very specialized and use non-trivial properties of the problem’s structure. 
For this reason, it is harder to predict with confidence what the best algorithm for a given algebraic problem is, and over the years we have seen several surprising algorithms for such problems, including, for example, the fast matrix-multiplication algorithms, the non-trivial factoring algorithms and deterministic primality testing, as well as the new algorithm for discrete logarithm over small-characteristic fields mentioned above.

Relation to cryptography. Algebraic problems are closely related to public-key cryptography. The most widely used public-key cryptosystem is RSA, whose security relies on the hardness of Factoring. The current subexponential algorithms for Factoring are the reason why we use RSA keys of 1024 or 2048 bits, even though even the yet-to-be-built exaflop supercomputers would take thousands of years to perform, say, 2^{100} computational operations. This also demonstrates how fragile RSA is to any surprising algorithmic advance. If the exponent of the best factoring algorithm were halved (i.e., changed from 1/3 to 1/6) then, roughly speaking, to get equivalent security we would need to square the size of the key. Since the RSA encryption and decryption algorithms take time at least quadratic in the size of the key, that would make RSA pretty impractical.

Cryptosystems based on the discrete logarithm problem in elliptic curves yield one alternative to RSA which is currently not known to be broken in subexponential time. Elliptic-curve discrete log is of course also very much an algebraically structured problem, and so, I would argue, one in which further algorithmic surprises are hard to rule out. 
Moreover, like Factoring, this problem can be solved in polynomial time by quantum computers, using Shor’s algorithm.

The only other public-key cryptosystems that are researched enough to have some confidence in their security are based on decoding problems for linear codes or integer lattices. These problems are not known to have subexponential algorithms, classical or quantum. Moreover, some variants of these problems are actually NP-hard. Specifically, these problems are parameterized by a number α, the approximation factor, where smaller α means the problem is harder. For example, the shortest-vector problem in a lattice can be solved efficiently for α ≥ c^n (where c > 1 is some constant and n is the dimension of the lattice, which is related to the length of the input), and the problem is NP-hard for α ≤ n^δ (where δ = δ(n) is some function of n tending slowly to zero). For this reason lattice problems were once seen as a potential approach to getting both private- and public-key crypto based on the minimal assumption that P ≠ NP, which in particular would yield public-key crypto based on unstructured problems such as SAT. However, we only know how to get public-key crypto from these problems for α = n^e for some e > 1/2, while we have reason to believe that for α > n^{1/2} the problem does actually possess algebraic (or at least geometric) structure; this is because in this range the problem has a “good characterization” (i.e., it is in NP ∩ coNP or AM ∩ coAM). 
A similar phenomenon also occurs for other problems such as learning parity with noise and random 3SAT (see the discussion in [1])—there seem to be two thresholds α_G < α_E such that for α < α_G the problem is hard and arguably unstructured, for α ∈ (α_G, α_E) the problem becomes useful for public-key cryptography but also seems to suddenly obtain some structure such as a “good characterization”, while for α > α_E the problem becomes easy. Another sign of potential structure in lattice problems is the existence of a subexponential quantum algorithm for the hidden subgroup problem in dihedral groups, which is related to these problems [26, 32].

The bottom line is that, based on the currently well-studied schemes, structure is strongly associated with (and perhaps even implied by) public-key cryptography (see Footnote 5). This is troubling news, since it makes public-key crypto somewhat of an “endangered species” that could be wiped out by a surprising algorithmic advance. Therefore the question of whether structure is inherently necessary for public-key crypto is not only of mathematical interest but of practical importance as well. Cryptography is not just an application of this classification but also provides a useful lens on it. The distinction between private-key and public

[Footnote 5: I stress that it is not known that public-key cryptography necessitates any structure beyond that needed for private-key cryptography. It is known that one cannot base public-key cryptography on private-key cryptography via black-box reductions [22, 9].] 
[Footnote 5, continued: The best non-black-box negative result is that the existence of a secure homomorphic encryption scheme implies that AM ∩ coAM ⊄ BPP, as any such scheme has a statistical rerandomization procedure [33], which implies that it can be broken using an oracle to the Statistical Difference problem, which is in the class SZK ⊆ AM ∩ coAM; see also [7].]

key crypto mirrors the distinction between unstructured and structured problems. In the private-key world, there are many different constructions of (based on current knowledge) apparently secure cryptosystems; in fact, one may conjecture (as was done by Gowers [18]) that if we just combined a large enough number of random reversible local operations then we would obtain a secure block cipher. In contrast, for public-key cryptography, finding a construction that strikes the right balance between structure and hardness is a very hard task, and we still only know of a handful or so of such constructions.

3 A different approach to average-case complexity

I am particularly interested in this classification in the context of average-case complexity. In the case of worst-case complexity, while we have not yet managed to prove that P ≠ NP, complexity theorists have achieved something like the next best thing—classifying a large number of problems into hard and easy ones based on this single assumption. We have not been able to replicate this success in average-case complexity, and there is a good reason for that. Our main tool for basing one assumption on another—the reduction—is extremely problematic in average-case complexity, since there are inherent reasons why a reduction would not preserve the distribution of the inputs. To illustrate this, suppose that we tried to show that an average-case problem A is no harder than an average-case problem B using a standard Karp reduction f (i.e., f : {0,1}^n → {0,1}^m is a function mapping an A-input x into a B-input y such that B(y) = A(x)). 
For simplicity, assume that the input distribution for both problems is the uniform distribution. This would imply that for a random x ∈ {0,1}^n, f(x) should be distributed close to the uniform distribution over {0,1}^m. But we cannot expect this to happen in any reasonable reduction, as all of them add gadgets or blow up the size of the instance in some way, meaning that m > n, in which case f(x) is distributed over a subset of {0,1}^m of size at most 2^{m−1} and hence is far from the uniform distribution.

[Footnote 6: As a further argument that reductions should increase the input length, note that if A and B were shown equivalent by reductions f and g that shrink the size of the input even by a single bit, then repeating these reductions recursively shows that both A and B can be solved in polynomial time. This argument can be extended to the case that f and g are length-preserving, under the assumption that f ∘ g is not too close to the identity permutation and that the inputs of length n − 1 are embedded in the set of inputs of length n. One can also use similar arguments to rule out certain types of probabilistic reductions, even those that increase the input size, if we assume the reduction is efficiently invertible.]

This difficulty is one reason why the theory of average-case complexity is much less developed than the theory of worst-case complexity, even though average-case complexity is much more relevant for many applications. The observations above suggest that, at least for combinatorial problems, we might hope for a different approach: define a meta-conjecture that stipulates that for a whole class of average-case problems, a certain algorithmic framework yields the optimal efficient algorithm, meaning that beating the performance of that algorithm would be infeasible (e.g., take exponential time). To make things more concrete, consider the following hypothesis from the paper [6]:

Random CSP Hypothesis. 
For every predicate P : {0,1}^l → {0,1}, if we let Random Max(P) be the problem of estimating the fraction of constraints that can be satisfied for an instance chosen at random, then no efficient algorithm can obtain a better approximation to Random Max(P) than α(P), where α(P) is the approximation obtained by the canonical semidefinite program (a type of convex relaxation) for this problem (see Footnote 7).

Note that this is a much more general conjecture than P ≠ NP, which can be reduced to the statement that a single problem (say worst-case SAT) cannot be efficiently solved. In contrast, the Random CSP Hypothesis contains an unbounded number of hardness conjectures (one for every predicate) that (except in very special cases) are not known to be reducible to one another. Of course, to derive a concrete assumption about a predicate P from this hypothesis one needs to calculate α(P), but fortunately for random CSPs this can be done easily—one can of course run the algorithm, but there is also an analytical expression for this quantity.

Despite it being such a general hypothesis, I don’t think the Random CSP Hypothesis is yet general enough—there may well be significant extensions of this hypothesis that are still true, involving combinatorial problems different from CSPs, and distributions different from the uniform one. Perhaps with time, researchers will find the “right” meta-conjecture which will capture a large fraction of the problems we consider “combinatorial”.

At first brush, it might seem that I’m suggesting to trivialize research in average-case complexity by simply assuming all the hardness results we wish for. But of course, there is still a very real challenge to find out whether these assumptions are actually true! Given our current state of knowledge, I don’t foresee an unconditional proof of these types of assumptions, or even a reduction to a single problem, any time soon. 
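Returning to the distribution-preservation obstacle above: it can be checked exhaustively for toy parameters. The sketch below (hypothetical helper names, assuming uniform input distributions as in the text) computes the total variation distance of f(U_n) from the uniform distribution on {0,1}^m; any f with m > n hits at most 2^n of the 2^m outputs, forcing distance at least 1/2.

```python
from itertools import product

# Exhaustively compare the output distribution f(U_n) of a candidate
# "reduction" f with the uniform distribution on {0,1}^m.
def tv_from_uniform(f, n, m):
    counts = {}
    for x in product([0, 1], repeat=n):
        y = f(x)
        counts[y] = counts.get(y, 0) + 1
    total = 2 ** n
    # Total variation distance: half the L1 distance between the two
    # probability vectors, including outputs that f never attains.
    dist = sum(abs(c / total - 2 ** -m) for c in counts.values())
    dist += (2 ** m - len(counts)) * 2 ** -m
    return dist / 2

pad = lambda x: x + (0,)  # a toy injective "reduction" with m = n + 1
```

Here `tv_from_uniform(pad, 4, 5)` evaluates to exactly 0.5, no matter how the padding bit is chosen, illustrating why a length-increasing reduction cannot map uniform inputs to uniform outputs.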
But this doesn’t mean we can’t gather evidence on these meta-assumptions. Moreover, such assumptions form very “fat targets” for potential refutations. For example, all we have to do to refute the Random CSP Hypothesis is to find a single predicate P and a single efficient algorithm A such that A gives a better approximation factor than α(P) for Random Max(P). In fact, there are very natural candidate algorithms to do just that, including in particular more complicated convex programs known as semidefinite programming hierarchies. Analyzing the performance of such algorithms raises some fascinating mathematical questions, many of which we haven’t yet been able to solve, and this is a very interesting research area in its own right. With effort and time, if no refutation is found, we might gain confidence in the veracity of such meta-assumptions, and obtain a much clearer view of the landscape of average-case complexity, and of complexity at large.

[Footnote 7: The notion of “chosen at random” roughly corresponds to the uniform distribution over inputs, or the uniform distribution with an appropriately “planted” satisfying assignment, with the precise notion of “estimation” being the appropriate one for these different models; see the paper for details. The Random CSP Hypothesis deals with the overconstrained regime of random SAT formulas, as opposed to the underconstrained regime discussed above in the context of phase transitions.]

Conclusions

While much of what I discussed consists of anecdotal examples, I believe that some works, such as those related to the Feder–Vardi conjecture or to phase transitions in random CSPs, offer a glimpse of a potential general theory of the complexity of combinatorial problems. I think there is room for some ambitious conjectures to try to illuminate this area. Some of these conjectures might turn out to be false, but we can learn a lot from exploring them. 
Understanding whether the “markers of structure” (subexponential algorithms, quantum algorithms, good characterization, usefulness for public-key cryptography, etc.) always need to go together would be extremely useful for many applications, and in particular for cryptography. Even more speculatively, perhaps thinking about these issues can help toward the goal of unconditional results. The richness of the space of algorithms is one of the main “excuses” offered for our relatively little success in proving unconditional lower bounds. If indeed this space is much more limited for combinatorial problems, perhaps this can help in finding such proofs (see Footnote 8).

Acknowledgements. I thank Scott Aaronson, Dimitris Achlioptas, Amin Coja-Oghlan, Tim Gowers, Joshua Grochow, David Steurer, and Moshe Vardi for useful comments and discussion.

References

[1] Benny Applebaum, Boaz Barak, and Avi Wigderson. Public-key cryptography from different assumptions. In Leonard J. Schulman, editor, STOC, pages 171–180. ACM, 2010.

[2] Dimitris Achlioptas and Amin Coja-Oghlan. Algorithmic barriers from phase transitions. In FOCS, pages 793–802, 2008.

[Footnote 8: In some sense, such an approach to proving lower bounds is dual to Mulmuley’s approach of “Geometric Complexity Theory” (GCT) [28, 8, 11, 19]. The GCT approach attempts to use specific properties of structured functions such as the permanent to obtain a lower bound; these properties are actually “constructive” in the Razborov–Rudich sense of Natural Proofs. If we focused on combinatorial, “unstructured”, problems then we would need to come up with general properties guaranteeing hardness that would also apply to random functions (which are the ultimate unstructured functions). The Razborov–Rudich result implies such properties would be inherently non-constructive. 
Valiant’s approach for proving certain types of lower bounds via Matrix Rigidity [38] can be thought of as an instance of the latter approach.]

[3] Boaz Barak. Truth vs. proof in computational complexity. Bulletin of the EATCS, 108:130–142, 2012.

[4] Razvan Barbulescu, Pierrick Gaudry, Antoine Joux, and Emmanuel Thomé. A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. IACR Cryptology ePrint Archive, 2013:400, 2013.

[5] Andrei Bulatov, Peter Jeavons, and Andrei Krokhin. Classifying the complexity of constraints using finite algebras. SIAM Journal on Computing, 34(3):720–742, 2005. Preliminary version in ICALP ’00.

[6] Boaz Barak, Guy Kindler, and David Steurer. On the optimality of semidefinite relaxations for average-case and generalized constraint satisfaction. In Robert D. Kleinberg, editor, ITCS, pages 197–214. ACM, 2013.

[7] Andrej Bogdanov and Chin Ho Lee. Limits of provable security for homomorphic encryption. In Ran Canetti and Juan A. Garay, editors, CRYPTO (1), volume 8042 of Lecture Notes in Computer Science, pages 111–128. Springer, 2013.

[8] Peter Bürgisser, J. M. Landsberg, Laurent Manivel, and Jerzy Weyman. An overview of mathematical issues arising in the geometric complexity theory approach to VP ≠ VNP. SIAM J. Comput., 40(4):1179–1209, 2011.

[9] Boaz Barak and Mohammad Mahmoody-Ghidary. Merkle puzzles are optimal: An O(n²)-query attack on any key exchange from a random oracle. In Shai Halevi, editor, CRYPTO, volume 5677 of Lecture Notes in Computer Science, pages 374–390. Springer, 2009.

[10] Alfredo Braunstein, Marc Mézard, and Riccardo Zecchina. Survey propagation: An algorithm for satisfiability. Random Structures & Algorithms, 27(2):201–226, 2005.

[11] Peter Bürgisser. Prospects for geometric complexity theory. In IEEE Conference on Computational Complexity, page 235. IEEE, 2012.

[12] Amin Coja-Oghlan. Random constraint satisfaction problems. 
Electronic Proceedings in Theoretical Computer Science, 9, 2009. Available as arXiv preprint 0911.2322.

[13] Amin Coja-Oghlan. A better algorithm for random k-SAT. SIAM J. Comput., 39(7):2823–2864, 2010.

[14] Amin Coja-Oghlan and Konstantinos Panagiotou. Going after the k-SAT threshold. In Dan Boneh, Tim Roughgarden, and Joan Feigenbaum, editors, STOC, pages 705–714. ACM, 2013.

[15] Amin Coja-Oghlan and Angelica Y. Pachon-Pinzon. The decimation process in random k-SAT. SIAM J. Discrete Math., 26(4):1471–1509, 2012.

[16] Amir Dembo, Andrea Montanari, Allan Sly, and Nike Sun. The replica symmetric solution for Potts models on d-regular graphs. arXiv preprint arXiv:1207.5500, 2012.

[17] Tomás Feder and Moshe Y. Vardi. The computational structure of monotone monadic SNP and constraint satisfaction: A study through Datalog and group theory. SIAM J. Comput., 28(1):57–104, 1998.

[18] W. T. Gowers. An almost m-wise independent random permutation of the cube. Combinatorics, Probability and Computing, 5:119–130, 1996.

[19] Joshua A. Grochow. Unifying and generalizing known lower bounds via geometric complexity theory. arXiv, abs/1304.6333, 2013.

[20] Juris Hartmanis and Richard E. Stearns. On the computational complexity of algorithms. Transactions of the American Mathematical Society, 117:285–306, 1965.

[21] Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly exponential complexity? J. Comput. Syst. Sci., 63(4):512–530, 2001.

[22] Russell Impagliazzo and Steven Rudich. Limits on the provable consequences of one-way permutations. In David S. Johnson, editor, STOC, pages 44–61. ACM, 1989.

[23] Antoine Joux. A new index calculus algorithm with complexity L(1/4 + o(1)) in very small characteristic. IACR Cryptology ePrint Archive, 2013:95, 2013.

[24] Ari Juels and Marcus Peinado. Hiding cliques for cryptographic security. 
Designs, Codes and Cryptography, 20(3):269–280, 2000.

[25] Gábor Kun and Mario Szegedy. A new line of attack on the dichotomy conjecture. In STOC, pages 725–734, 2009.

[26] Greg Kuperberg. A subexponential-time quantum algorithm for the dihedral hidden subgroup problem. SIAM Journal on Computing, 35(1):170–188, 2005.

[27] Richard E. Ladner. On the structure of polynomial time reducibility. J. ACM, 22(1):155–171, 1975.

[28] Ketan Mulmuley. The GCT program toward the P vs. NP problem. Commun. ACM, 55(6):98–107, 2012.

[29] Ilya Mironov and Lintao Zhang. Applications of SAT solvers to cryptanalysis of hash functions. In Armin Biere and Carla P. Gomes, editors, SAT, volume 4121 of Lecture Notes in Computer Science, pages 102–115. Springer, 2006.

[30] Carl Pomerance. A tale of two sieves. In Notices Amer. Math. Soc. Citeseer, 1996.

[31] Prasad Raghavendra. Complexity of constraint satisfaction problems: Exact and approximate, 2010. Talk at the Institute for Advanced Study, video available at http://video.ias.edu/csdm/complexityconstraint.

[32] Oded Regev. Quantum computation and lattice problems. SIAM J. Comput., 33(3):738–760, 2004.

[33] Ron Rothblum. Homomorphic encryption: From private-key to public-key. In Yuval Ishai, editor, TCC, volume 6597 of Lecture Notes in Computer Science, pages 219–234. Springer, 2011.

[34] Federico Ricci-Tersenghi and Guilhem Semerjian. On the cavity method for decimated random constraint satisfaction problems and the analysis of belief propagation guided decimation algorithms. Journal of Statistical Mechanics: Theory and Experiment, 2009(09):P09001, 2009.

[35] Peter W. Shor. Algorithms for quantum computation: Discrete logarithms and factoring. In FOCS, pages 124–134. IEEE Computer Society, 1994.

[36] Victor Shoup. A Computational Introduction to Number Theory and Algebra. Cambridge University Press, 2006.

[37] Allan Sly. Computational transition at the uniqueness threshold. 
In FOCS, pages 287–296. IEEE Computer Society, 2010.

[38] Leslie Valiant. Graph-theoretic arguments in low-level complexity. In Mathematical Foundations of Computer Science 1977, pages 162–176, 1977.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "0_ia-2CzCR",
"year": null,
"venue": "Bull. EATCS 2012",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/53/49",
"forum_link": "https://openreview.net/forum?id=0_ia-2CzCR",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Truth vs. Proof in Computational Complexity",
"authors": [
"Boaz Barak"
],
"abstract": null,
"keywords": [],
    "raw_extracted_content": "Bulletin of the EATCS no. 108, pp. 130–142, October 2012. © European Association for Theoretical Computer Science.

The Logic in Computer Science Column
by
Yuri Gurevich
Microsoft Research
One Microsoft Way, Redmond WA 98052, USA
[email protected]

Truth vs. Proof in Computational Complexity

Boaz Barak*

[*Microsoft Research New England, Cambridge, MA, [email protected]. Adapted from a post at the “Windows on Theory” blog http://windowsontheory.org/]

Theoretical Computer Science is blessed (or cursed?) with many open problems. For some of these questions, such as the P vs. NP problem, it seems like it could be decades or more before they reach resolution. So, if we have no proof either way, what do we assume about the answer? We could remain agnostic, saying that we simply don’t know, but there can be such a thing as too much skepticism in science. For example, Scott Aaronson once claimed [2] that in other sciences P ≠ NP would by now have been declared a law of nature. I tend to agree. After all, we are trying to uncover the truth about the nature of computation, and this quest won’t go any faster if we insist on discarding all evidence that is not in the form of mathematical proofs from first principles.

But what other methods can we use to get evidence for questions in computational complexity? After all, it seems completely hopeless to experimentally verify even a non-asymptotic statement such as “There is no circuit of size 2^{100} that can solve 3SAT on 10,000 variables”. There is in some sense only one tool us scientists can use to predict the answer to open questions, and this is Occam’s Razor. That is, if we want to decide whether the answer to a certain question is Yes or No, we try to think of the simplest/nicest possible world consistent with our knowledge in which the answer is Yes, and the simplest such world in which the answer is No. 
If one of these worlds is much nicer than the other, that would suggest that it is probably the true one. For example, if assuming the answer to the question is “Yes” yields several implications that have been independently verified, while we must significantly contort the “No” world in order to make it consistent with current observations, then it is reasonable to predict that the answer is “Yes”.\nIn this essay, I attempt to do this exercise for two fascinating conjectures for which, unlike the P vs NP problem, there is no consensus on their veracity: Khot’s Unique Games Conjecture [23] and Feige’s Random 3SAT Hypothesis [16]. This is both to illuminate the state of the art on these particular conjectures, and to discuss the general issue of what can be considered as valid evidence for open questions in computational complexity.\n1 The Unique Games Conjecture\nKhot’s Unique Games Conjecture (UGC) [23] states that a certain approximation problem (known as “Unique Games” or UG) is NP-hard. I’ll define the UG problem below, but one benefit of using Occam’s Razor is that we can allow ourselves to discuss a closely related problem known as Small Set Expansion (SSE), which I find more natural than the UG problem. The SSE problem can be described as the problem of “finding a cult inside a social network”:¹ you’re given a graph G over n vertices, and you know that it contains a set S of at most, say, n/log n vertices that is “almost isolated” from the rest of the graph, in the sense that a typical member of S has 99% of its neighbors also inside S. The goal is to find S or any set S′ of similar size that is reasonably isolated (say having more than half of the neighbors inside it). 
Formally, for every ε, δ > 0 and number k, the computational problem SSE(ε, δ, k) is to distinguish, given a d-regular graph G = (V, E), between the case that there is a set S ⊆ V with |S| ≤ |V|/k and with |E(S, S)| ≥ (1−ε)δd|S|, and the case that for every S ⊆ V with |S| ≤ |V|/k, |E(S, S)| ≤ δ|S|. The following conjecture seems very closely related to the unique games conjecture:\nConjecture 1 (Small Set Expansion Hypothesis (SSEH) [33]). For every ε, δ > 0 there exists k such that SSE(ε, δ, k) is NP-hard.\n¹I use the term “cult” since we’re looking for a set in which almost all connections stay inside. In contrast, a “community” would correspond to a set containing a higher than expected number of connections. The computational problem associated with finding such a “community” is the densest k-subgraph problem [17, 9, 10], and it seems considerably harder than either the UG or SSE problems.\nAlmost all that I’ll say in this essay will hold equally well for the SSE and UG problems, and so the reader can pretend that the Unique Games Conjecture is the same as the Small Set Expansion Hypothesis without much loss in understanding. But for the sake of accuracy and completeness, I’ll now define the Unique Games problem, and explain some of its relations to the SSE problem. The UG problem is also parameterized by ε, δ, k. The input for the UG(ε, δ, k) problem is a set of m equations on n variables x1, . . . , xn over the alphabet [k] = {1, . . . , k}. Each equation has the form xi = πi,j(xj), where πi,j is a permutation over [k]. The computational task is to distinguish between the case that there is an assignment to the variables that satisfies at least (1−ε)m equations, and the case that no assignment satisfies more than δm of them. 
The formal statement of the Unique Games Conjecture is the following:\nConjecture 2 (Unique Games Conjecture (UGC) [23]). For every ε, δ > 0 there exists k such that UG(ε, δ, k) is NP-hard.\nOne relation between the UG and SSE problems is that we can always transform an instance Ψ of UG into the graph GΨ on the vertex set V = [n] × [k] containing an edge between the vertices (i, a) and (j, b) (for i, j ∈ [n] and a, b ∈ [k]) if and only if there is an equation in Ψ of the form xi = πi,j(xj) with a = πi,j(b). Now, an assignment σ ∈ [k]^n to the variables of Ψ will translate naturally into a set Sσ ⊆ V(GΨ) of n = |V|/k vertices containing the vertex (i, a) iff σi = a. It can be easily verified that edges with both endpoints in Sσ will correspond exactly to the equations of Ψ that are satisfied by σ. One can show that the UG problem has the same difficulty if every variable in Ψ participates in the same number d of equations, and hence the map Ψ ↦ GΨ transforms a UG(ε, δ, k) instance into an SSE(ε, δ, k) instance, and in fact maps the “Yes case” of UG(ε, δ, k) into the “Yes case” of SSE(ε, δ, k). Alas, this is not a reduction from UG to SSE, because it can map a “No” instance of UG into a “Yes” instance of SSE. In fact, the only reduction known between the problems is in the other direction: Raghavendra and Steurer [33] showed that SSE is no harder than UG and hence the UGC implies the SSEH. However, all the known algorithmic and hardness results hold equally well for SSE and UG [32, 34, 3, 14, 35, 8], strongly suggesting that these problems have the same computational difficulty. 
Hence in this essay I will treat them as equivalent.\nLet us now turn to exploring how natural is the world where the UGC (or SSEH) holds, versus the world in which it fails.\n1.1 The “UGC true” world.\nThere is one aspect in which the world where the UGC is true is very nice indeed. One of the fascinating phenomena of complexity is the dichotomy exhibited by many natural problems: they either have a polynomial-time algorithm (often with a low exponent) or are NP-hard, with very few examples in between. A striking result of Raghavendra [31] showed that the UGC implies a beautiful dichotomy for a large family of problems, namely the constraint-satisfaction problems (CSP). He showed that for every CSP P, there is a number αUG(P) (which we’ll call the UG threshold of P), such that for every ε > 0, the problem of maximizing the satisfied constraints of an instance of P can be approximated within αUG(P) − ε in polynomial (in fact, quasilinear [37]) time, while if the UGC is true, then achieving an αUG(P) + ε approximation is NP-hard.\nThis is truly a beautiful result, but alas there is one wrinkle in this picture: where you might expect that in this dichotomy the hard problems would all be equally hard, there is a subexponential algorithm for unique games [3] showing that if the UGC is true then some constraint satisfaction problems can be solved in time 2^{n^ε} for some ε ∈ (0, 1). While those sub-exponential problems are asymptotically hard, compared to “proper” NP-hard problems such as SAT, the input sizes when the asymptotic hardness “kicks in” will be pretty huge. 
For example, for the Small Set Expansion problem with the parameters above (99% vs 50% approximation), the [3] algorithm will take roughly 2^{n^{1/10}} steps, which is pretty efficient for graphs with up to 2^60 or so vertices.\nIn more qualitative terms, many hardness of approximation results for CSP’s actually use quasilinear reductions from SAT [30], and so let us define the SAT threshold of P to be the smallest number αSAT(P) for which achieving an αSAT(P) + ε approximation is NP-hard via a quasilinear reduction. In a “dream version” of the UGC, we may have expected that the NP-hardness for UG would use a quasilinear reduction as well, in which case (since Raghavendra’s result uses a quasilinear reduction) it would show that αUG(P) = αSAT(P) for all P. In particular, assuming the Exponential Time Hypothesis [22] (namely, the assumption that SAT can’t be solved in 2^{o(n)} time), that would have implied that getting a better than αUG(P) approximation for P takes 2^{n^{1−o(1)}} time—essentially as much time as taken by the brute force algorithm. However, the subexponential time algorithm for UG rules out the possibility of such a reduction, and shows that if the UGC is true, then at least for some CSP’s the SAT threshold will be strictly larger than the UG threshold, and the time to approximate them will grow much more gradually with the approximation quality; see Figure 1. Whether such a gradual time/quality tradeoff is more or less beautiful than a sharp jump is in the eyes of the beholder, but it does show that the “dichotomy” picture is more complex than what it initially appears to be.\nRaghavendra’s theorem is perhaps one reason to wish that the UGC was true, but how does the UGC mesh with current knowledge? 
One obvious way in which the current state of the art supports the conjecture is that we don’t know of any algorithm that refutes it by solving the SSE or UG problems (or any other problem they have been reduced to). However, by this we mean that there is no algorithm proven to solve the problem on all instances. So there has been an ongoing “battle” between papers showing algorithms that work for natural instances, and papers showing instances that fool natural algorithms. For example, the basic semidefinite program (which is a natural analog of the Goemans-Williamson semidefinite program for Max-Cut [20]) solves the problem on random or expanding input graphs [6]. On the other hand, it was shown that there are instances fooling this program [29] (and some generalizations [32, 28, 27]), along the way disproving a conjecture of Goemans and Linial. The subexponential algorithm mentioned above actually runs much faster on those instances [26], and so for some time I thought it might actually be a quasi-polynomial time algorithm. But it turned out there are instances (based on the Reed-Muller code) that require it to take (almost) subexponential time [11]. Nevertheless, the latest round in this battle was won by the algorithmic side: it turned out that all those papers showing hard instances utilized arguments that can be captured by a sum of squares formal proof system, which implies that the stronger “Sum-of-Squares”/“Lasserre” semidefinite programming hierarchy² can solve them in polynomial time [8]. 
The latest result also showed connections between the SSE problem and the problems of optimizing hypercontractive norms of operators (norms of the form p→q for q > p) and the injective tensor norm problem that arises in quantum information theory.\n²A semidefinite programming hierarchy is obtained by systematically strengthening a basic semidefinite program with additional constraints. Such hierarchies are parameterized by a number r of rounds, and optimizing over the r-th round of the hierarchy takes n^{O(r)} time; see [15] for a recent survey.\n1.2 The “UGC False” world.\nWhile the Unique Games Conjecture can fail in a myriad of ways, the simplest world in which it fails is that there is an efficient (say polynomial or quasipolynomial time) algorithm for the SSE and UG problems. And, given current knowledge, the natural candidate for such an algorithm comes from the “Sum-of-Squares” semidefinite programming hierarchy mentioned above. Indeed, given that SSE is such a fairly natural problem on graphs, it’s quite mind boggling that finding hard instances for it has been so difficult. Contrast this with seemingly related problems such as densest k-Subgraph, where random instances seem to be the hardest case. On the other hand, we already know that random instances are not the hardest ones for SSE, so perhaps those hard instances do exist somewhere and will eventually be found.\n[Figure: three plots of running-time exponent (ranging from 2^{n^{o(1)}} to 2^{n^{1±o(1)}}) versus approximation ratio, with the UG and SAT thresholds marked on each, labeled “Dream UGC True”, “Possible in both worlds”, and “Dream UGC False”.]\nFor every CSP P, we can achieve an approximation ratio α < αUG(P) in 2^{n^{o(1)}} (in fact Õ(n)) time, and it is believed (assuming the Exponential Time Hypothesis) that achieving approximation ratio of α > αSAT(P) takes 2^{n^{1−o(1)}} time, where αUG(P) and αSAT(P) are the UG and SAT thresholds defined in Section 1.1. Generally speaking, we do not know the running time required for achieving approximation ratios in the interval (αUG(P), αSAT(P)) and it can range between the three scenarios depicted above. However, the subexponential time algorithm for UG rules out the “dream UGC true” picture for at least some CSP’s. Note that sometimes the UG threshold and SAT threshold coincide (e.g., for 3SAT they both equal 7/8). For such CSP’s, regardless of whether the UGC is true, it is believed that the time/ratio curve has a discontinuous jump from running time 2^{n^{o(1)}} to time 2^{n^{1−o(1)}}.\nFigure 1: Possible running time exponents vs. approximation quality curves for a constraint satisfaction problem.\nThere is actually a spectrum of problems “in the unique games flavor” including not just SSE and UG, but also Max-Cut, Balanced Separator and many others. The cleanest version of the “UGC False” world would be that all of these are significantly easier than NP-hard problems, but whether they are all equal in difficulty is still unclear. In particular, while in the “UGC False” world there will be a 2^{n^{o(1)}}-time approximation for some CSP’s beyond the UG threshold, even the qualitative behavior of the running time/approximation ratio curve is not known (i.e., does it look like the middle or rightmost scenario in Figure 1?).\n1.3 A personal bottom line.\nRegardless of whether the UGC is true, it has been an extremely successful conjecture in that it led to the development of many new ideas and techniques that have found other applications. I am certain that it will lead to more such ideas before it is resolved. There are a number of ways that we could get more confidence in one of the possibilities. 
Interestingly, both in the “UGC True” and “UGC False” worlds, our current best candidate for the algorithm meeting the time/ratio curve is the “Sum of Squares” semidefinite programming hierarchy. So, in my mind, finding out the approximation quality for SSE of, say, polylog(n) rounds (corresponding to n^{polylog(n)} running time) of this hierarchy is a pivotal question. Finding instances that fool this algorithm would go a long way toward boosting confidence in the “UGC True” case, especially given that doing so would require using new ideas beyond sum-of-squares arguments. Another way to support the UGC is to try to come up with candidate NP-hardness reductions (even without analysis or assuming some gadgets that have yet to be constructed) for proving it, or to show NP-hardness for problems such as Max-Cut that are “morally close” to the UG/SSE questions. On this latter point, there are some hardness results for problems such as 3LIN over the reals [25], Lp subspace approximation [19], and subspace hypercontractivity [8] that have some relation to the UG/SSE, but whether they can be thought of as having “morally equivalent” complexity to UG/SSE is still very much in question. To get confidence in the “UGC False” case we can try to show that a smallish number of rounds of the sum-of-squares hierarchy can solve the SSE on a larger family of instances than what is currently known. A pet question of mine is to show that this algorithm works on all Cayley graphs over the Boolean cube. I think that showing this would require ideas that may enable solving the general case as well. Indeed, my current guess is that the UGC is false and that the sum-of-squares algorithm does solve the problem in a reasonable (e.g., quasipolynomial) time.\n2 Feige’s Random 3SAT Hypothesis\nUnlike Khot’s conjecture, Feige’s Hypothesis (FH) [16] deals with average-case complexity. 
While a counting argument easily shows that with high probability a random 3SAT formula on n variables and 1000n clauses will not be (even close to) satisfiable, the hypothesis states that there is no efficient algorithm that can certify this fact. Formally, the conjecture is defined as follows:\nConjecture 3 (Feige’s Hypothesis, weak version³ [16]). For every ε > 0, d ∈ N, and polynomial-time algorithm A that can output either “SAT” or “UNSAT”, it holds that for sufficiently large n, either\n• Pr[A(ϕ) = UNSAT] < 1/2, where ϕ is a random 3SAT formula with n variables and dn clauses,\nor\n• There exists a formula ϕ on n variables such that there is an assignment satisfying ≥ 1−ε fraction of ϕ’s clauses, but A(ϕ) = UNSAT.\nThat is, any one-sided error algorithm for 3SAT (i.e., an algorithm that can sometimes say SAT on an unsatisfiable instance, but will never say UNSAT on a nearly satisfiable one) will (wrongly) answer SAT on a large fraction of the input formulas. Feige’s hypothesis (and variants of similar flavor [7, 1]) have been used to derive various hardness of approximation results. Applebaum, Wigderson and I [4] also used related (though not equivalent) assumptions to construct a public-key cryptosystem, with the hope that basing cryptosystems on such combinatorial problems will make them immune to algebraic and/or quantum attacks. We note that while the conjecture was originally stated for 3SAT, in a recent manuscript with Kindler and Steurer [12] we show that it can be generalized to every constraint satisfaction problem. Personally I find the k-XOR predicate (i.e., noisy sparse linear equations) to be the cleanest version.\nThere is added motivation for trying to study heuristic evidence (as opposed to formal proofs) for Feige’s hypothesis. 
Unlike the UGC, which in principle can be proven via a PCP-type NP-hardness reduction of the type we’ve seen before, proving FH seems way beyond our current techniques (even if we’re willing to assume standard assumptions such as P ≠ NP, the existence of one-way functions, or even the hardness of integer factoring). Thus if Feige’s hypothesis is true, our only reasonable hope to show that this holds is by a physics-like process of accumulating evidence, rather than by a mathematical proof. Let us now try to examine this evidence:\n2.1 The “FH True” world.\nOne natural way to refute Feige’s Hypothesis would be to show a 0.88 (worst-case) approximation algorithm for 3SAT. This is an algorithm B that, given a formula for which an α fraction of the clauses can be satisfied, returns an assignment satisfying 0.88α of them. In particular, given as input a satisfiable formula, B must return an assignment satisfying at least a 0.88 fraction of the clauses. Thus, we can transform B into a one-sided error algorithm A that answers SAT on an instance if and only if B returns such a 0.88-satisfying assignment for it. Since in a random 3SAT formula, the maximum fraction of satisfiable clauses is very close to 7/8 = 0.875, the algorithm A would refute FH. However, Håstad’s seminal result [21] shows that 3SAT doesn’t have such a 0.88-approximation algorithm, hence giving at least some evidence for the “FH True” world.\n³I call this the weak version since Feige also phrased a version of the hypothesis with ε = 0. However, I prefer the ε > 0 version as it is more robust and can be applied to other predicates such as XOR.\nFeige showed that his hypothesis implies several other such hardness of approximation results, including some not known before to hold under P ≠ NP; deriving such results was Feige’s motivation for the hypothesis. 
But the connection also works in the other direction: verifying the hardness-of-approximation predictions of FH can be viewed as giving evidence to the “FH True” world, particularly when (as was the case in [24]) the hardness of approximation results were obtained after Feige’s predictions.\nOf course, these hardness of approximation results only relate to worst-case complexity while the average-case problem could be potentially much easier. We do note however that in many of these cases, these hardness results are believed to hold even with respect to subexponential (e.g., 2^{o(n)} or perhaps 2^{n^{1−Ω(1)}}) time algorithms. While this doesn’t imply average-case hardness, it does mean that the set of hard instances cannot be too small. Moreover, the most natural candidate algorithms to refute Feige’s hypothesis—the same sum-of-squares relaxations mentioned above—are known [18, 36] not to succeed in certifying unsatisfiability of random instances. Also, as Feige shows, this problem is related to random noisy 3XOR equations, which is a sparse version of the known and well studied Learning Parity with Noise problem (see also discussion in [7, 4]).\nThe world in which the generalized form [12] of FH holds is particularly nice in that there is a single algorithm (in fact, the same Goemans-Williamson semidefinite program from above) that achieves the optimal performance on every random constraint-satisfaction problem.\n2.2 The “FH False” world.\nIf Feige’s Hypothesis is false, then there should be an algorithm refuting it. No such algorithm is currently known. This could be viewed as significant evidence for FH, but the question is how hard people have tried. Random 3-SAT instances (and more generally k-SAT or other CSP’s) are actually widely studied and are of interest to physicists, and (with a few hundred variables) are also part of SAT solving competitions. 
But the instances studied are typically in the satisfiable regime where the number of clauses is sufficiently small (e.g., less than ∼4.26n for 3-SAT) so solutions will actually exist. The survey propagation algorithm [13] does seem to work very well for satisfiable random 3SAT instances, but it does not seem to be applicable in the unsatisfiable range. Survey propagation also seems to fail on other CSPs, including k-SAT for k > 3 [5].\nWhile not known to be equivalent, there is a variant of FH where the goal is not to certify unsatisfiability of a random 3SAT but to find a planted nearly satisfying assignment of a random 3XOR instance. Such instances might be more suitable for computational challenges (a la the RSA Factoring challenge) as well as SAT solving competitions. It would be interesting to study how known heuristics fare on such inputs.\n2.3 A personal bottom line.\nUnlike worst-case complexity, our understanding of average-case complexity is very rudimentary. This has less to do with the importance of average-case complexity, which is deeply relevant not just for studying heuristics but also for cryptography, statistical physics, and other areas, and more to do with the lack of mathematical tools to handle it. In particular, almost every hardness reduction we know of uses gadgets which end up skewing the distribution of instances.\nI believe studying Feige’s Hypothesis and its ilk (including the conjectures that solution-space shattering implies hardness [5]) offer some of our best hopes for more insight into average-case complexity. 
I don’t know if we will be able to establish a similar web of reductions to the one we have for worst-case complexity, but perhaps we can come up with meta-conjectures or principles that enable us to predict where the line between easiness and hardness will be drawn in each case. We can study the truth of such conjectures using a number of tools, including not just algorithms and reductions but also integrality-gap proofs, physics-style analysis of algorithms, worst-case hardness-of-approximation results, and actual computational experiments.\nAs for the truth of Feige’s Hypothesis itself, while it would be premature to use an FH-based encryption to protect state secrets, I think the current (admittedly inconclusive) evidence points in the direction of the hypothesis being true. It definitely seems as if refuting FH would require a new and exciting algorithmic idea. With time, if Feige’s Hypothesis receives the attention it deserves then we can get more confidence in its veracity, or learn more about algorithms for average-case instances.\nParting thoughts\nTheoretical Computer Science is sometimes criticized for its reliance on unproven assumptions, but I think we’ll need many more of those if we want to get further insights into areas such as average-case complexity. Sure, this means we have to live with the possibility that our assumptions turn out to be false, just as physicists have to live with the possibility that future experiments might require a revision of the laws of nature. But that doesn’t mean that we should let unconstructive skepticism paralyze us. It would be good if our field had more explicit discussion of what kinds of results can serve as evidence for the hardness or easiness of a computational problem. I deliberately chose two questions whose answer is yet unclear, and for which there is reasonable hope that we’ll learn new insights in the coming years that may upend current beliefs. 
I hope that as such results come to light, we can reach a better understanding of how we can predict the answer to questions for which we have yet no proofs.\nReferences\n[1] Noga Alon, Sanjeev Arora, Rajsekar Manokaran, Dana Moshkovitz, and Omri Weinstein. Inapproximability of densest k-subgraph from average case hardness. Manuscript, available at www.cs.princeton.edu/~rajsekar/papers/dks.pdf, 2011.\n[2] Scott Aaronson. Has there been progress on the P vs. NP question?, 2010. Presentation at MIT CSAIL student workshop. Slides available at http://www.scottaaronson.com/talks/pvsnpcsw.ppt.\n[3] Sanjeev Arora, Boaz Barak, and David Steurer. Subexponential algorithms for unique games and related problems. In FOCS, pages 563–572, 2010.\n[4] Benny Applebaum, Boaz Barak, and Avi Wigderson. Public-key cryptography from different assumptions. In STOC, pages 171–180, 2010.\n[5] Dimitris Achlioptas and Amin Coja-Oghlan. Algorithmic barriers from phase transitions. In FOCS, pages 793–802, 2008.\n[6] Sanjeev Arora, Subhash Khot, Alexandra Kolla, David Steurer, Madhur Tulsiani, and Nisheeth K. Vishnoi. Unique games on expanding constraint graphs are easy: extended abstract. In STOC, pages 21–28, 2008.\n[7] Michael Alekhnovich. More on average case vs approximation complexity. Computational Complexity, 20(4):755–786, 2011. Preliminary version in FOCS 2003.\n[8] Boaz Barak, Fernando G. S. L. Brandão, Aram Wettroth Harrow, Jonathan A. Kelner, David Steurer, and Yuan Zhou. Hypercontractivity, sum-of-squares proofs, and their applications. In STOC, pages 307–326, 2012.\n[9] Aditya Bhaskara, Moses Charikar, Eden Chlamtac, Uriel Feige, and Aravindan Vijayaraghavan. Detecting high log-densities: an O(n^{1/4}) approximation for densest k-subgraph. In STOC, pages 201–210, 2010.\n[10] Aditya Bhaskara, Moses Charikar, Aravindan Vijayaraghavan, Venkatesan Guruswami, and Yuan Zhou. 
Polynomial integrality gaps for strong SDP relaxations of densest k-subgraph. In SODA, pages 388–405, 2012.\n[11] Boaz Barak, Parikshit Gopalan, Johan Håstad, Raghu Meka, Prasad Raghavendra, and David Steurer. Making the long code shorter. In FOCS, 2012.\n[12] Boaz Barak, Guy Kindler, and David Steurer. On the optimality of relaxations for average-case and generalized constraint satisfaction problems. Manuscript, available from http://www.boazbarak.org/research.html, 2012.\n[13] Alfredo Braunstein, Marc Mézard, and Riccardo Zecchina. Survey propagation: An algorithm for satisfiability. Random Struct. Algorithms, 27(2):201–226, 2005.\n[14] Boaz Barak, Prasad Raghavendra, and David Steurer. Rounding semidefinite programming hierarchies via global correlation. In FOCS, pages 472–481, 2011.\n[15] Eden Chlamtac and Madhur Tulsiani. Convex relaxations and integrality gaps, 2010. Chapter in Handbook on Semidefinite, Cone and Polynomial Optimization.\n[16] Uriel Feige. Relations between average case complexity and approximation complexity. In STOC, pages 534–543, 2002.\n[17] Uriel Feige, David Peleg, and Guy Kortsarz. The dense k-subgraph problem. Algorithmica, 29(3):410–421, 2001.\n[18] Dima Grigoriev. Linear lower bound on degrees of Positivstellensatz calculus proofs for the parity. Theor. Comput. Sci., 259(1-2):613–622, 2001.\n[19] Venkatesan Guruswami, Prasad Raghavendra, Rishi Saket, and Yi Wu. Bypassing UGC from some optimal geometric inapproximability results. In SODA, pages 699–717, 2012.\n[20] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM, 42(6):1115–1145, 1995.\n[21] Johan Håstad. Some optimal inapproximability results. J. ACM, 48(4):798–859, 2001.\n[22] Russell Impagliazzo, Ramamohan Paturi, and Francis Zane. Which problems have strongly exponential complexity? J. Comput. Syst. Sci.
, 63(4):512–530, 2001.\n[23] Subhash Khot. On the power of unique 2-prover 1-round games. In STOC, pages 767–775, 2002.\n[24] Subhash Khot. Ruling out PTAS for graph min-bisection, densest subgraph and bipartite clique. In FOCS, pages 136–145, 2004.\n[25] Subhash Khot and Dana Moshkovitz. NP-hardness of approximately solving linear equations over reals. In STOC, pages 413–420, 2011.\n[26] Alexandra Kolla. Spectral algorithms for unique games. In IEEE Conference on Computational Complexity, pages 122–130, 2010.\n[27] Subhash Khot, Preyas Popat, and Rishi Saket. Approximate Lasserre integrality gap for unique games. In APPROX-RANDOM, pages 298–311, 2010.\n[28] Subhash Khot and Rishi Saket. SDP integrality gaps with local ℓ1-embeddability. In FOCS, pages 565–574, 2009.\n[29] Subhash Khot and Nisheeth K. Vishnoi. The unique games conjecture, integrality gap for cut problems and embeddability of negative type metrics into ℓ1. In FOCS, pages 53–62, 2005.\n[30] Dana Moshkovitz and Ran Raz. Two-query PCP with subconstant error. J. ACM, 57(5), 2010.\n[31] Prasad Raghavendra. Optimal algorithms and inapproximability results for every CSP? In STOC, pages 245–254, 2008.\n[32] Prasad Raghavendra and David Steurer. Integrality gaps for strong SDP relaxations of unique games. In FOCS, pages 575–585, 2009.\n[33] Prasad Raghavendra and David Steurer. Graph expansion and the unique games conjecture. In STOC, pages 755–764, 2010.\n[34] Prasad Raghavendra, David Steurer, and Prasad Tetali. Approximations for the isoperimetric and spectral profile of graphs and related parameters. In STOC, pages 631–640, 2010.\n[35] Prasad Raghavendra, David Steurer, and Madhur Tulsiani. Reductions between expansion problems. In IEEE Conference on Computational Complexity, pages 64–73, 2012.\n[36] Grant Schoenebeck. Linear level Lasserre lower bounds for certain k-CSPs. 
In FOCS, pages 593–602, 2008.\n[37] David Steurer. Fast SDP algorithms for constraint satisfaction problems. In SODA, pages 684–697, 2010.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xgkvGweaBG_",
"year": null,
"venue": "Bull. EATCS 2015",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/365/347",
"forum_link": "https://openreview.net/forum?id=xgkvGweaBG_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Automata Tutor and what we learned from building an online teaching tool",
"authors": [
"Loris D'Antoni",
"Matthew Weavery",
"Alexander Weinert",
"Rajeev Alur"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "TheEducation Column\nby\nJurajHromkovi ˇc\nDepartment of Computer Science\nETH Zürich\nUniversitätstrasse 6, 8092 Zürich, Switzerland\[email protected]\nAutomata Tutor and what we learned from\nbuilding an online teaching tool\nLoris D’Antoniy, Matthew Weavery\nAlexander Weinertz, Rajeev Alury\ny: University of Pennsylvania z: RWTH Aachen University\nAbstract\nAutomata Tutor is an online tool that helps students learn basic con-\ncepts in theory of computation, such as finite automata and regular expres-\nsions. The tool provides personalized feedback when students submit incor-\nrect solutions, and also helps teachers managing large classes by automat-\nically grading homework assignments. A utomata Tutor has already been\nused by more than 2,000 students at 12 di \u000berent Universities in 4 di \u000berent\ncontinents.\nIn this paper, we summarize our experience in building such a system.\nWe describe the algorithms that are used to produce personalized feedback,\nand then evaluate the tool and its features through extensive user studies\ninvolving hundreds of participants.\n1 Introduction\nBoth online and o \u000fine, student enrollment in Computer Science courses is rapidly\nincreasing. For example, enrollment in introductory CS courses has roughly\ntripled at UC Berkeley, Stanford and the University of Washington in the past\ndecade [12]. In addition, Computer Science is the most frequently taken MOOC\nsubject online [17, 10]. With roughly a thousand students in a lecture hall, or tens\nof thousands following a MOOC online, approaches like manual grading and in-\ndividual tutoring do not scale, yet students still need appropriate guidance through\nspecific feedback to progress and to overcome conceptual di \u000eculties. 
Many tutoring systems have been proposed to assist students learning various aspects of computer science such as LISP programming [3] and SQL queries [11]; however, these tools are typically not able to grade the student solution or provide actionable feedback. Recently there has been work on applying computing power to teaching tasks including grading programming problems [15] as well as generating and grading geometric construction problems [8, 9]. The tool we describe in this paper falls in this line of work.\nAutomata Tutor is an online tool (http://www.automatatutor.com) that helps students learn basic concepts in theory of computation, such as finite automata and regular expressions. The tool provides personalized feedback when students submit incorrect solutions, and also helps teachers manage large classes by automatically grading homework assignments. The techniques we use to produce these grades and feedback messages are based on algorithmic techniques that are grounded in program synthesis, logic, and the theory of formal languages [2, 6]. Real courses at 14 different universities with more than 3,000 students in 4 different continents have already used Automata Tutor.\nIn this paper we summarize our experience in building such a system. We start by describing the features Automata Tutor offers to help both students and instructors (Sec. 2), and provide a brief history of how the tool became what it is now (Sec. 3). We then describe the experiments, surveys, and user studies that we ran to assess the overall user satisfaction and the quality of grades, feedback messages, and user interfaces (Sec. 4). Finally, we discuss what we learned from the data we collected, and from three years of experience building the tool (Sec. 5).\nOther automata theory education tools\nThere are several strategies for teaching automata and other formalisms in computer science education. 
Our system is the first online tool that is able to grade students’ solutions and can provide actionable feedback rather than just counterexamples.\nThe other notable tools for teaching DFA constructions are JFLAP and Gradiance. JFLAP [13] allows students to author and simulate automata and is widely used in classrooms. Instructors can test student models against a set of input strings and expected results. Recently JFLAP has been equipped with an interface that allows students to test their solution DFA on problems for which they only have an English description of the language [14]. To do this the student writes an imperative program that matches the given language description. If this program is not equivalent to the student’s DFA, JFLAP automatically produces a counterexample. Gradiance¹ is a learning environment for database, programming and automata concepts. It focuses on providing tests based on multiple-choice questions. These tools either do not support a way of drawing DFAs, or do not have a high-level representation of a problem and can therefore not provide grades and feedback about the conceptual problems with a student’s submission.\nAdditional tools are available for problems related to DFA constructions. In ProofChecker [16], students prove the correctness of a DFA by labelling the language described by each state: given a DFA, the student enters “state conditions” (functions or regular expressions) describing the language of each individual state. The system then tests these conditions against a finite set of strings. In DeduceIt [7], students solve assignments on logical derivations. DeduceIt is then able to grade such assignments and provide incremental feedback. Visualizations such as animations of algorithms or depictions of transformations between automata and equivalent regular expressions exist [4].\n¹ http://www.newgradiance.com/\nFigure 1: A student’s view of an attempt at solving a DFA construction problem\n
These tools do not support course management, grading, and typically provide only counterexample-based feedback. Moreover, the support for most of these tools has been discontinued and they are not available to the public anymore.\nTo the best of our knowledge, Automata Tutor is the first tool for teaching formal languages that includes course management, grading, and feedback generation. Moreover, the features of Automata Tutor have been thoroughly evaluated using multiple experiments and user studies [2, 6].\n2 Automata Tutor in a nutshell\nAutomata Tutor is an online education tool created to help students learn basic concepts in theory of computation. In particular, it provides an interface for students to draw the DFA or NFA corresponding to a given description, and receive instantaneous feedback about their submission. The tool also supports regular expressions and NFA-to-DFA constructions. In this section we focus on how Automata Tutor is helpful to both teachers and their students.\n2.1 A tool for students\n[Figure content: problem description “Twice ab: Construct a DFA over the alphabet {a, b} that recognizes all strings in which ab appears exactly twice”; 1st attempt; feedback “Incorrect: Your solution accepts the following set of strings: {s | ab appears in s at least twice}”; grade 6/10.]\nFigure 2: A student’s first attempt at solving a given problem. The user receives personalized feedback.\nAutomata Tutor offers a structured, easy-to-use interface for students to practice drawing automata matching a particular description, as shown in Figure 1. While allowing students to quickly draw any automaton, the tool also enforces that all automata are legal, helping students better understand the concepts. 
For example, when a student adds a new node to a DFA, edges from the node for each symbol in the alphabet are added automatically and cannot be deleted.\nUpon submitting an automaton, Automata Tutor provides students with instantaneous feedback to help them understand and fix their mistakes. Consider a student attempting to draw a DFA accepting the language of all strings in which “ab” appears exactly twice. If the student draws and submits an automaton that accepts a closely related language, such as the language of all strings in which “ab” appears at least twice, the tool provides a hint describing the difference, as is done in Figure 2. After receiving the feedback message, the student can submit a new attempt (Figure 3). In this case, the student’s submission is structurally very similar to a correct automaton, and the tool suggests how the student might modify the automaton. If the student submits a correct automaton, as shown in Figure 4, the tool assigns full score. Lastly, the tool can also provide a counterexample string on which the student’s automaton behaves incorrectly. [6] provides a thorough explanation of feedback for DFA construction problems.\n[Figure content: 2nd attempt; feedback “Incorrect: You need to change the acceptance condition of one state”; grade 9/10.]\nFigure 3: The student’s second attempt at solving the problem. They are given a hint at how to change their automaton to the correct one.\n[Figure content: 3rd attempt; feedback “Correct!”; grade 10/10.]\nFigure 4: The student’s final attempt at solving the problem. This attempt produces the correct automaton.\nFor NFA construction problems, the tool provides counterexample feedback on incorrect automata. 
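The counterexample feedback described above reduces to finding a shortest string on which the student's automaton and a reference automaton disagree. The sketch below is a minimal illustration of that idea, a breadth-first search over the product of two DFAs built for the running "ab appears exactly twice" example; it is not the tool's actual implementation, whose grading and hint generation are described in [2, 6].

```python
from collections import deque

def count_ab_dfa(accept):
    """A DFA over {a, b} whose state (count, last_a) tracks how many times
    'ab' has occurred so far (saturating at 3) and whether the previous
    character was 'a'. `accept` decides which counts are accepting."""
    trans = {}
    for count in range(4):
        for last_a in (False, True):
            trans[((count, last_a), 'a')] = (count, True)
            trans[((count, last_a), 'b')] = (min(count + last_a, 3), False)
    start = (0, False)
    accepting = {(c, la) for c in range(4) for la in (False, True) if accept(c)}
    return start, trans, accepting

def distinguishing_string(d1, d2, alphabet=('a', 'b')):
    """Breadth-first search over the product automaton: returns a shortest
    string accepted by exactly one of the two DFAs, or None if equivalent."""
    (s1, t1, f1), (s2, t2, f2) = d1, d2
    queue, seen = deque([(s1, s2, '')]), {(s1, s2)}
    while queue:
        q1, q2, w = queue.popleft()
        if (q1 in f1) != (q2 in f2):
            return w
        for c in alphabet:
            nxt = (t1[(q1, c)], t2[(q2, c)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((*nxt, w + c))
    return None

solution = count_ab_dfa(lambda n: n == 2)   # "ab appears exactly twice"
attempt  = count_ab_dfa(lambda n: n >= 2)   # "ab appears at least twice"
print(distinguishing_string(solution, attempt))  # -> ababab
```

Because the search is breadth-first, the first disagreement found is a shortest one, which keeps the counterexample readable for students.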
In addition, it gives students feedback if they did not take advantage of nondeterminism or if they have not given the minimal automaton, an example being “There exists an equivalent NFA with 1 fewer states and 2 fewer transitions.”\n2.2 A tool for teachers\nAutomata Tutor automates grading in a fair way that is comparable to grades assigned by a human grader, saving teachers time and energy. To calculate grades for a student’s automaton, the tool estimates the percentage of strings on which the automaton fails and calculates the minimum number of syntactic edits the submission needs to accept the correct language. For DFA submissions, a distance is also calculated between the language of the student’s automaton and the correct language. [2] describes the grading algorithm for DFAs in more detail. For NFA submissions accepting the correct language, the tool also takes into account how much larger the student’s submission is than the solution.\nThe tool is also flexible, allowing teachers to specify their own assignments and assign them to their students. Instructors can create a course, and have their students register for it by distributing its course ID and password. They can then create problem sets for the course with their own descriptions and solutions, and can choose to specify a limit to the number of attempts each student has for each problem. Ultimately, teachers can download the grade information for their students’ submissions. The course management interface is shown in Figure 5.\nFigure 5: An instructor’s view of one of their courses\n3 The evolution of Automata Tutor\nIn its three years of existence, Automata Tutor has gone from a prototype, which was never used in real classrooms, to a tool that is widely accepted and used in multiple universities all over the world. 
We give an overview of the development history of the tool in this section and show the driving forces and ideas behind its development.\n3.1 The precursor\nA basic version of Automata Tutor was built to test the algorithms presented in [5]. This paper presented a logic for describing languages that could also be nonregular, and the algorithms presented in [5] could prove or disprove regularity for many such languages. In this version of Automata Tutor, an administrator could use such a logic to pose automata construction problems. Students could then solve the problems and receive a counterexample when the solutions they submitted were incorrect.\nLimitations of the precursor\nSince the goal of the tool at this point was that of evaluating algorithms, the interface was still immature and drawing automata was cumbersome. Moreover, the feedback was limited to counterexamples. Due to these factors, this version of the tool was not deployed in real classes.\n3.2 Automata Tutor 1.0\nThe precursor of Automata Tutor just provided students with a counterexample if their attempt at solving a problem was incorrect. Our goal in developing Automata Tutor 1.0 was to also provide students with a grade for their attempt as well as with feedback about how to improve and correct their automaton. In order to do so, the tool compared the student’s attempt with the solution using three different metrics. These metrics are outlined in [2] and further discussed in Section 4.1. In addition to a numerical grade the tool provided actionable feedback expressed in plain English. We briefly described the techniques used to produce the feedback in Section 2, and a study of the effectiveness of feedback messages is presented in [6].\nAutomata Tutor 1.0 only supported DFA construction problems. In this type of problem, the students are given a problem statement of the form “Construct a DFA that accepts all the strings in the language . . . 
” The students could construct a DFA graphically in their browser and submit their attempt to the system. The system would then provide a grade and a feedback message.\nIn order to split the load as well as separate the concerns of displaying problems to the user and the actual computation, we split Automata Tutor 1.0 into two components: a web-facing frontend built in Scala and Lift, and a backend built using C# and ASP.NET. Another benefit of this architecture was that it would be possible to support other kinds of problems using the same frontend. Even though this did not happen in Automata Tutor 1.0, this architecture would prove very useful when building the successor, Automata Tutor 2.0. A typical communication between the frontend and the backend is shown in Fig. 6.\nFigure 6: Typical workflow of a student solving a problem\nLimitations of Automata Tutor 1.0\nAlthough Automata Tutor 1.0 served its purpose of evaluating the techniques from [2] very well, there were still some limitations, some of which were technical, while others were conceptual. The technical problems included the fact that the tool relied on features that were only supported by the Google Chrome browser. Additionally, although the interface for students to attempt the posed problems was improved from the one present in the previous version of the tool, it was still inconvenient to use.\nChief among the conceptual shortcomings was the fact that DFA construction problems were the only kind of supported problems. Whereas the separation of front- and backend already laid the foundation for the extension to other kinds of problems, it would have been non-trivial to extend the frontend to handle these new problem types. Also, using the tool required strong collaboration between instructors and administrators, as there was no way to create courses and assign problems only to certain groups of students. 
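One of the grading signals mentioned in Sec. 2.2, the estimated percentage of strings on which the submitted automaton fails, can be illustrated by brute-force enumeration over all short strings. The two DFAs below are hypothetical examples, and this naive sketch is not the language-level algorithm of [2].

```python
from itertools import product

# Two hypothetical DFAs over {a, b}, encoded as (start, transitions, accepting).
# Solution language: strings containing at least one 'b'.
# Student attempt:   strings ending in 'b' (a strict subset of the solution).
solution = (0, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 1}, {1})
attempt  = (0, {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}, {1})

def accepts(dfa, word):
    """Run the DFA on `word` and report whether it ends in an accepting state."""
    start, trans, accepting = dfa
    state = start
    for ch in word:
        state = trans[(state, ch)]
    return state in accepting

def mistake_rate(sol, att, max_len=8, alphabet='ab'):
    """Fraction of all strings of length <= max_len that the attempt
    classifies differently from the solution."""
    total = wrong = 0
    for n in range(max_len + 1):
        for w in map(''.join, product(alphabet, repeat=n)):
            total += 1
            wrong += accepts(sol, w) != accepts(att, w)
    return wrong / total

print(round(mistake_rate(solution, attempt), 3))  # -> 0.483
```

In practice the cutoff length and the weighting of string lengths matter; as described in Sec. 2.2, the tool combines this kind of signal with the minimum number of syntactic edits rather than using it alone.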
These limitations were addressed during the development of Automata Tutor 2.0.\n3.3 Automata Tutor 2.0\nAfter we deployed and successfully used Automata Tutor 1.0 in 3 courses at 3 universities with about 400 students, the tool’s limitations became apparent. We decided to reimplement the frontend with a heavier focus on scalability in terms of involvement of the administrators, and flexibility in terms of creating different types of problems.\nWe addressed the former problem by implementing course management and assigning users one of two roles: student or instructor. The frontend now has a concept of courses, problems and problem sets. While students can only enroll in courses and solve the problems posed in these courses, instructors are allowed to create courses and problems, and collect grades at the end of a course. A screenshot of the interface for course management can be seen in Fig. 5.\nAutomata Tutor 2.0 also allows for easy extensibility to handle new problem types. A developer simply has to provide views that allow instructors to create, edit and delete a problem as well as a view that allows a student to solve a problem. Furthermore, the developer has to implement a web service offering a grading engine.\nWe implemented three new problem types in addition to the already existing DFA construction problems using this abstraction. Version 2.0 of Automata Tutor thus supports the construction of DFAs, NFAs, and regular expressions from a description of a regular language in plain English, as well as the construction of a DFA that is equivalent to a given NFA. Since all these formalisms describe the set of regular languages, we could reuse parts of the existing grading and feedback engines.\nThese changes were widely accepted and are now in constant use. Instructors deeply appreciated being able to manage their courses themselves instead of having to work with the administrators. 
We rolled the new frontend out at 14 universities in four continents. It is used by 3,000 students and has already graded more than 40,000 solutions.\nLimitations of Automata Tutor 2.0\nAlthough the tool is being appreciated by the community, there is still room for many improvements. In particular, in its current version, the tool only supports DFA, NFA, and regular expression problems. We discuss some future directions in Sec. 5.\n4 Experience report\nThroughout the process of developing Automata Tutor, we have conducted a number of studies to test the tool’s success at helping students learn to construct automata. We summarize what we learned from them in this section.\n4.1 Results about automatic grading\nThrough a number of user studies involving over 500 students at three universities², we used students’ responses on a 5-point Likert scale to measure\n- how fair students feel the grades assigned by the tool are;\n- how meaningful the grades assigned by the tool are.\n² University of Illinois Urbana-Champaign, University of Pennsylvania, Reykjavik University\nThe general consensus is that students find that the partial grades assigned by the tool are both fair and meaningful. For DFA submissions, we additionally compared grades assigned by the tool to those given by human graders, and found they are comparable, although the tool is more consistent at assigning the same grade to the same solution. [2] contains a more detailed discussion on automatic grading in Automata Tutor.\n4.2 Results about feedback\nThe user studies all agreed that, when it comes to feedback, simpler is better. We measured this by splitting students into three groups, each receiving a different type of feedback: binary feedback (yes/no), counterexamples, and plain English hints. 
Afterwards, we had the students fill out a survey with a 5-point Likert scale asking about\n- how useful the feedback is;\n- how helpful the feedback is for understanding mistakes;\n- how helpful the feedback is for getting the correct solution;\n- how confusing the feedback is.\nWhile too much feedback may be detrimental to students, having no feedback is worse; students who were only told if their submission was correct or not were slower at solving problems and did fewer practice problems than those receiving feedback. [6] provides a comprehensive report on this user study.\nFor feedback pertaining to the size of a student’s NFA, there was no improvement in performance from sharing how many fewer states and transitions were in the solution than just showing “A smaller NFA exists.” Interestingly, in both cases, about half of students’ subsequent submissions no longer accepted the correct language.\n4.3 Results about usability\nWe compared the user surveys with the old (1.0) and new (2.0) drawing interfaces and measured\n- how easy students thought it was to draw using the interface;\n- how predictable the behavior of the interface was.\nWe used a 5-point Likert scale for each metric, concluding that the new interface is significantly easier to use, and is significantly better at behaving as users expected. To both questions, the median student response for the new interface was a 5: the highest score. This is particularly meaningful, as a tutorial accompanied the old interface while no instructions are provided for the new interface.\n4.4 What we learned from the instructors\nWe summarize the key (subjective) observations by the three instructors who have taught theory of computation courses multiple times before and have used Automata Tutor during our experiments. First, the requirement that the homework had to be submitted using the tutoring tool ensured students’ participation. 
Once students started interacting with the software, they were very much engaged with the course material. Second, the average grade on the homework assignments offered in the course increased when using Automata Tutor. Third, the teaching assistants were very happy that the tool did the grading for them. Lastly, while not profound, we learned that instructors enjoy the tool and want more of this type of work: “This is how the construction of finite automata that recognize regular languages should be taught in a modern way! I wish I had similar tools for all the topics I need to cover” [1].\n5 Discussion and future work\nIn its three years of life Automata Tutor has seen many updates and was adopted by more than 3,000 users. In this section we discuss the lessons we learned from our experiments and show how these lessons can be applied to further improving the tool. We first present what features were well-received by instructors and students before we discuss features that were not well-received and were therefore removed from the tool. Finally, we show how we plan to extend Automata Tutor to support new problems and attract more users in the future.\n5.1 What worked\nBased on our surveys and experiments we observed the following:\nInterfaces are important: Although this is known for many other domains, it is particularly important in the context of education. When students are already struggling to find the right way to approach a homework problem, a non-intuitive drawing interface can cause harm.\nSimple feedback is good enough: Based on the experiments discussed in Section 4 and in [6], it is clear that almost any type of feedback is effective and improves the students’ learning experience. The feedback messages have to be clear and concise: counterexamples, simple edits, etc.\nInstructors like independence: In the earlier versions we allowed instructors to contact us to set up courses, which was too complicated for them. 
Enabling course management in Automata Tutor 2.0 allowed us to gain a large user base. Since the introduction of course management, 10 new universities started using the tool in real classes.\nInstructors love automated grading: This is not surprising, but it is probably the feature that is most responsible for the success of the tool. Automata Tutor has been deployed in classes with more than 200 students, for which manually grading a homework would take more than fifty man-hours. In Luca Aceto’s words: “From my perspective (and from that of my TAs), automatic grading is a real bonus. I love to teach, but I really hate to grade a large number of student assignments.” To date, Automata Tutor has graded more than 50,000 student submissions.\nEnd-of-course surveys: These are really helpful in assessing the features that cause confusion and those that actually help the students. Quantitative questions which ask for overall user satisfaction are helpful in assessing the value of the tool. Open-ended questions which ask for suggestions and opinions can guide decisions on what features should be added or removed.\n5.2 What did not work\nBased on our surveys and experiments we observed that:\nVerbose feedback is confusing: During the user study presented in [6] we found that longer feedback messages were confusing and were actually causing frustration among the students. We therefore removed verbose feedback messages and replaced them with counterexamples.\nLong solution-oriented feedback: If the solution is far from a correct one, it is better to simply tell the student that the solution is incorrect rather than providing a hint on how to fix it. 
In particular we observed that long edit scripts are confusing.\nA single crash can cost many users: Especially in the domain of education, where students rarely do homework assignments, it is important to provide a robust tool on a robust server.\n5.3 The final goal\nIn the next years we want to add more features to Automata Tutor and be able to fully support students learning basic theory of computation concepts. Concretely we plan to add:\nRegular expressions: Although the tool supports them, the grading features are currently very basic. Defining new metrics and feedback that are tailored for regular expressions is part of our agenda.\nAutomata meta-constructions: An example problem would be: given two DFAs (Q1, q1_0, δ1, F1) and (Q2, q2_0, δ2, F2), define their intersection. Such a feature requires a language that is able to symbolically manipulate the objects in the two automata signatures. Integrating such a language with a theorem prover, like Coq, might allow us to effectively grade these complex assignments.\nProofs of non-regularity: These proofs are an important concept with which students often struggle. Some initial attempts at this problem can be found in [5], but it is still not clear how to build a user-interaction model that can produce feedback and grades, in particular for proofs based on the pumping lemma.\nProof of DFA correctness: Students need to know how to characterize the languages described by each state of an automaton in order to prove by induction that the DFA correctly accepts a target language. The logic presented in [2] could be used to allow the students to enter such descriptions. An informal attempt to solve this problem is presented in [16].\nContext-free languages: It is not clear how to adapt our current methods for grading and feedback to non-regular languages. 
Even though equivalence of these languages is undecidable, there may be algorithms that work for small solutions.\nTuring machines: Adding grading for Turing machines would be the next step after the grading of context-free languages. This poses the same questions as the previous case.\nMOOC deployment: We would like to deploy Automata Tutor in a real MOOC. This would allow us to leverage a large user base and learn more about the tool’s capabilities from the MOOC’s forum.\n6 Conclusions\nWe presented Automata Tutor, an online tool that is already being used by 13 universities around the world to teach basic concepts in theory of computation. Automata Tutor is available at http://www.automatatutor.com. It allows instructors to manage courses, and it can currently provide the students with grades and personalized feedback for DFA, NFA, NFA-to-DFA, and regular expression constructions. We discussed how the tool evolved and pointed out the driving features behind its success: simple and clear feedback messages, consistent grades, an intuitive drawing interface, and course management for instructors. Our ultimate goal is to extend Automata Tutor to support most undergraduate-level concepts in theory of computation such as proofs of non-regularity, automata meta-constructions, and context-free grammars.\nAcknowledgements\nWe would like to thank Luca Aceto for his invaluable support, feedback, and availability. Not only did Luca help us improve the tool, but he also advertised it to the community. This research is partially supported by NSF Expeditions in Computing award CCF 1138996.\nReferences\n[1] Luca Aceto. Ode to the Automata Tutor, 2015.\n[2] Rajeev Alur, Loris D’Antoni, Sumit Gulwani, Dileep Kini, and Mahesh Viswanathan. Automated grading of DFA constructions. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI ’13, pages 1976–1982. AAAI Press, 2013.\n[3] John R. Anderson and Brian J. 
Reiser. The LISP tutor: it approaches the effectiveness of a human tutor. BYTE, 10(4):159–175, April 1985.\n[4] Beatrix Braune, Stephan Diehl, Andreas Kerren, and Reinhard Wilhelm. Animation of the generation and computation of finite automata for learning software. In Oliver Boldt and Helmut Jürgensen, editors, Automata Implementation, number 2214 in Lecture Notes in Computer Science, pages 39–47. Springer Berlin Heidelberg, January 2001.\n[5] Pavol Černý, Sumit Gulwani, Thomas A. Henzinger, Arjun Radhakrishna, and Damien Zufferey. Specification, verification and synthesis for automata problems. 2012.\n[6] Loris D’Antoni, Dileep Kini, Rajeev Alur, Sumit Gulwani, Mahesh Viswanathan, and Björn Hartmann. How can automatic feedback help students construct automata? ACM Trans. Comput.-Hum. Interact., 22(2):9:1–9:24, March 2015.\n[7] Ethan Fast, Colleen Lee, Alex Aiken, Michael Bernstein, Daphne Koller, and Eric Smith. Crowd-scale interactive formal reasoning and analytics. In Proceedings of UIST’13, 2013.\n[8] Sumit Gulwani, Vijay Anand Korthikanti, and Ashish Tiwari. Synthesizing geometry constructions. SIGPLAN Not., 46(6):50–61, June 2011.\n[9] Shachar Itzhaky, Sumit Gulwani, Neil Immerman, and Mooly Sagiv. Solving geometry problems using a combination of symbolic and numerical reasoning. In Ken McMillan, Aart Middeldorp, and Andrei Voronkov, editors, Logic for Programming, Artificial Intelligence, and Reasoning, volume 8312 of Lecture Notes in Computer Science, pages 457–472. Springer Berlin Heidelberg, 2013.\n[10] Katy Jordan. MOOC completion rates: The data. 2014.\n[11] Antonija Mitrovic. Learning SQL with a computerized tutor. In Proceedings of SIGCSE’98, pages 307–311, New York, NY, USA, 1998. ACM.\n[12] David Patterson. Why are English majors studying computer science? 2013.\n[13] Susan H. Rodger and Thomas Finley. JFLAP - An Interactive Formal Languages and Automata Package. 
Jones and Bartlett, 2006.\n[14] V.S. Shekhar, A. Agarwalla, A. Agarwal, B. Nitish, and V. Kumar. Enhancing JFLAP with automata construction problems and automated feedback. In Contemporary Computing (IC3), 2014 Seventh International Conference on, pages 19–23, Aug 2014.\n[15] Rishabh Singh, Sumit Gulwani, and Armando Solar-Lezama. Automated feedback generation for introductory programming assignments. In Proceedings of PLDI’13, pages 15–26, New York, NY, USA, 2013. ACM.\n[16] Matthias F. Stallmann, Suzanne P. Balik, Robert D. Rodman, Sina Bahram, Michael C. Grace, and Susan D. High. ProofChecker: an accessible environment for automata theory correctness proofs. SIGCSE Bull., 39(3):48–52, June 2007.\n[17] New York Times. Instruction for masses knocks down campus walls. 2012.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "DvFUU76OuH8",
"year": null,
"venue": "Bull. EATCS 2015",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/365/347",
"forum_link": "https://openreview.net/forum?id=DvFUU76OuH8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Automata Tutor and what we learned from building an online teaching tool",
"authors": [
"Loris D'Antoni",
"Matthew Weaver",
"Alexander Weinert",
"Rajeev Alur"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "The Education Column\nby\nJuraj Hromkovič\nDepartment of Computer Science\nETH Zürich\nUniversitätstrasse 6, 8092 Zürich, Switzerland\[email protected]\nAutomata Tutor and what we learned from building an online teaching tool\nLoris D’Antoni†, Matthew Weaver†, Alexander Weinert‡, Rajeev Alur†\n†: University of Pennsylvania ‡: RWTH Aachen University\nAbstract\nAutomata Tutor is an online tool that helps students learn basic concepts in theory of computation, such as finite automata and regular expressions. The tool provides personalized feedback when students submit incorrect solutions, and also helps teachers manage large classes by automatically grading homework assignments. Automata Tutor has already been used by more than 2,000 students at 12 different universities in 4 different continents.\nIn this paper, we summarize our experience in building such a system. We describe the algorithms that are used to produce personalized feedback, and then evaluate the tool and its features through extensive user studies involving hundreds of participants.\n1 Introduction\nBoth online and offline, student enrollment in Computer Science courses is rapidly increasing. For example, enrollment in introductory CS courses has roughly tripled at UC Berkeley, Stanford and the University of Washington in the past decade [12]. In addition, Computer Science is the most frequently taken MOOC subject online [17, 10]. With roughly a thousand students in a lecture hall, or tens of thousands following a MOOC online, approaches like manual grading and individual tutoring do not scale, yet students still need appropriate guidance through specific feedback to progress and to overcome conceptual difficulties. 
Many tutoring systems have been proposed to assist students learning various aspects of computer science such as LISP programming [3] and SQL queries [11]; however, these tools are typically not able to grade the student solution or provide actionable feedback. Recently there has been work on applying computing power to teaching tasks including grading programming problems [15] as well as generating and grading geometric construction problems [8, 9]. The tool we describe in this paper falls in this line of work.\nAutomata Tutor is an online tool (http://www.automatatutor.com) that helps students learn basic concepts in theory of computation, such as finite automata and regular expressions. The tool provides personalized feedback when students submit incorrect solutions, and also helps teachers manage large classes by automatically grading homework assignments. The techniques we use to produce these grades and feedback messages are based on algorithmic techniques that are grounded in program synthesis, logic, and the theory of formal languages [2, 6]. Real courses at 14 different universities with more than 3,000 students in 4 different continents have already used Automata Tutor.\nIn this paper we summarize our experience in building such a system. We start by describing the features Automata Tutor offers to help both students and instructors (Sec. 2), and provide a brief history of how the tool became what it is now (Sec. 3). We then describe the experiments, surveys, and user studies that we ran to assess the overall user satisfaction and the quality of grades, feedback messages, and user interfaces (Sec. 4). Finally, we discuss what we learned from the data we collected, and from three years of experience building the tool (Sec. 5).\nOther automata theory education tools\nThere are several strategies for teaching automata and other formalisms in computer science education. 
Our system\nis the first online tool that is able to grade students’ solutions and can provide\nactionable feedback rather than just counterexamples.\nThe other notable tools for teaching DFA constructions are JFLAP and Gradi-\nance. JFLAP [13] allows students to author and simulate automata and is widely\nused in classrooms. Instructors can test student models against a set of input\nstrings and expected results. Recently JFLAP has been equipped with an interface\nthat allows students to test their solution DFA on problems for which they only\nhave an English description of the language [14]. To do this the student writes an\nimperative program that matches the given language description. If this program\nis not equivalent to the student’s DFA JFLAP automatically produces a coun-\nterexample. Gradiance1is a learning environment for database, programming and\nautomata concepts. It focuses on providing tests based on multiple choice ques-\ntions. These tools either do not support a way for drawing DFAs, or do not have\na high-level representation of a problem and can therefore not provide grades and\nfeedback about the conceptual problems with a student’s submission.\nAdditional tools are available for problems related to DFA constructions. In\nProofChecker [16], students prove the correctness of a DFA by labelling the lan-\n1http://www.newgradiance.com/\nFigure 1: A student’s view at an attempt at solving a DFA construction problem\nguage described by each state: given a DFA the student enters “state conditions”\n(functions or regular expressions) describing the language of each individual state.\nThe system then tests these conditions against a finite set of strings. In Dedu-\nceIt [7], students solve assignments on logical derivations. DeduceIt is able to\nthen grade such assignments and provide incremental feedback. Visualizations\nsuch as animations of algorithms or depictions of transformations between au-\ntomata and equivalent regular expressions exist [4]. 
These tools do not support course management or grading, and typically provide only counterexample-based feedback. Moreover, the support for most of these tools has been discontinued and they are no longer available to the public.
To the best of our knowledge, Automata Tutor is the first tool for teaching formal languages that includes course management, grading, and feedback generation. Moreover, the features of Automata Tutor have been thoroughly evaluated using multiple experiments and user studies [2, 6].

2 Automata Tutor in a nutshell
Automata Tutor is an online education tool created to help students learn basic concepts in theory of computation. In particular, it provides an interface for students to draw the DFA or NFA corresponding to a given description, and receive instantaneous feedback about their submission. The tool also supports regular expressions and NFA-to-DFA constructions. In this section we focus on how Automata Tutor is helpful to both teachers and their students.

2.1 A tool for students
Automata Tutor offers a structured, easy-to-use interface for students to practice drawing automata matching a particular description, as shown in Figure 1. While allowing students to quickly draw any automaton, the tool also enforces that all automata are legal, helping students better understand the concepts. For example, when a student adds a new node to a DFA, edges from the node for each symbol in the alphabet are added automatically and cannot be deleted.
Upon submitting an automaton, Automata Tutor provides students with instantaneous feedback to help them understand and fix their mistakes. Consider a student attempting the problem “Twice ab”: construct a DFA over the alphabet {a, b} that recognizes all strings in which ab appears exactly twice. If the student draws and submits an automaton that accepts a closely related language, such as the language of all strings in which “ab” appears at least twice, the tool provides a hint describing the difference, as is done in Figure 2.

Figure 2: A student's first attempt at solving a given problem. The user receives personalized feedback: “Incorrect: Your solution accepts the following set of strings: {s | ab appears in s at least twice}”. Grade: 6/10.

After receiving the feedback message, the student can submit a new attempt (Figure 3). In this case, the student's submission is structurally very similar to a correct automaton, and the tool suggests how the student might modify the automaton. If the student submits a correct automaton, as shown in Figure 4, the tool assigns full score. Lastly, the tool can also provide a counterexample string on which the student's automaton behaves incorrectly. [6] provides a thorough explanation of feedback for DFA construction problems.

Figure 3: The student's second attempt at solving the problem. They are given a hint at how to change their automaton into a correct one: “Incorrect: You need to change the acceptance condition of one state”. Grade: 9/10.

Figure 4: The student's final attempt at solving the problem. This attempt produces the correct automaton. Feedback: “Correct!”. Grade: 10/10.

For NFA construction problems, the tool provides counterexample feedback on incorrect automata.
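Counterexample feedback of this kind can be computed with a breadth-first search over the product of the student's and the solution automaton; the sketch below is only our illustration of the idea (the tool's actual grading and feedback algorithms are described in [2, 6], and all names here are ours):

```python
from collections import deque

def counterexample(dfa1, dfa2, start, alphabet="ab"):
    """BFS over the product automaton: returns a shortest string on which
    the two DFAs disagree, or None if they accept the same language.
    Each DFA is a pair (transitions, accepting) with total transitions."""
    (t1, acc1), (t2, acc2) = dfa1, dfa2
    seen = {(start, start)}
    queue = deque([(start, start, "")])
    while queue:
        q1, q2, word = queue.popleft()
        if (q1 in acc1) != (q2 in acc2):  # one accepts, the other rejects
            return word
        for c in alphabet:
            nxt = (t1[(q1, c)], t2[(q2, c)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt[0], nxt[1], word + c))
    return None

def ab_count_dfa(accept_counts, cap=3):
    """DFA over {a, b} tracking the number of 'ab' occurrences (capped at
    `cap`) plus whether the last symbol was 'a'; states are (count, last_a)."""
    trans = {}
    states = [(k, last) for k in range(cap + 1) for last in (False, True)]
    for (k, last_a) in states:
        trans[((k, last_a), "a")] = (k, True)
        trans[((k, last_a), "b")] = (min(k + 1, cap) if last_a else k, False)
    return trans, {s for s in states if s[0] in accept_counts}

exactly_twice = ab_count_dfa({2})      # the intended language
at_least_twice = ab_count_dfa({2, 3})  # the student's (wrong) language
print(counterexample(exactly_twice, at_least_twice, start=(0, False)))  # -> ababab
```

Here the shortest distinguishing string is “ababab” (three occurrences of ab), which the intended language rejects and the student's language accepts; presenting such a string, or a description of the language difference as in Figure 2, is one way to make the feedback actionable.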
In addition, it gives students feedback if they did not take advantage of nondeterminism or if they have not given the minimal automaton, an example being “There exists an equivalent NFA with 1 fewer states and 2 fewer transitions.”

2.2 A tool for teachers
Automata Tutor automates grading in a fair way that is comparable to grades assigned by a human grader, saving teachers time and energy. To calculate grades for a student's automaton, the tool estimates the percentage of strings on which the automaton fails and calculates the minimum number of syntactic edits the submission needs to accept the correct language. For DFA submissions, a distance is also calculated between the language of the student's automaton and the correct language. [2] describes the grading algorithm for DFAs in more detail. For NFA submissions accepting the correct language, the tool also takes into account how much larger the student's submission is than the solution.
The tool is also flexible, allowing teachers to specify their own assignments and assign them to their students. Instructors can create a course, and have their students register for it by distributing its course ID and password. They can then create problem sets for the course with their own descriptions and solutions, and can choose to specify a limit on the number of attempts each student has for each problem. Ultimately, teachers can download the grade information for their students' submissions. The course management interface is shown in Figure 5.

Figure 5: An instructor's view of one of their courses

3 The evolution of Automata Tutor
In its three years of existence, Automata Tutor has gone from a prototype, which was never used in real classrooms, to a tool that is widely accepted and used in multiple universities all over the world. We give an overview of the development history of the tool in this section and show the driving forces and ideas behind its development.

3.1 The precursor
A basic version of Automata Tutor was built to test the algorithms presented in [5]. This paper presented a logic for describing languages that could also be nonregular, and the algorithms presented in [5] could prove or disprove regularity for many such languages. In this version of Automata Tutor, an administrator could use such a logic to pose automata construction problems. Students could then solve the problems and receive a counterexample when the solution they submitted was incorrect.

Limitations of the precursor. Since the goal of the tool at this point was that of evaluating algorithms, the interface was still immature and drawing automata was cumbersome. Moreover, the feedback was limited to counterexamples. Due to these factors, this version of the tool was not deployed in real classes.

3.2 Automata Tutor 1.0
The precursor of Automata Tutor just provided students with a counterexample if their attempt at solving a problem was incorrect. Our goal in developing Automata Tutor 1.0 was to also provide students with a grade for their attempt, as well as with feedback about how to improve and correct their automaton. In order to do so, the tool compared the student's attempt with the solution using three different metrics. These metrics are outlined in [2] and further discussed in Section 4.1. In addition to a numerical grade, the tool provided actionable feedback expressed in plain English. We briefly described the techniques used to produce the feedback in Section 2, and a study of the effectiveness of feedback messages is presented in [6].
Automata Tutor 1.0 only supported DFA construction problems. In this type of problem, the students are given a problem statement of the form “Construct a DFA that accepts all the strings in the language ...
” The students could construct a DFA graphically in their browser and submit their attempt to the system. The system would then provide a grade and a feedback message.
In order to split the load as well as separate the concerns of displaying problems to the user and the actual computation, we split Automata Tutor 1.0 into two components: a web-facing frontend built in Scala and Lift, and a backend built using C# and ASP.NET. Another benefit of this architecture was that it would be possible to support other kinds of problems using the same frontend. Even though this did not happen in Automata Tutor 1.0, this architecture would prove very useful when building the successor, Automata Tutor 2.0. A typical communication between the frontend and the backend is shown in Fig. 6.

Figure 6: Typical workflow of a student solving a problem

Limitations of Automata Tutor 1.0. Although Automata Tutor 1.0 served its purpose of evaluating the techniques from [2] very well, there were still some limitations, some of which were technical, while others were conceptual. The technical problems included the fact that the tool relied on features that were only supported by the Google Chrome browser. Additionally, although the interface for students to attempt the posed problems was improved compared to the one present in the previous version of the tool, it was still inconvenient to use.
Chief among the conceptual shortcomings was the fact that DFA construction problems were the only kind of supported problems. Whereas the separation of frontend and backend already laid the foundation for the extension to other kinds of problems, it would have been non-trivial to extend the frontend to handle these new problem types. Also, using the tool required strong collaboration between instructors and administrators, as there was no way to create courses and assign problems only to certain groups of students. These limitations were addressed during the development of Automata Tutor 2.0.

3.3 Automata Tutor 2.0
After we deployed and successfully used Automata Tutor 1.0 in 3 courses at 3 universities with about 400 students, the tool's limitations became apparent. We decided to reimplement the frontend with a heavier focus on scalability in terms of involvement of the administrators, and flexibility in terms of creating different types of problems.
We addressed the former problem by implementing course management and assigning users one of two roles: student or instructor. The frontend now has a concept of courses, problems and problem sets. While students can only enroll in courses and solve the problems posed in these courses, instructors are allowed to create courses and problems, and collect grades at the end of a course. A screenshot of the interface for course management can be seen in Fig. 5.
Automata Tutor 2.0 also allows for easy extensibility to handle new problem types. A developer simply has to provide views that allow instructors to create, edit and delete a problem, as well as a view that allows a student to solve a problem. Furthermore, the developer has to implement a web service offering a grading engine.
We implemented three new problem types in addition to the already existing DFA construction problems using this abstraction. Version 2.0 of Automata Tutor thus supports the construction of DFAs, NFAs, and regular expressions from a description of a regular language in plain English, as well as the construction of a DFA that is equivalent to a given NFA. Since all these formalisms describe the set of regular languages, we could reuse parts of the existing grading and feedback engines.
These changes were widely accepted and are now in constant use. Instructors deeply appreciated the possibility to manage their courses themselves instead of having to work with the administrators.
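The NFA-to-DFA problem type rests on the classic subset construction; the following sketch (our own illustration, not the tool's code) determinizes an NFA whose transitions map a state and a symbol to a set of states:

```python
from collections import deque

def determinize(nfa_trans, start, accepting, alphabet):
    """Subset construction, exploring only reachable subsets of NFA states.
    nfa_trans maps (state, symbol) -> set of states; missing keys mean the
    empty set. Returns the DFA transitions over frozensets of NFA states,
    plus the accepting DFA states (subsets containing an accepting state)."""
    start_set = frozenset([start])
    dfa_trans, dfa_accepting = {}, set()
    seen, queue = {start_set}, deque([start_set])
    while queue:
        subset = queue.popleft()
        if subset & accepting:
            dfa_accepting.add(subset)
        for c in alphabet:
            target = frozenset(t for q in subset
                                 for t in nfa_trans.get((q, c), ()))
            dfa_trans[(subset, c)] = target
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return dfa_trans, dfa_accepting

# NFA for strings over {a, b} ending in "ab": state 0 loops, guessing
# where the final "ab" starts; state 2 is accepting.
nfa = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
trans, acc = determinize(nfa, start=0, accepting={2}, alphabet="ab")
print(len({s for (s, _) in trans}), acc)
```

On this example the reachable subsets are {0}, {0, 1} and {0, 2}, so the resulting DFA has three states; grading a student's NFA-to-DFA submission can then amount to comparing it against such a determinized automaton.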
We rolled the new frontend out at 14 universities on four continents. It is used by 3,000 students and has already graded more than 40,000 solutions.

Limitations of Automata Tutor 2.0. Although the tool is being appreciated by the community, there is still room for many improvements. In particular, in its current version, the tool only supports DFA, NFA, and regular expression problems. We discuss some future directions in Sec. 5.

4 Experience report
Throughout the process of developing Automata Tutor, we have conducted a number of studies to test the tool's success at helping students learn to construct automata. We summarize what we learned from them in this section.

4.1 Results about automatic grading
Through a number of user studies involving over 500 students at three universities (University of Illinois Urbana-Champaign, University of Pennsylvania, and Reykjavik University), we used students' responses on a 5-point Likert scale to measure
• how fair students feel the grades assigned by the tool are;
• how meaningful the grades assigned by the tool are.
The general consensus is that students find the partial grades assigned by the tool both fair and meaningful. For DFA submissions, we additionally compared grades assigned by the tool to those given by human graders, and found they are comparable, although the tool is more consistent at assigning the same grade to the same solution. [2] contains a more detailed discussion on automatic grading in Automata Tutor.

4.2 Results about feedback
The user studies all agreed that, when it comes to feedback, simpler is better. We measured this by splitting students into three groups, each receiving a different type of feedback: binary feedback (yes/no), counterexamples, and plain English hints. Afterwards, we had the students fill out a survey with a 5-point Likert scale asking
• how useful the feedback is;
• how helpful the feedback is for understanding mistakes;
• how helpful the feedback is for getting the correct solution;
• how confusing the feedback is.
While too much feedback may be detrimental to students, having no feedback is worse; students who were only told whether their submission was correct or not were slower at solving problems and did fewer practice problems than those receiving feedback. [6] provides a comprehensive report on this user study.
For feedback pertaining to the size of a student's NFA, there was no improvement in performance from stating how many fewer states and transitions the solution had, compared to just showing “A smaller NFA exists.” Interestingly, in both cases, about half of the students' subsequent submissions no longer accepted the correct language.

4.3 Results about usability
We compared the user surveys for the old (1.0) and new (2.0) drawing interfaces and measured
• how easy students thought it was to draw using the interface;
• how predictable the behavior of the interface was.
We used a 5-point Likert scale for each metric, concluding that the new interface is significantly easier to use, and is significantly better at behaving as users expected. For both questions, the median student response for the new interface was a 5: the highest score. This is particularly meaningful, as a tutorial accompanied the old interface while no instructions are provided for the new interface.

4.4 What we learned from the instructors
We summarize the key (subjective) observations by the three instructors who have taught theory of computation courses multiple times before and have used Automata Tutor during our experiments. First, the requirement that the homework had to be submitted using the tutoring tool ensured students' participation.
Once students started interacting with the software, they were very much engaged with the course material. Second, the average grade on the homework assignments offered in the course increased when using Automata Tutor. Third, the teaching assistants were very happy that the tool did the grading for them. Lastly, while not profound, we learned that instructors enjoy the tool and want more of this type of work: “This is how the construction of finite automata that recognize regular languages should be taught in a modern way! I wish I had similar tools for all the topics I need to cover” [1].

5 Discussion and future work
In its three years of life Automata Tutor has seen many updates and was adopted by more than 3,000 users. In this section we discuss the lessons we learned from our experiments and show how these lessons can be applied to further improving the tool. We first present the features that were well received by instructors and students, before discussing features that were not well received and were therefore removed from the tool. Finally, we show how we plan to extend Automata Tutor to support new problems and attract more users in the future.

5.1 What worked
Based on our surveys and experiments we observed the following:

Interfaces are important: Although this is known for many other domains, it is particularly important in the context of education. When students are already struggling to find the right way to approach a homework problem, a non-intuitive drawing interface can cause harm.

Simple feedback is good enough: Based on the experiments discussed in Section 4 and in [6], it is clear that almost any type of feedback is effective and improves the students' learning experience. The feedback messages have to be clear and concise: counterexamples, simple edits, etc.

Instructors like independence: In the earlier versions, instructors had to contact us to set up courses, which was too complicated for them. Enabling course management in Automata Tutor 2.0 allowed us to gain a large user base. Since the introduction of course management, 10 new universities started using the tool in real classes.

Instructors love automated grading: This is not surprising, but it is probably the feature that is most responsible for the success of the tool. Automata Tutor has been deployed in classes with more than 200 students, for which manually grading a homework assignment would take more than fifty man-hours. In Luca Aceto's words: “From my perspective (and from that of my TAs), automatic grading is a real bonus. I love to teach, but I really hate to grade a large number of student assignments.” To date, Automata Tutor has graded more than 50,000 student submissions.

End-of-course surveys: These are really helpful in assessing the features that cause confusion and those that actually help the students. Quantitative questions which ask for overall user satisfaction are helpful in assessing the value of the tool. Open-ended questions which ask for suggestions and opinions can guide which features should be added or removed.

5.2 What did not work
Based on our surveys and experiments we observed that:

Verbose feedback is confusing: During the user study presented in [6] we found that longer feedback messages were confusing and were actually causing frustration among the students. We therefore removed verbose feedback messages and replaced them with counterexamples.

Long solution-oriented feedback: If the submission is far from a correct one, it is better to simply tell the student that the solution is incorrect rather than providing a hint on how to fix it.
In particular, we observed that long edit scripts are confusing.

A single crash can cost many users: Especially in the domain of education, where students rarely do homework assignments, it is important to provide a robust tool on a robust server.

5.3 The final goal
In the next years we want to add more features to Automata Tutor and be able to fully support students learning basic theory of computation concepts. Concretely, we plan to add:

Regular expressions: Although the tool supports them, the grading features are currently very basic. Defining new metrics and feedback that are tailored to regular expressions is part of our agenda.

Automata meta-constructions: An example problem would be: given two DFAs (Q1, q0^1, δ1, F1) and (Q2, q0^2, δ2, F2), define their intersection. Such a feature requires a language that is able to symbolically manipulate the objects in the two automata signatures. Integrating such a language with a theorem prover, like Coq, might allow us to effectively grade these complex assignments.

Proofs of non-regularity: These are an important concept with which students often struggle. Some initial attempts at this problem can be found in [5], but it is still not clear how to build a user-interaction model that can produce feedback and grades, in particular for proofs based on the pumping lemma.

Proofs of DFA correctness: Students need to know how to characterize the languages described by each state of an automaton in order to prove by induction that the DFA correctly accepts a target language. The logic presented in [2] could be used to allow the students to enter such descriptions. An informal attempt to solve this problem is presented in [16].

Context-free languages: It is not clear how to adapt our current methods for grading and feedback to non-regular languages. Even though equivalence of these languages is undecidable, there may be algorithms that work for small solutions.

Turing machines: Adding grading for Turing machines would be the next step after the grading of context-free languages. This poses the same questions as the previous case.

MOOC deployment: We would like to deploy Automata Tutor in a real MOOC. This would allow us to leverage a large user base and learn more about the tool's capabilities from the MOOC's forum.

6 Conclusions
We presented Automata Tutor, an online tool that is already being used by 13 universities around the world to teach basic concepts in theory of computation. Automata Tutor is available at http://www.automatatutor.com. It allows instructors to manage courses, and it can currently provide students with grades and personalized feedback for DFA, NFA, NFA-to-DFA, and regular expression constructions. We discussed how the tool evolved and pointed out the driving features behind its success: simple and clear feedback messages, consistent grades, an intuitive drawing interface, and course management for instructors. Our ultimate goal is to extend Automata Tutor to support most undergraduate-level concepts in theory of computation, such as proofs of non-regularity, automata meta-constructions, and context-free grammars.

Acknowledgements. We would like to thank Luca Aceto for his invaluable support, feedback, and availability. Not only did Luca help us improve the tool, but he also advertised it to the community. This research is partially supported by NSF Expeditions in Computing award CCF 1138996.

References
[1] Luca Aceto. Ode to the Automata Tutor, 2015.
[2] Rajeev Alur, Loris D'Antoni, Sumit Gulwani, Dileep Kini, and Mahesh Viswanathan. Automated grading of DFA constructions. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13, pages 1976–1982. AAAI Press, 2013.
[3] John R. Anderson and Brian J. 
Reiser. The LISP tutor: it approaches the effectiveness of a human tutor. BYTE, 10(4):159–175, April 1985.
[4] Beatrix Braune, Stephan Diehl, Andreas Kerren, and Reinhard Wilhelm. Animation of the generation and computation of finite automata for learning software. In Oliver Boldt and Helmut Jürgensen, editors, Automata Implementation, number 2214 in Lecture Notes in Computer Science, pages 39–47. Springer Berlin Heidelberg, January 2001.
[5] Pavol Černý, Sumit Gulwani, Thomas A. Henzinger, Arjun Radhakrishna, and Damien Zufferey. Specification, verification and synthesis for automata problems. 2012.
[6] Loris D'Antoni, Dileep Kini, Rajeev Alur, Sumit Gulwani, Mahesh Viswanathan, and Björn Hartmann. How can automatic feedback help students construct automata? ACM Trans. Comput.-Hum. Interact., 22(2):9:1–9:24, March 2015.
[7] Ethan Fast, Colleen Lee, Alex Aiken, Michael Bernstein, Daphne Koller, and Eric Smith. Crowd-scale interactive formal reasoning and analytics. In Proceedings of UIST'13, 2013.
[8] Sumit Gulwani, Vijay Anand Korthikanti, and Ashish Tiwari. Synthesizing geometry constructions. SIGPLAN Not., 46(6):50–61, June 2011.
[9] Shachar Itzhaky, Sumit Gulwani, Neil Immerman, and Mooly Sagiv. Solving geometry problems using a combination of symbolic and numerical reasoning. In Ken McMillan, Aart Middeldorp, and Andrei Voronkov, editors, Logic for Programming, Artificial Intelligence, and Reasoning, volume 8312 of Lecture Notes in Computer Science, pages 457–472. Springer Berlin Heidelberg, 2013.
[10] Katy Jordan. MOOC completion rates: The data. 2014.
[11] Antonija Mitrovic. Learning SQL with a computerized tutor. In Proceedings of SIGCSE'98, pages 307–311, New York, NY, USA, 1998. ACM.
[12] David Patterson. Why are English majors studying computer science? 2013.
[13] Susan H. Rodger and Thomas Finley. JFLAP - An Interactive Formal Languages and Automata Package. Jones and Bartlett, 2006.
[14] V. S. Shekhar, A. Agarwalla, A. Agarwal, B. Nitish, and V. Kumar. Enhancing JFLAP with automata construction problems and automated feedback. In Contemporary Computing (IC3), 2014 Seventh International Conference on, pages 19–23, August 2014.
[15] Rishabh Singh, Sumit Gulwani, and Armando Solar-Lezama. Automated feedback generation for introductory programming assignments. In Proceedings of PLDI'13, pages 15–26, New York, NY, USA, 2013. ACM.
[16] Matthias F. Stallmann, Suzanne P. Balik, Robert D. Rodman, Sina Bahram, Michael C. Grace, and Susan D. High. ProofChecker: an accessible environment for automata theory correctness proofs. SIGCSE Bull., 39(3):48–52, June 2007.
[17] New York Times. Instruction for masses knocks down campus walls. 2012.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "FMHCiPMyEw",
"year": null,
"venue": "Bull. EATCS 1985",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=FMHCiPMyEw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A note the expressive power of Prolog",
"authors": [
"Christos H. Papadimitriou"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "083G7fatae",
"year": null,
"venue": "Bull. EATCS 2009",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=083G7fatae",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Theory of Transactional Memory",
"authors": [
"Rachid Guerraoui",
"Michal Kapalka"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HOfq0CxUqhk",
"year": null,
"venue": "Bull. EATCS 2019",
"pdf_link": "http://bulletin.eatcs.org/index.php/beatcs/article/download/593/602",
"forum_link": "https://openreview.net/forum?id=HOfq0CxUqhk",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Exploring the Borderlands of the Gathering Problem",
"authors": [
"El Mahdi El Mhamdi",
"Rachid Guerraoui",
"Alexandre Maurer",
"Vladislav Tempez"
],
"abstract": "Problems of pattern formation have been extensively studied in distributed computing. One of this problems is the gathering problem: agents must gather at a same position in a distributed manner. When gathering is not possible, a close problem is the convergence problem. In this article, we investigate the two following questions: (1) Can pro- cesses gather when each process cannot see more that one other process at the same time? (2) Can a gathering behavior be learned by processes? Regarding the first point, we introduce a new model with an extremely restricted visibility: each process can only see one other process (its clos- est neighbor). Our goal is to see if (and to what extent) the gathering and convergence problems can be solved in this setting. We first show that, sur- prisingly, the problem can be solved for a small number of processes (at most 5), but not beyond. This is due to indeterminacy in the case where there are several “closest neighbors” for a same process. By removing this indeterminacy with an additional hypothesis (choosing the closest neighbor according to an order on the positions of processes), we then show that the problem can be solved for any number of processes. We also show that up to one crash failure can be tolerated for the convergence problem. Regarding the second point, we present the first experimental evidence that a gathering behavior can be learned without explicit communication in a partially observable environment. The learned behavior has the same properties as a self-stabilizing distributed algorithm, as processes can gather from any initial state (and thus tolerate any transient failure). Besides, we show that it is possible to scale and then tolerate the brutal loss of up to 90% of agents without significant impact on the behavior.",
"keywords": [],
"raw_extracted_content": "TheDistributed Computing Column\nby\nStefan Schmid\nUniversity of Vienna\nWähringer Strasse 29, AT - 1090 Vienna, Austria\[email protected]\nWith this issue of the distributed computing column, we would like to invite\nyou to two tours, one to exciting unexplored borderlands of the gathering problem,\nand one to the wonderful land of consensus numbers:\n•El Mahdi El Mhamdi, Rachid Guerraoui, Alexandre Maurer, and Vladislav\nTempez investigate the fundamental question whether gathering is still possi-\nble in models where visibility is severely restricted. The authors also initiate\nto study the question whether gathering behavior can be learned without\nexplicit communication in a partially observable environment.\n•Michel Raynal welcomes you to a guided tour on consensus numbers. In addi-\ntion to more ancient results, he also surveys recent contributions related to the\nexistence of an infinity of objects (of increasing synchronization /agreement\npower) at each level of the consensus hierarchy.\nI hope you enjoy your journeys and on this occasion, I would like to thank the\nauthors very much for their contributions to the EATCS Bulletin.\nExploring the Borderlands of\ntheGathering Problem\nEl Mahdi El Mhamdi\nEPFL\[email protected]\nRachid Guerraoui\nEPFL\[email protected]\nAlexandre Maurer\nEPFL\[email protected]\nVladislav Tempez\nLORIA\[email protected]\nAbstract\nProblems of pattern formation have been extensively studied in distributed\ncomputing. One of this problems is the gathering problem: agents must\ngather at a same position in a distributed manner. When gathering is not\npossible, a close problem is the convergence problem.\nIn this article, we investigate the two following questions: (1) Can pro-\ncesses gather when each process cannot see more that one other process at\nthe same time? 
(2) Can a gathering behavior be learned by processes?\nRegarding the first point, we introduce a new model with an extremely restricted visibility: each process can only see one other process (its closest neighbor). Our goal is to see if (and to what extent) the gathering and convergence problems can be solved in this setting. We first show that, surprisingly, the problem can be solved for a small number of processes (at most 5), but not beyond. This is due to indeterminacy in the case where there are several “closest neighbors” for the same process. By removing this indeterminacy with an additional hypothesis (choosing the closest neighbor according to an order on the positions of processes), we then show that the problem can be solved for any number of processes. We also show that up to one crash failure can be tolerated for the convergence problem.\nRegarding the second point, we present the first experimental evidence that a gathering behavior can be learned without explicit communication in a partially observable environment. The learned behavior has the same properties as a self-stabilizing distributed algorithm, as processes can gather from any initial state (and thus tolerate any transient failure). Moreover, we show that the behavior scales, tolerating the sudden loss of up to 90% of the agents without significant impact.\n1 Introduction\nAn interesting natural phenomenon is the ability of swarms of simple individuals to form complex and very regular patterns: swarms of fish [78], birds [32], ants [37]... They do so in a totally distributed manner, without any centralized or irreplaceable leader. Such behaviors are a great source of inspiration for distributed computing.\nProblems of pattern formation have been extensively studied by the distributed computing community [72, 74, 11, 2]. 
In order to prove mathematical results, the model is of course simplified: the individuals (called agents, robots or processes) are usually geometric points in a Euclidean space, operating in “look – compute – move” cycles. A famous example is the circle formation algorithm by Suzuki and Yamashita [74]. Another family of papers considers robots moving on a graph (e.g., [34, 39, 54]).\nIn particular, a pattern formation problem which has been extensively studied is the gathering problem [4, 24, 26, 40, 57]: processes must gather at the same point in finite time. When gathering is impossible, a closely related problem is the convergence problem [28, 7]: processes must get ever closer to the same point.\nThis apparently simple problem can become surprisingly complex, depending on the model and hypotheses. We give a few examples below (the list, of course, is not exhaustive).\n•Asynchronous system. A first idea is to relax the synchrony hypothesis. In [66, 23, 25, 29] for instance, the cycles are executed asynchronously – e.g., the “look” operation of a robot can happen during the “move” operation of another robot. [53] studies the feasibility of asynchronous gathering on a ring topology, depending on the level of symmetry of the initial configuration. [41] showed that gathering is possible in the asynchronous model when robots share a common orientation.\n•Fault tolerance. Another idea is to make the system fault tolerant. The faults can be transient [6, 35] or permanent [5] – e.g., when a robot stops moving forever. [5] and [33] show several impossibility results in the case of Byzantine failures – i.e., a robot exhibiting an arbitrary malicious behavior. [15] proves the necessary and sufficient conditions for convergence in a 1D space in the presence of Byzantine robots.\n•Limited visibility. One can assume that robots only have a limited visibility range [41, 8]. 
The usual hypothesis is that the robots can only see other robots within a bounded radius. Another possible limit to visibility is opaque robots [13, 3]: if a robot C is between two robots A and B, A cannot see B. [14] considers a setting with both constraints simultaneously (opacity and bounded visibility radius).\n•Limited multiplicity detection. When several robots are allowed to occupy the same position, the robots may (or may not) know the multiplicity of a given position, that is: the number of robots at this position. When total multiplicity detection is available, a gathering strategy is, for each robot, to move to the position with the highest multiplicity. A weaker multiplicity detection hypothesis is that robots can only know if there are “one” or “more than one” robots at a given position (global multiplicity detection) [52, 53]. In [49, 50], this capacity is restricted to the current position (local multiplicity detection). [31] studies gathering on a grid without multiplicity detection.\n•Fat robots. It is often assumed that robots are geometrical points, without a volume. Some papers consider the model of “fat robots”, where robots actually do have a volume. [3] considers the problem of gathering 4 robots modeled as discs. [30] generalizes this result to n robots. [14] considers the problem of gathering fat robots with limited visibility.\nIn this article, we explore two new settings for the gathering problem. Basically, we ask ourselves the two following questions:\n1. Can processes gather when each process cannot see more than one other process at the same time? (In the following, we call this setting “extremely restricted visibility”.)\n2. Can a gathering behavior be learned by processes?\nGathering with extremely restricted visibility. Consider the following assumption: each process can only see its closest neighbor (i.e., the closest other process), and ignores the total number of processes. 
To our knowledge, no paper has yet considered such a minimalist setting. We study to what extent the gathering and convergence problems can be solved in this setting. We assume a synchronous scheduler and memoryless processes that cannot communicate with messages.\nThere is an indeterminacy in the case where there are several “closest neighbors” (i.e., two or more processes at the same distance from a given process). We first assume that, in this situation, the closest neighbor is arbitrarily chosen by an external adversary (worst-case scenario).\nIn this scenario, we show that, surprisingly, the problems can only be solved for a small number of processes. More precisely, if n is the number of processes and d is the number of dimensions of the Euclidean space, then the gathering (resp. convergence) problem can be solved if and only if d = 1 or n ≤ 2 (resp. d = 1 or n ≤ 5). Indeed, for larger values of n, there exist initial configurations from which gathering or convergence is impossible, due to symmetry. The proof is constructive: for the small values of n, we provide an algorithm solving the problems. The proof is non-trivial for n = 4 and n = 5, as several families of cases need to be considered.\nTherefore, to solve the problems for larger values of n, one additional hypothesis must necessarily be added. We remove the aforementioned indeterminacy by making the choice of the closest neighbor (when there is more than one) deterministic instead of arbitrary (according to an order on the positions of processes). Then, we show that the gathering problem is always solved in at most n − 1 steps by a simple “Move to the Middle” (MM) algorithm.\nWe finally consider the case of crash failures, where at most f processes lose the ability to move. We show that the gathering (resp. convergence) problem can only be solved when f = 0 (resp. f ≤ 1). When the convergence problem can be solved, the MM algorithm solves it.\nThe technical details are presented in Section 2. 
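To make the MM algorithm and the deterministic tie-breaking concrete, here is a small simulation sketch for the case d = 1 (illustrative code of ours, not from the paper; the function names are hypothetical):

```python
# Sketch of the "Move to the Middle" (MM) algorithm for d = 1, with the
# deterministic tie-breaking of Section 2.4 (among several closest
# neighbors, pick the one at the largest position).

def mm_step(positions):
    """One synchronous MM step: every process moves to the middle of the
    segment between itself and its closest neighbor."""
    new_positions = set()
    for p in positions:
        others = positions - {p}
        d_min = min(abs(q - p) for q in others)
        closest = max(q for q in others if abs(q - p) == d_min)  # tie-break
        new_positions.add((p + closest) / 2)
    return new_positions

def gather(positions):
    """Iterate MM steps until all processes occupy one position;
    return the number of steps used."""
    steps = 0
    while len(positions) > 1:
        positions = mm_step(positions)
        steps += 1
    return steps
```

On the evenly spaced worst-case configuration used in the lower bound (n processes at mutual distance D on a line), this simulation gathers in exactly n − 1 steps, matching Theorem 3.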
Beyond this first work, we believe that this minimalist model can be the ground for many other interesting results.\nLearning to gather. In previous works, the gathering behavior was obtained by giving an explicit algorithm to each (correct) agent. An alternative approach is machine learning [71], that is: automatically extracting a model from a dataset, or from its interactions with the environment. More particularly, reinforcement learning [77, 73] is the specific machine learning paradigm that makes it possible to obtain a desired behavior with the simplest feedback from the environment. It is particularly useful in network-related problems [67, 12, 47]. In short, reinforcement learning consists, for the program, in receiving rewards and penalties from the environment, and learning which behavior leads to rewards and which behavior leads to penalties. To our knowledge (see the state of the art in Section 3), the question of whether the agents can learn to gather with only simple rewards and penalties from the environment (and with no other form of communication than “seeing each other”) remains open.\nWe present the first experimental evidence that the answer to this question is affirmative: agents can indeed learn a gathering behavior. We show that agents can learn to gather on a one-dimensional ring. The agents are rewarded for being in a group and penalized for being isolated.\nA technical difficulty lies in the “combinatorial explosion” of the number of states. To overcome this difficulty, the agents approximate the environment by grouping close positions into clusters: each agent only perceives an approximation of the distribution of other agents in each cluster. 
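The paper does not spell out the exact state encoding, but the clustering idea can be sketched as follows (hypothetical code; the ring size, cluster count and quantization thresholds are our own choices):

```python
# Constant-size observation on a one-dimensional ring: the other agents are
# bucketed into a fixed number of clusters relative to the observing agent,
# and each bucket count is coarsely quantized (0 = empty, 1 = few, 2 = many).
# The observation size is independent of the ring size and the number of agents.

RING_SIZE = 60
NUM_CLUSTERS = 6  # each cluster covers RING_SIZE // NUM_CLUSTERS cells

def observe(me, others):
    """Return a tuple of NUM_CLUSTERS quantized counts of the other agents."""
    width = RING_SIZE // NUM_CLUSTERS
    counts = [0] * NUM_CLUSTERS
    for pos in others:
        rel = (pos - me) % RING_SIZE  # relative position on the ring
        counts[rel // width] += 1
    return tuple(0 if c == 0 else (1 if c <= 2 else 2) for c in counts)
```

An agent at position 0 seeing agents at 1, 2, 3, 30 and 31 would thus perceive (2, 0, 0, 1, 0, 0): “many” agents in the nearest cluster, “few” on the opposite side of the ring.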
This makes it possible to keep the learning space constant (i.e., independent of the number of agents and the size of the ring). We show that, surprisingly, the agents manage to gather almost perfectly despite this very rough approximation.\nWe then consider the problem of increasing the number of agents. A natural belief would be that the agents have to “re-learn” to gather in this case. Interestingly, we show that the learned behavior can directly apply to a much larger number of agents – namely, if agents have learned to gather in groups of 10, we show that they immediately know how to gather in groups of up to 100. Aside from saving learning time, the interest of this approach is that such a group of 100 agents is inherently and deeply robust (fault-tolerant), because it can tolerate the loss of up to 90 agents.1 We also compare the learned behavior with a hardcoded algorithm that moves towards the barycenter of the agents. We thus show that, even with a relatively simple learning scheme, we can reach the same performance as this hardcoded behavior.\nThe technical details are presented in Section 3.\n2 Gathering with extremely restricted visibility\nIn Section 2.1, we define the model and the problems. In Section 2.2, we characterize the class of algorithms allowed by our model, and define a simple algorithm to prove the positive results. In Section 2.3, we prove the aforementioned lower bounds. In Section 2.4, we remove indeterminacy and show that the gathering problem can be solved for any n. In Section 2.5, we consider the case of crash failures.\n1We do not claim that training a group of 100 agents makes it robust, but that we can easily build a robust group of 100 agents after training a group of 10 agents (which, by the way, is less costly).\n2.1 Model and problems\nModel. We consider a Euclidean space S of dimension d (d ≥ 1). The position of each point of S is described by d coordinates (x1, x2, ..., xd) in a Cartesian system. 
For two points A and B of coordinates (a1, ..., ad) and (b1, ..., bd), let d(A, B) = √(Σ_{i=1}^{d} (ai − bi)²) be the distance between A and B.\nLet P be a set of n processes. ∀p ∈ P, let Mp be the position of p in S. Let Ω be the set of positions occupied by the processes of P. As several processes can share the same position, 1 ≤ |Ω| ≤ |P|. The time is divided in discrete steps t ∈ {0, 1, 2, 3, ...}.\nIf |Ω| = 1, the processes are gathered (they all have the same position). If |Ω| ≥ 2, ∀p ∈ P, let D(p) = min_{K ∈ Ω − {Mp}} d(Mp, K), and let N(p) be the set of processes q such that d(Mp, Mq) = D(p). At a given time t, the closest neighbor of a process p is a process of N(p) arbitrarily chosen by an external adversary. We denote it by C(p).\nWe consider a synchronous execution model. At a given time t, a process p can only see Mp and MC(p) (without global orientation), and use these two points to compute a new position K. Then, the position of p at time t + 1 is K.\nThe processes are oblivious (they have no memory), mute (they cannot communicate) and anonymous (they cannot distinguish each other with identifiers). Note that this model does not assume multiplicity detection (the ability to count the processes at a same position). The processes do not know n. At t = 0, the n processes can have arbitrary positions.\nProblems. For a given point G ∈ S and a given constant ε, we say that the processes are (G, ε)-gathered if, ∀M ∈ Ω, d(G, M) ≤ ε.\nAn algorithm solves the convergence problem if, for any initial configuration, there exists a point G ∈ S such that, ∀ε > 0, there exists a time T such that the processes are (G, ε)-gathered ∀t ≥ T.\nAn algorithm solves the gathering problem if, for any initial configuration, there exists a point G and a time T such that the processes are (G, 0)-gathered ∀t ≥ T.\n2.2 Algorithm\nIn this section, we describe all possible algorithms that our model allows. 
Doing so will enable us to show lower bounds further on – that is, to show that no algorithm can solve some problems in our model. This should not be confused with the MM algorithm (a particular case, defined below), which is only used to prove positive results.\nHere, an algorithm consists in determining, for any process p, the position of p at the next step, as a function of Mp and MC(p).\nFirst, let us notice that, if the processes are gathered (|Ω| = 1), the processes have no interest in moving anymore. This corresponds to the case where each process cannot see any “closest neighbor”. Thus, we assume that any algorithm is such that, when a process p cannot see any closest neighbor, p does not move.\nNow, consider the case where the processes are not gathered (|Ω| ≥ 2). Let p be the current process, let D = D(p), and let x be the unit vector (||x|| = 1) directed from Mp to MC(p). There are 2 possible cases.\nCase 1: d = 1. The next position of p is Mp + fx(D)·x, where fx is an arbitrary function.\nCase 2: d ≥ 2. Let ∆ be the axis defined by Mp and MC(p). If d ≥ 2, as there is no global orientation of processes (Mp can only position itself relative to MC(p)), the next position of p can only be determined by (1) its position on axis ∆ and (2) its distance to ∆. The difference here is that, for two given parameters (1) and (2), there are several possible positions (2 positions for d = 2, an infinity of positions for d ≥ 3). Thus, we assume that the next position (among these possible positions) is arbitrarily chosen by an external adversary.\nMore formally, the next position of p is Mp + fx(D)·x + fy(D)·y, where fx and fy are arbitrary functions, and where y is a vector orthogonal to x which is arbitrarily chosen by an external adversary.\nMove to the Middle (MM) algorithm. We finally define one particular algorithm to show some upper bounds. 
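The general one-step rule above (for d = 2) can be sketched as follows; fx and fy are arbitrary functions, and the adversary's choice of the orthogonal vector is modeled by a sign (our own illustrative code, not part of the paper):

```python
import math

def step(mp, mc, fx, fy, adversary_sign=1):
    """Next position of a process at mp whose closest neighbor is at mc:
    Mp + fx(D)·x + fy(D)·y, where x is the unit vector toward the neighbor,
    D the distance, and y one of the two unit vectors orthogonal to x,
    chosen by the adversary via adversary_sign in {+1, -1}."""
    dx, dy = mc[0] - mp[0], mc[1] - mp[1]
    dist = math.hypot(dx, dy)                            # D = d(Mp, MC(p))
    ux, uy = dx / dist, dy / dist                        # unit vector x
    ox, oy = -uy * adversary_sign, ux * adversary_sign   # orthogonal vector y
    return (mp[0] + fx(dist) * ux + fy(dist) * ox,
            mp[1] + fx(dist) * uy + fy(dist) * oy)

# The MM algorithm is the special case fx(D) = D/2, fy(D) = 0:
mm_next = step((0.0, 0.0), (4.0, 0.0), lambda d: d / 2, lambda d: 0.0)
```

Note that in the MM case the adversary's sign has no effect, since fy(D) = 0: the rule needs no global orientation.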
The Move to the Middle (MM) algorithm consists, for each process p and at each step, in moving to the middle of the segment defined by Mp and MC(p).\nMore formally, if d = 1, the MM algorithm is defined by fx(D) = D/2. If d ≥ 2, the MM algorithm is defined by fx(D) = D/2 and fy(D) = 0.\n2.3 Lower bounds\nIn this section, we show the two following results.\n•The gathering problem can be solved if and only if d = 1 or n ≤ 2. When it can be solved, the MM algorithm solves it (Theorem 1).\n•The convergence problem can be solved if and only if d = 1 or n ≤ 5. When it can be solved, the MM algorithm solves it (Theorem 2).\n2.3.1 Gathering problem\nLet us prove Theorem 1.\nLemma 1. If d = 1, the MM algorithm solves the gathering problem.\nProof. Let us show that, if |Ω| ≥ 2, then |Ω| decreases at the next step.\nAs d = 1, let x(K) be the coordinate of point K. Let (K1, K2, ..., Km) be the points of Ω ranked such that x(K1) < x(K2) < ··· < x(Km). ∀i ∈ {1, ..., m}, let xi = x(Ki). Then, according to the MM algorithm, the possible positions at the next step are: (x1+x2)/2, (x2+x3)/2, ..., (xm−1+xm)/2 (at most m − 1 positions). Thus, |Ω| decreases at the next step. Therefore, after at most n − 1 steps, we have |Ω| = 1, and the gathering problem is solved. □\nLemma 2. If d ≥ 2 and n ≥ 3, the gathering problem is impossible to solve.\nProof. First, consider the case d = 2. Consider an initial configuration where Ω contains three distinct points K1, K2 and K3 such that d(K1, K2) = d(K2, K3) = d(K3, K1) = D.\nLet G be the center of gravity of the triangle K1K2K3. Let s(1) = 2, s(2) = 3 and s(3) = 1. ∀i ∈ {1, 2, 3}, let Ai and Bi be the two half-planes delimited by the axis (Ki Ks(i)), such that G belongs to Bi. Let vi be the unit vector orthogonal to (Ki Ks(i)) such that the point Ki + vi belongs to Ai. Let yi = vi if fy(D) ≥ 0, and yi = −vi otherwise.\nLet p be a process, and let i be such that Mp = Ki. 
The external adversary can choose a closest neighbor C(p) and a vector y such that MC(p) = Ks(i) and y = yi. Thus, at the next step, it is always possible that Ω contains three distinct points also forming an equilateral triangle. The choice of the vectors y prevents the particular case where all processes gather at point G. We can repeat this reasoning endlessly. Thus, the gathering problem cannot be solved if d = 2.\nNow, consider the case d > 2. The external adversary can choose the y vectors such that the points of Ω always remain in the same plane, and their behavior is the same as for d = 2. Thus, the gathering problem cannot be solved if d > 2. □\nTheorem 1. The gathering problem can be solved if and only if d = 1 or n ≤ 2. When it can be solved, the MM algorithm solves it.\nProof. If d = 1, according to Lemma 1, the MM algorithm solves the gathering problem. If n = 1, the gathering problem is already solved by definition. If n = 2, the MM algorithm solves the gathering problem in at most one step. Otherwise, if d ≥ 2 and n ≥ 3, according to Lemma 2, the gathering problem cannot be solved. □\n2.3.2 Convergence problem\nLet us prove Theorem 2.\nWe first introduce some definitions. For a given set of points X ⊆ S, let Dmax(X) = max_{{A,B}⊆X} d(A, B). Let Ω(t) be the set Ω at time t. Let dmax(t) = max_{{A,B}⊆Ω(t)} d(A, B) and dmin(t) = min_{{A,B}⊆Ω(t)} d(A, B). Let m(A, B) be the middle of segment [AB]. Let α(K) = √(1 − 1/(4K²)).\nLet R(t) = min_{G∈S} max_{M∈Ω(t)} d(G, M) (the radius of the smallest enclosing ball of all processes’ positions). Let Xi(t) be the smallest ith coordinate of a point of Ω(t). We say that a proposition P(t) is true infinitely often if, for any time t, there exists a time t′ ≥ t such that P(t′) is true.\nLemma 3. If there exists a time t such that |Ω(t)| ≤ 3, the MM algorithm solves the convergence problem.\nProof. If |Ω(t)| = 1, the processes are and remain gathered. 
If |⌦(t)|=2, then\n|⌦(t+1)|=1.\nIf|⌦(t)|=3, consider the following proposition P: there exists t0>tsuch that\n|⌦(t0)|2. If Pis true, the gathering (and thus, convergence) problem is solved.\nNow, consider the case where Pis false.\nLet⌦(t)={A,B,C}. As|⌦(t+1)|=3,⌦(t+1)={m(A,B),m(B,C),m(C,A)}.\nThe center of gravity Gof the triangle formed by the three points of ⌦always\nremains the same, and dmax(t) is divided by two at each step. Thus, 8✏> 0, there\nexists a time Tsuch that the processes are ( G,✏)-gathered 8t\u0000T. ⇤\nLemma 4. Let K \u00001. If R (t)Kdmin(t), then R (t+1)↵(K)R(t).\nProof. If the processes move according to the MM algorithm, then ⌦(t+1)✓S\n{A,B}✓⌦(t){m(A,B)}. Let Gbe such that, 8M2⌦(t),d(G,M)R(t). Let Aand\nBbe two points of Ssuch that d(G,A)=d(G,B)=R(t) and d(A,B)=dmin(t)\n(two such points AandBexist, as dmin(t)2R(t)). Let C=m(A,B). Then,\n8M2⌦(t+1),d(G,M)d(G,C). Thus, R(t+1)d(G,C).\nLetx=d(G,C),y=dmin(t)/2 and z=R(t). Then, z2=x2+y2andx/z=p\n1\u0000(y/z)2. As R(t)Kdmin(t),y/z\u00001/(2K) and x/zp\n1\u00001/(4K2)=↵(K).\nThus, R(t+1)d(G,C)↵(K)R(t). ⇤\nLemma 5. Let A, B, C, D and E be five points (some of them may be identi-\ncal). Let x =d(A,D)/100. Assume d (A,B)x, d(A,C)x, d(A,E)100x\nand d (D,E)\u000040x. Let S ={A,B,C,D,E}and S0=S\n{A,B}✓S{m(A,B)}. Then,\nDmax(S0)0.99Dmax(S).\nProof. Asd(A,D)=100x,Dmax(S)\u0000100x.\nLetM1=m(A,D),M2=m(A,E) and M3=m(D,E). We have d(A,M1)=\n50xandd(A,M2)50x. The maximal value of y=d(A,M3) is reached when\nd(A,D)=d(A,E)=100xandd(D,E)=40x. In this case, with the Pythagorean\ntheorem, we have (100 x)2=y2+(20x)2, and thus y98x.\nThus, max i2{1,2,3}d(A,Mi)98x. Now, suppose that Dmax(S0)>99x. Let\nM4=m(A,B) and M5=m(A,C). This would imply that there exists i2{1,2,3}\nsuch that either d(Mi,M4)>99xord(Mi,M5)>99x, and thus, that either\nd(A,B)>xord(A,C)>x, which is not the case. Thus, Dmax(S0)99x\n0.99Dmax(S). ⇤\nLemma 6. Let t be a given time. 
If n = 5 and |Ω(t)| = 5, then one of the following propositions is true:\n(1) |Ω(t+1)| ≤ 4\n(2) R(t+1) ≤ α(1000)·R(t)\n(3) dmax(t+1) ≤ 0.99·dmax(t)\nProof. Suppose that (1) and (2) are false. According to Lemma 4, (2) being false implies that R(t) > 1000·dmin(t). Let A0 and B0 be two points of Ω(t) such that d(A0, B0) = dmin(t). As |Ω(t+1)| = 5, the processes at A0 and B0 did not both move to m(A0, B0). Therefore, there is a point C of Ω(t) such that d(A0, C) = dmin(t) or d(B0, C) = dmin(t). If d(A0, C) = dmin(t), let A = A0 and B = B0. Otherwise, let A = B0 and B = A0.\nAs R(t) > 1000·dmin(t), there exists a point D0 of Ω(t) such that d(A, D0) ≥ 100·dmin(t). Let E0 be the fifth point of Ω(t). If d(A, D0) ≥ d(A, E0), let D = D0 and E = E0. Otherwise, let D = E0 and E = D0.\nFinally, let x = d(A, D)/100. Thus, we have d(A, B) ≤ x, d(A, C) ≤ x and d(A, E) ≤ 100x. If d(D, E) < 40x, then the processes at positions D and E both move to m(D, E), and |Ω(t+1)| ≤ 4: contradiction. Thus, d(D, E) ≥ 40x. Let S = Ω(t), and let S′ = ∪_{{P,Q}⊆S} {m(P, Q)}. Then, according to Lemma 5, Dmax(S′) ≤ 0.99·Dmax(S).\nAs the processes move according to the MM algorithm, Ω(t+1) ⊆ S′, and dmax(t+1) ≤ Dmax(S′) ≤ 0.99·Dmax(S) = 0.99·dmax(t). Thus, (3) is true.\nTherefore, either (1) or (2) is true, or (3) is true. □\nLemma 7. Let t be a given time. If |Ω(t)| = 4, then one of the following propositions is true:\n(1) |Ω(t+1)| ≤ 3\n(2) R(t+1) ≤ α(1000)·R(t)\n(3) dmax(t+1) ≤ 0.99·dmax(t)\nProof. Suppose that (1) and (2) are false. According to Lemma 4, (2) being false implies that R(t) > 1000·dmin(t). Let A and B be two points of Ω(t) such that d(A, B) = dmin(t).\nAs R(t) > 1000·dmin(t), there exists a point D0 of Ω(t) such that d(A, D0) ≥ 100·dmin(t). Let E0 be the fourth point of Ω(t). If d(A, D0) ≥ d(A, E0), let D = D0 and E = E0. Otherwise, let D = E0 and E = D0.\nLet C = A and x = d(A, D)/100. Thus, we have d(A, B) ≤ x, d(A, C) ≤ x and d(A, E) ≤ 100x. If d(D, E) < 40x, then the processes at D and E (resp. A and B) both move to m(D, E) (resp. m(A, B)), and |Ω(t+1)| ≤ 2: contradiction. 
Thus, d(D, E) ≥ 40x.\nLet S = Ω(t), and let S′ = ∪_{{P,Q}⊆S} {m(P, Q)}. Then, according to Lemma 5, Dmax(S′) ≤ 0.99·Dmax(S).\nAs the processes move according to the MM algorithm, Ω(t+1) ⊆ S′, and dmax(t+1) ≤ Dmax(S′) ≤ 0.99·Dmax(S) = 0.99·dmax(t). Thus, (3) is true.\nTherefore, either (1) or (2) is true, or (3) is true. □\nLemma 8. At any time t, R(t+1) ≤ R(t).\nProof. Suppose the opposite: R(t+1) > R(t). Let G be a point such that, ∀M ∈ Ω(t), d(G, M) ≤ R(t). If, ∀M ∈ Ω(t+1), d(G, M) ≤ R(t), then we do not have R(t+1) > R(t). Thus, there exists a point A of Ω(t+1) such that d(G, A) > R(t). Let B be a previous position of processes now at position A. As the processes at position B moved to A, according to the MM algorithm, there exists a point C of Ω(t) such that A = m(B, C). As d(G, B) ≤ R(t) and d(G, A) > R(t), we have d(G, C) > R(t). Thus, there exists a point C of Ω(t) such that d(G, C) > R(t): contradiction. □\nLemma 9. At any time t, dmax(t+1) ≤ dmax(t).\nProof. Suppose the opposite: dmax(t+1) > dmax(t). Let A and B be two points of Ω(t+1) such that d(A, B) = dmax(t+1). According to the MM algorithm, there exist four points A1, A2, B1 and B2 of Ω(t) such that A = m(A1, A2) and B = m(B1, B2).\nLet L be the line containing A and B. Let A′1 (resp. A′2, B′1 and B′2) be the projection of A1 (resp. A2, B1 and B2) on L. Then, there exist i ∈ {1, 2} and j ∈ {1, 2} such that d(A′i, B′j) ≥ d(A, B). Thus, d(Ai, Bj) ≥ d(A, B) = dmax(t+1) > dmax(t): contradiction. □\nLemma 10. Let n ≤ 5. Let P1(t) (resp. P2(t)) be the following proposition: R(t+1) ≤ α(1000)·R(t) (resp. dmax(t+1) ≤ 0.99·dmax(t)). Let P(t) = P1(t) ∨ P2(t). If, for any time t, |Ω(t)| ≥ 4, then P(t) is true infinitely often.\nProof. Let P* be the following proposition: “|Ω(t)| = 4” is true infinitely often. If P* is false, there exists a time t0 such that ∀t ≥ t0, |Ω(t)| = 5. Thus, the result follows, according to Lemma 6. If P* is true, there exists an infinite set T = {t1, t2, t3, ...} such that ∀t ∈ T, |Ω(t)| = 4. Then, according to Lemma 7, P(t) is true ∀t ∈ T. Thus, the result follows. □\nLemma 11. Let n ≤ 5. 
Suppose that, for any time t, |Ω(t)| ≥ 4. Then, for any time t, there exists a time t′ > t such that R(t′) ≤ α(1000)·R(t).\nProof. Suppose the opposite: there exists a time t0 such that, ∀t > t0, R(t) > α(1000)·R(t0).\nConsider the propositions P1(t) and P2(t) of Lemma 10. Then, ∀t ≥ t0, P1(t) is false. Thus, according to Lemma 10, P2(t) is true infinitely often.\nLet t′ > t0 be such that, between time t0 and time t′, P2(t) is true at least 200 times. According to Lemma 9, for any time t, we have dmax(t+1) ≤ dmax(t). Thus, dmax(t′) ≤ 0.99^200·dmax(t0) ≤ dmax(t0)/4. For any time t, dmax(t) ≥ R(t) and dmax(t) ≤ 2R(t). Thus, R(t′) ≤ R(t0)/2 ≤ α(1000)·R(t0): contradiction. Thus, the result follows. □\nLemma 12. Let G be a point such that, ∀M ∈ Ω(t), d(G, M) ≤ R(t). Then, ∀M ∈ Ω(t+1), d(G, M) ≤ R(t).\nProof. Suppose the opposite: there exists a point K of Ω(t+1) such that d(G, K) > R(t). According to the MM algorithm, there exist two points A and B of Ω(t) such that K = m(A, B). Then, as d(G, K) > R(t), either d(G, A) > R(t) or d(G, B) > R(t): contradiction. Thus, the result follows. □\nLemma 13. ∀i ∈ {1, ..., d} and for any two instants t and t′ > t, |Xi(t′) − Xi(t)| ≤ 2R(t).\nProof. For any point M, let xi(M) be the ith coordinate of M. Let G be a point such as described in Lemma 12. According to Lemma 12, ∀M ∈ Ω(t+1), |xi(M) − xi(G)| ≤ R(t). By induction, ∀t′ > t and ∀M ∈ Ω(t′), |xi(M) − xi(G)| ≤ R(t). In particular, |Xi(t) − xi(G)| ≤ R(t) and |Xi(t′) − xi(G)| ≤ R(t). Thus, |Xi(t′) − Xi(t)| ≤ 2R(t). □\nLemma 14. Let (uk)k be a sequence, let α ∈ ]0, 1[ and let N be an integer. If ∀k ≥ N, |uk+1 − uk| ≤ α^k, then (uk)k converges.\nProof. As α ∈ ]0, 1[, Sα = 1 + α + α² + α³ + ... converges. Let ε > 0. Let K = log(ε/Sα)/log α. Then, α^K·Sα = ε.\nLet k ≥ max(K, N) and let m > k. |um − uk| ≤ Σ_{i=k}^{m−1} |ui+1 − ui| ≤ Σ_{i=k}^{m−1} α^i ≤ α^k·Sα ≤ α^K·Sα = ε.\nThus, (uk)k is a Cauchy sequence and it converges. □\nLemma 15. Let α ∈ ]0, 1[. 
If, for any time t, there exists a time t′ > t such that R(t′) ≤ α·R(t), then the MM algorithm solves the convergence problem.\nProof. Let t0 be an arbitrary time. ∀k ≥ 0, we define tk+1 > tk as the first time such that R(tk+1) ≤ α·R(tk). By induction, ∀k ≥ 0, R(tk) ≤ α^k·R(t0).\nLet i ∈ {1, ..., d}. According to Lemma 13, ∀k ≥ 0, we have |Xi(tk+1) − Xi(tk)| ≤ 2R(tk) ≤ 2α^k·R(t0). ∀k ≥ 0, let uk = Xi(tk)/(2R(t0)). Then, ∀k ≥ 0, |uk+1 − uk| ≤ α^k. According to Lemma 14, the sequence (uk)k converges and so does (Xi(tk))k.\nLet Li be the limit of (Xi(tk))k, and let G be the point of coordinates (L1, L2, ..., Ld). R(tk) decreases exponentially with k. Then, ∀ε > 0, there exists an integer k such that R(tk) < ε/2. According to Lemma 8, ∀t > tk, R(t) ≤ R(tk). Therefore, the processes are (G, ε)-gathered ∀t ≥ tk, and the convergence problem is solved. □\nLemma 16. If d = 1 or n ≤ 5, the MM algorithm solves the convergence problem.\nProof. If d = 1, according to Lemma 1, the MM algorithm solves the gathering problem, and thus the convergence problem. Now, suppose that n ≤ 5.\nSuppose that, for any time t, |Ω(t)| ≥ 4. Then, according to Lemma 11 and Lemma 15, the MM algorithm solves the convergence problem. Otherwise, i.e., if |Ω(t)| ≤ 3 at some time t, then according to Lemma 3, the MM algorithm solves the convergence problem. □\nLemma 17. If d ≥ 2 and n ≥ 6, the convergence problem is impossible to solve.\nProof. Assume the opposite: there exists an algorithm that always solves the convergence problem for d ≥ 2 and n ≥ 6.\nFirst, assume that Ω contains 3 points, as described in the proof of Lemma 2. Consider the infinite execution described in the proof of Lemma 2. 
Let G be the barycenter of these 3 points.\nLet P be the following proposition: there exists a constant D such that the distance between G and any of the 3 points of Ω is at most D.\nIf P is false, then by definition, the convergence problem cannot be solved. We now consider the case where P is true.\nIf P is true, then consider the following case: Ω contains 6 points K1, K2, K3, K4, K5 and K6. K1, K2 and K3 are arranged as described in the proof of Lemma 2, and so are K4, K5 and K6. Let G (resp. G′) be the barycenter of the triangle formed by K1, K2 and K3 (resp. K4, K5 and K6). Assume that d(G, G′) = 10D.\nNow, assume that the points of the two triangles respectively follow the infinite execution described in the proof of Lemma 2. Then, the distance between any point of the first triangle and any point of the second one is always at least 8D, and the convergence problem cannot be solved. □\nTheorem 2. The convergence problem can be solved if and only if d = 1 or n ≤ 5. When it can be solved, the MM algorithm solves it.\nProof. The result follows from Lemma 16 and Lemma 17. □\n2.4 Breaking symmetry\nWe showed that the problems were impossible to solve for n ≥ 6. This is due to particular configurations where a process p has several “closest neighbors” (i.e., |N(p)| > 1). Until now, we assumed that the actual closest neighbor C(p) of p was chosen in N(p) by an external adversary.\nWe now assume that, whenever |N(p)| > 1, C(p) is chosen deterministically, according to an order on the positions of processes. Namely, we assume that there exists an order “<” such that any set of distinct points can be ranked from “smallest” to “largest” (A1 < A2 < A3 < ··· < Ak).\nLet L(p) be the largest element of N(p), that is: ∀q ∈ N(p) − {L(p)}, Mq < ML(p). We now assume that, for any process p, C(p) = L(p). With this new hypothesis, we show the following result: ∀n ≥ 2, the MM algorithm solves the gathering problem in n − 1 steps, and no algorithm can solve the gathering problem in less than n − 1 steps (Theorem 3).\nProof\nLemma 18. 
∀n ≥ 2, no algorithm can solve the gathering problem in less than n − 1 steps.\nProof. Suppose the opposite: there exists an algorithm X solving the gathering problem in less than n − 1 steps.\nFirst, consider a case with two processes, initially at two distinct positions. Then, eventually, the two processes are gathered. Let t be the first time where the two processes are gathered. Let A and B be their positions at time t − 1, and let D = d(A, B). By symmetry, the two processes should move to m(A, B) at time t. Thus, with algorithm X, whenever a process p is such that d(Mp, MC(p)) = D, p moves to m(Mp, MC(p)) at the next step.\nLet K(x) be the point of coordinates (x, 0, 0, ..., 0). Now consider n processes, a set Ω(0) = ∪_{i∈{0,...,n−1}} {K(iD)}, and an order such that, ∀x < y, K(x) < K(y).2\nLet us prove the following property Pk by induction, ∀k ∈ {0, ..., n−1}: Ω(k) = ∪_{i∈{0,...,n−k−1}} {K((i + k/2)D)}.\n•P0 is true, as Ω(0) = ∪_{i∈{0,...,n−1}} {K(iD)}.\n•Suppose that Pk is true for k ∈ {0, ..., n−2}. Then, according to algorithm X, the processes at position K((n−k−1+k/2)D) move to K((n−k−1+(k−1)/2)D), and ∀i ∈ {0, ..., n−k−2}, the processes at position K((i+k/2)D) move to K((i+(k+1)/2)D). Thus, Pk+1 is true.\n2As this is a lower bound proof, our goal here is to exhibit one particular situation where no algorithm can solve the problem in less than n − 1 steps. Thus, we choose a worst-case configuration with a worst-case order.\nTherefore, ∀t ∈ {0, ..., n−2}, |Ω(t)| ≥ 2, and the processes are not gathered: contradiction. Thus, the result follows. □\nWe now assume that the processes move according to the MM algorithm.\nLemma 19. Let p and q be two processes. If there exists a time t where Mp = Mq, then at any time t′ > t, Mp = Mq.\nProof. Consider the configuration at time t. According to our new hypothesis, C(p) = C(q). Let K = m(Mp, MC(p)) = m(Mq, MC(q)). According to the MM algorithm, p and q both move to K. 
Thus, at time t + 1, we still have M_p = M_q. Thus, by induction, the result follows. ∎

Lemma 20. At any time t, if the processes are not gathered, there exist two processes p and q such that M_p ≠ M_q, p = C(q) and q = C(p).

Proof. Let δ = min_{{A,B} ⊆ Ω(t)} d(A, B). Let Z be the set of processes p such that d(M_p, M_{C(p)}) = δ. Let Z′ = ∪_{p ∈ Z} {p, C(p)}.
Let A be the largest position occupied by the processes of Z′, that is: for any other position M occupied by a process of Z′, M < A. Let p be a process of Z′ at position A.
Let q be the largest element of N_p, that is: ∀q′ ∈ N_p − {q}, M_{q′} < M_q. By definition, M_p ≠ M_q. Thus, according to our new hypothesis, q = C(p).
Then, note that p is also the largest element of N_q: ∀p′ ∈ N_q − {p}, M_{p′} < M_p. Thus, p = C(q). Thus, the result follows. ∎

Lemma 21. At any time t, if the processes are not gathered, then |Ω(t + 1)| ≤ |Ω(t)| − 1.

Proof. Let p and q be the processes described in Lemma 20. Let K = m(M_p, M_q). Then, according to Lemma 20, the processes at positions M_p and M_q both move to position K. Let X = Ω(t) − {M_p, M_q}. According to Lemma 19, the processes occupying the positions of X cannot move to more than |X| new positions. Thus, |Ω(t + 1)| is at most |Ω(t)| − 1. Thus, the result follows. ∎

Lemma 22. ∀n ≥ 2, the MM algorithm solves the gathering problem in at most n − 1 steps.

Proof. According to Lemma 21, there exists a time t ≤ n − 1 such that |Ω(t)| = 1. Let A be the only point of Ω(t). Then, according to the MM algorithm, the processes do not move from position A in the following steps. Thus, the result follows. ∎

Theorem 3. ∀n ≥ 2, the MM algorithm solves the gathering problem in n − 1 steps, and no algorithm can solve the gathering problem in less than n − 1 steps.

Proof. The result follows from Lemma 18 and Lemma 22. ∎

2.5 Fault tolerance

We now consider the case of crash failures: some processes may lose the ability to move, without the others knowing it.
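Before introducing faults, the failure-free guarantee of Theorem 3 can be replayed in a short simulation. The sketch below is ours, not part of the paper's formal model: positions are one-dimensional, the order "<" is the usual order on coordinates, and processes sharing a position are treated identically, as Lemma 19 allows.

```python
# MM algorithm on a line, with the deterministic tie-breaking C(p) = L(p):
# among the closest neighbors of p, pick the largest one for the usual order
# on coordinates. Co-located processes behave identically (Lemma 19), so they
# are simply excluded from the neighbor search.
def mm_step(points):
    new = []
    for p in points:
        others = [q for q in points if q != p]
        d = min(abs(q - p) for q in others)
        cp = max(q for q in others if abs(q - p) == d)  # C(p) = L(p)
        new.append((p + cp) / 2)                        # move to the midpoint
    return new

def steps_to_gather(points):
    """Number of MM steps until all processes occupy a single position."""
    t = 0
    while len(set(points)) > 1:
        points = mm_step(points)
        t += 1
    return t

# Worst case of Lemma 18: n equidistant processes gather in exactly n - 1 steps.
print(steps_to_gather([float(i) for i in range(6)]))  # 5
```

Midpoints of integer coordinates are dyadic rationals, so the floating-point comparisons above are exact for this example.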
Let C ⊆ P be the set of crashed processes (the other processes are called "correct"), and let S_c = ∪_{p ∈ C} {M_p} (i.e., the set of positions occupied by crashed processes). Let f = |S_c|.
We prove the two following results.
• The gathering problem can only be solved when f = 0 (Theorem 4).
• The convergence problem can be solved if and only if f ≤ 1. When f ≤ 1, the MM algorithm solves it (Theorem 5).

Proof

We say that a process p is attracted if there exists a sequence of processes (p_1, ..., p_m) such that p = p_1, p_m ∈ C, and ∀i ∈ {1, ..., m − 1}, C(p_i) = p_{i+1}. A loop is a sequence of correct processes (p_1, ..., p_m) such that C(p_m) = p_1 and, ∀i ∈ {1, ..., m − 1}, C(p_i) = p_{i+1}. A pair is a loop with 2 processes. Let Ω′ = ∪_{p ∈ P − C} {M_p} (i.e., the set of positions occupied by correct processes). Let Ω′(t) be the state of Ω′ at time t.

Lemma 23. Consider an algorithm for which there exists w such that f_x(w) = w and f_y(w) = 0. Then, this algorithm can solve neither the gathering nor the convergence problem.

Proof. Assume the opposite. Consider a situation where Ω = {A, B}, with d(A, B) = w. Then, according to the algorithm, the processes at positions A and B switch their positions endlessly, and neither converge nor gather: contradiction. Thus, the result follows. ∎

Theorem 4. The gathering problem can only be solved when f = 0.

Proof. If f ≥ 2, by definition, the processes cannot be gathered. Now, suppose f = 1.
Suppose the opposite of the claim: there exists an algorithm solving the gathering problem when f = 1. Let P be the following proposition: there exist two distinct points A and B such that all crashed processes are in position A, and all correct processes are in position B.
Consider an initial configuration where P is true. As the algorithm solves the gathering problem, according to Lemma 23, the next position of the correct processes cannot be A.
Thus, P is still true at the next time step, with a different point B. Therefore, by induction, P is always true, and the processes are never gathered: contradiction. Thus, the result follows. ∎

Lemma 24. If there exists a process p which is not attracted, then there exists a loop.

Proof. Suppose the opposite: there is no loop. Let p_1 = p. ∀i ∈ {1, ..., n}, let p_{i+1} = C(p_i). We prove the following property P_i by induction, ∀i ∈ {1, ..., n + 1}: (p_1, ..., p_i) are i distinct processes.
• P_1 is true.
• Suppose that P_i is true for some i ∈ {1, ..., n}. As there is no loop, we cannot have p_{i+1} ∈ {p_1, ..., p_i}. Thus, P_{i+1} is true.
Thus, P_{n+1} is true, and there are n + 1 distinct processes: contradiction. Thus, the result follows. ∎

Lemma 25. All loops are pairs.

Proof. Let (p_1, ..., p_m) be a loop. Let δ = min_{i ∈ {1,...,m}} d(M_{p_i}, M_{C(p_i)}). Let Z be the set of processes p_i of {p_1, ..., p_m} such that d(M_{p_i}, M_{C(p_i)}) = δ. Let Z′ = ∪_{p ∈ Z} {p, C(p)}.
Let p be the process of Z′ such that, ∀q ∈ Z′ with M_p ≠ M_q, M_p > M_q. Let q = C(p). As C(q) is a closest neighbor of q, C(q) ∈ Z′. Then, according to the definition of p, C(q) = p.
Therefore, (p_1, ..., p_m) is either (p, q) or (q, p). Thus, the result follows. ∎

Lemma 26. If there exists a pair, then |Ω′(t + 1)| ≤ |Ω′(t)| − 1.

Proof. According to the algorithm, two processes at the same position at time t are at the same position at time t + 1. Let (p, q) be a pair. Then, according to the algorithm, the processes at positions M_p and M_q move to m(M_p, M_q), and |Ω′(t + 1)| ≤ |Ω′(t)| − 1. ∎

Lemma 27. There exists a time t_A such that, for any time t ≥ t_A, all correct processes are attracted.

Proof. Suppose the opposite. Then, after a finite number of time steps, at least one correct process is not attracted. Thus, according to Lemma 24, there exists a loop. According to Lemma 25, this loop is a pair. Then, according to Lemma 26, |Ω′| decreases.
We can repeat this reasoning n + 1 times, and we then have |Ω′| < 0: contradiction. Thus, the result follows. ∎

Lemma 28.
Suppose f = 1. Let p be an attracted process, and let L be the distance between p and the crashed processes. Then, d(M_p, M_{C(p)}) ≥ L/n.

Proof. Suppose the opposite: d(M_p, M_{C(p)}) < L/n. As p is attracted, there exists a sequence of processes (p_1, ..., p_m) such that p = p_1, p_m ∈ C, and ∀i ∈ {1, ..., m − 1}, C(p_i) = p_{i+1}.
∀i ∈ {1, ..., m − 2}, we have d(M_{p_i}, M_{p_{i+1}}) ≥ d(M_{p_{i+1}}, M_{p_{i+2}}). Indeed, suppose the opposite. Then, C(p_{i+1}) = p_{i+2}, d(M_{p_{i+1}}, M_{C(p_{i+1})}) > d(M_{p_{i+1}}, M_{p_i}), and C(p_{i+1}) is not a closest neighbor of p_{i+1}: contradiction. Thus, d(M_{p_i}, M_{p_{i+1}}) ≥ d(M_{p_{i+1}}, M_{p_{i+2}}).
Thus, ∀i ∈ {1, ..., m − 1}, d(M_{p_i}, M_{p_{i+1}}) < L/n. Therefore, d(M_{p_1}, M_{p_m}) ≤ (m − 1)L/n < L: contradiction. Thus, the result follows. ∎

Lemma 29. Let f = 1, and let X be the position of the crashed processes. Let L = max_{p ∈ P} d(X, M_p). Let L(t) be the value of L at time t. Suppose that all correct processes are attracted. Then, for any time t, L(t + 1) ≤ k(n)L(t), where k(n) = √(1 − 1/(2n)²).

Proof. At time t + 1, let p be a process such that d(X, M_p) = L(t + 1). Let K be the position of p at t + 1. Then, according to the algorithm, at time t, there exist two processes q and r at positions A and B such that K = m(A, B).
Let L′ = max(d(X, A), d(X, B)). Let q′ ∈ {q, r} be such that d(X, M_{q′}) = L′. Then, according to Lemma 28, d(M_{q′}, M_{C(q′)}) ≥ L′/n. Let r′ be the other process of {q, r}. Then, the position of r′ maximizing d(X, K) is such that d(X, M_{r′}) = L′.
Therefore, according to the Pythagorean theorem, (L(t + 1))² is at most L′² − (L′/(2n))², and L(t + 1) ≤ k(n)L′ ≤ k(n)L(t). Thus, the result follows. ∎

Lemma 30. If f = 1, the MM algorithm solves the convergence problem.

Proof. According to Lemma 27, there exists a time t_A after which all correct processes are attracted. We now suppose that t ≥ t_A. Let ε > 0. Let X be the position of the crashed processes, and let L = max_{p ∈ P} d(X, M_p). As k(n) = √(1 − 1/(2n)²) < 1, let M be such that k(n)^M L < ε. Then, according to Lemma 29, at time t_A + M, all processes are at distance at most ε from X. Thus, the result follows.
∎

Theorem 5. The convergence problem can be solved if and only if f ≤ 1. When f ≤ 1, the MM algorithm solves it.

Proof. When f ≥ 2, there exist crashed processes occupying at least two distinct positions, and they stay at these positions forever. Thus, the convergence problem cannot be solved.
When f ≤ 1, according to Lemma 30, the MM algorithm solves the convergence problem. Thus, the result follows. ∎

2.6 Future works

This first work can be the basis for many extensions. For instance, we could consider a more general scheduler (e.g., asynchronous). We could investigate how resilient this model is to crash or Byzantine failures. We could also consider the case of voluminous processes, which cannot be reduced to a single geometrical point.

3 Learning to gather

In Section 3.1, we give a state of the art of reinforcement learning in multi-agent systems w.r.t. the gathering problem. In Section 3.2, we present the Q-learning technique (with eligibility trace), then a precise formulation of the gathering problem in a Q-learning framework. In particular, we describe which states and actions are used to model the gathering problem in Q-learning. In Section 3.3, we make explicit the numerical parameters used to implement our model. For pedagogical reasons, we first present results for a default setting; then, we show that the learned behaviors can be reused with more agents.

3.1 State of the art

Reinforcement learning [73, 46] consists in taking simple feedback from the environment to guide learning. The general idea is to associate rewards and penalties to past situations in order to learn how to act in future ones. The principle differs from that of supervised learning [42, 46] by the nature of the feedback. In supervised learning, an agent is taught how to perform precisely on several examples. In reinforcement learning, the agent only gets an appreciation feedback from the environment.
For instance, in dog training, dogs are rewarded when doing correct actions and punished when behaving badly. The advantage here is the possibility to have a feedback in situations where the correct behavior is unknown. Several successful AI approaches use reinforcement learning, one spectacular example being the performance of AlphaGo [70] defeating the world Go champion Lee Sedol.
So far, reinforcement learning has mainly been used in situations with only one learning agent (single-agent systems), with important results [44, 38, 43, 48, 60, 68].
Multi-agent systems involve numerous independent agents interacting with each other. Many works on multi-agent reinforcement learning consider problems where only 2 or 3 agents are involved [10, 16, 27, 59, 65, 76, 80]. Some deal with competitive games (e.g., zero-sum games) [1], where agents are rewarded at the expense of others. Others tackle collaborative problems, but the reward is global and centralized [75]. The algorithm proposed in [21] achieves convergence, safety and targeted optimality against memory-bounded adversaries in arbitrary repeated games. [63] presents the first general framework for intelligent agents to learn to teach in a multi-agent environment.
The domain of evolutionary robotics [36] studies how the behavior of agents can evolve through "natural selection" mechanisms, with [18] or without [56] communication. In this paper, we focus on behaviors that can be learned "within a lifetime", through rewards and punishments.
In general, communication mechanisms are used to share information among agents [17, 51, 55, 62, 67, 69, 81] in order to increase the learning speed. Still, in some cases, communication between independent agents is difficult, impossible, or at least very costly [79, 9].
In these situations, it might be useful to devise a learning process that does not rely on communication.
Yet, so far, very few approaches have considered a genuinely distributed setting where each agent is rewarded individually, and where agents do not communicate. In [61], the problem and the constraints are similar to our work, but the rewards are given for taking an action instead of reaching a state. Consequently, the final behavior is predetermined by the model itself. In [19], even if the constraints are similar (cooperative task, no communication and individual rewards), the problem tackled is fundamentally different: the task only requires the cooperation of agents by groups of two (not of all agents simultaneously).

3.2 Model

3.2.1 Q-learning

As recalled in the previous section, the goal of reinforcement learning is to make agents learn a behavior from reward-based feedback. In this paper, we work with a widely used reinforcement learning technique called Q-learning [77, 73, 80, 51, 20, 38]. More specifically, we use Q-learning with eligibility trace [58, 73], as explained in what follows.
Q-learning was initially devised for single-agent problems. Here, we consider a multi-agent system where each agent has its own learning process. We describe in the following the learning model of one agent taken independently.
Let A be a set of actions, and let S be a set of states (representing all the situations in which the agent can be). The sets A and S contain a finite number of elements. In each state s, the agent may choose between different actions a ∈ A. Each action a leads to a state s′, in which the agent receives either a positive reward, a negative reward or no reward at all. The objective of Q-learning is to compute the cumulative expected reward for visiting a given state.
Intuitively, this is materialized by the fact that learning, in Q-learning, is all about updating the Q-value using the mismatch between the previous Q-value and the observed reward.
Let π : S → A be the policy function of an agent, i.e., a function returning an action to take in each state.
Let X^{π,s_0}_t be the state in which the agent is after t steps, starting from state s_0 and following the policy π. In particular, X^{π,s}_0 = s.
Let r : S → ℝ be the reward function associating a reward to each state.
The cumulative expected reward over a period I = {0, ..., N} of state s is

    Σ_{t ∈ I} E(γ^t r(X^{π,s}_t))

where γ ∈ [0, 1] is a discount parameter modulating the importance of long-term rewards. The long-term rewards become more and more important as γ gets close to 1.
When predicting the best transition from one state to another (by taking a given action) is difficult or impossible, it is useful to compute a cumulative expected reward of a couple (s, a).
Under the assumption that each couple (s, a) is visited an infinite number of times, it is possible, following the law of large numbers, to estimate without bias the expected cumulative reward by sampling [77], i.e., by trying state-action couples and building an estimator of the expected reward. We denote this estimator Q(s, a), and call it the Q-value of the state-action couple (s, a). The following formula is the usual update rule to compute an estimator of the Q-value:

    Q_{t+1}(s, a) = (1 − η) Q_t(s, a) + η (r(X_{t+1}) + γ max_{a′} Q_t(X_{t+1}, a′))

if action a is taken in state s at step t, and

    Q_{t+1}(u, b) = Q_t(u, b)

for every other couple (u, b).
Here, η is a parameter called the learning rate, which modulates the importance of new rewards over old knowledge. Q_t is the estimate of the cumulative expected reward after t samples.
A complementary approach to get better estimations of Q-values with fewer samples is to use an eligibility trace [58, 73].
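As an illustration, the basic update rule above (without eligibility trace yet) can be sketched in a few lines of code. The dictionary-based Q-table and the toy numerical values are our own illustrative choices, not taken from the paper:

```python
# Tabular Q-learning update: a sketch of the rule above.
# Q[s][a] estimates the cumulative expected reward of the couple (s, a).
ETA = 0.1     # learning rate (eta)
GAMMA = 0.95  # discount factor (gamma)

def q_update(Q, s, a, reward, s_next):
    """Q_{t+1}(s,a) = (1-eta) Q_t(s,a) + eta (r + gamma max_a' Q_t(s',a'))."""
    best_next = max(Q[s_next].values())  # max over a' of Q_t(s', a')
    Q[s][a] = (1 - ETA) * Q[s][a] + ETA * (reward + GAMMA * best_next)

# Toy usage with two states and two actions:
Q = {s: {a: 0.0 for a in ("left", "right")} for s in (0, 1)}
q_update(Q, 0, "left", reward=100, s_next=1)
print(round(Q[0]["left"], 2))  # 10.0: one step of learning from a reward of 100
```

All other couples are left untouched by the call, matching the second line of the update rule.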
The idea is to keep trace of older couples (s, a) until a reward is given, and to propagate a discounted reward to the couples (s, a) that led to the reward several steps later. Formally, a value (eligibility) e_t(s) is attributed to each state s. e is initialized at e_0(s) = 0 for every state s, then updated as follows:

    if s_t = s,   e_{t+1}(s) = γλ e_t(s) + 1;
    otherwise,    e_{t+1}(s) = γλ e_t(s),

where λ ∈ [0, 1] is the trace-decay parameter. Using the eligibility trace, Q-values are updated by scaling the update rule described above with the eligibility values. The factor γλ used in the update of the eligibility acts as a discount in time: older visited states get less reward than recently visited states.
In addition to update rules for learning, we need a policy for choosing actions. An ε-greedy policy π is a stochastic policy such that: (1) with probability (1 − ε), π(s) = a, where (s, a) yields the highest expected cumulative reward from state s, and (2) with probability ε, a random action is chosen in A. The parameter ε is called the exploration rate and modulates the trade-off between exploration of new and unknown states (to obtain new information) and exploitation of current information (to sample valuable states more precisely and thus be rewarded).

3.2.2 Setting

We consider a ring topology. This is a simple topology for a bounded space that avoids unrealistic border effects (i.e., no need to "manually" replace an agent in the middle of the state space when the agent reaches the border, as would happen with a square, for example). There are n positions {0, ..., n − 1}. ∀k ∈ {0, ..., n − 2}, positions k and k + 1 are adjacent, and positions n − 1 and 0 are also adjacent. Each agent has a given position on the ring. This space has only one dimension, but our results may be extended to higher-dimension spaces by applying the approach independently on each dimension.
The time is divided into discrete steps 1, 2, 3, ....
At the beginning of a given step t, an agent is at a given position. The possible actions are: go left (i.e., increase position), go right (i.e., decrease position) or do not move.
The current state of each agent is determined by the relative positions of the other agents. However, we cannot associate a state to each combination of positions of the other agents, because of "combinatorial explosion". Thus, in order to limit the maximal number of states, each agent perceives an approximation of the positions of the other agents. Besides, a state must not depend on the number of agents, in order to have a scalable model and to tolerate the loss of agents.
Thus, our state model is the following. The space is divided into groups of close positions called sectors. Each agent does not perceive the exact number of agents per sector, but the fraction of the total population in each sector. A state is given by the knowledge of the fractions of the total population in each sector with a precision of 10% (i.e., the possible values are multiples of 10%, rounded so that the sum of the fractions equals 100%). The choice of a 10% precision here is an arbitrary value chosen to reduce the computational cost; this value could be optimized as a hyper-parameter.
The delimitation of the sectors is not absolute but relative to the position of each agent: each agent has its own sector delimitation centered around itself.

Figure 1: Default sector delimitation (on a ring of size 13). The Central sector contains 3 positions centered around 0 (i.e., the position of the current agent). Near sectors contain two positions each, adjacent to the Central sector. Same for Far sectors and the Opposite sector.

This delimitation is set to 6 sectors (as for the precision value of 10% described above, this choice can be left as a hyper-parameter, but optimizing it is out of the scope of this work).
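As an illustration, the sector delimitation of Figure 1 and the 10% rounding can be encoded as follows. This is a sketch under our own conventions: the function names are ours, and the renormalization ensuring that the fractions sum exactly to 100% is omitted.

```python
# Sector encoding for a ring of size 13 (Figure 1): offsets relative to the
# agent are mapped to 6 sectors, and per-sector population fractions are
# rounded to the nearest 10% to form the (finite) state.
RING = 13

def sector(offset):
    off = (offset + 6) % RING - 6          # normalize the offset to -6..6
    if abs(off) <= 1:  return "Central"    # 3 positions: -1, 0, 1
    if abs(off) <= 3:  return "NearLeft" if off < 0 else "NearRight"
    if abs(off) <= 5:  return "FarLeft" if off < 0 else "FarRight"
    return "Opposite"                      # the 2 positions at distance 6

def state(me, positions):
    """Fractions of the population per sector, rounded to multiples of 10%."""
    counts = {}
    for pos in positions:
        sec = sector(pos - me)
        counts[sec] = counts.get(sec, 0) + 1
    n = len(positions)
    return tuple(sorted((s, round(10 * c / n) * 10) for s, c in counts.items()))

print(state(0, [0, 1, 12, 6]))  # 3 of 4 agents in Central, 1 in Opposite
```

The sector sizes (3 + 2·2 + 2·2 + 2 = 13 positions) match the caption of Figure 1.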
The first sector is centered around the agent position (its size corresponds to the size of the neighborhood where we expect the other agents to gather). This sector is the Central sector. The agents in the central sector of a given agent are called its neighbors. Two more sectors are adjacent to the central sector, the Near Right and Near Left sectors. The Far Left and Far Right sectors form a second layer after the near sectors. Finally, the Opposite sector is the sector diametrically opposed to the Central one. The exact size of each sector is a parameter of the problem, as well as the number of agents and the number of positions.
An example of sector delimitation is given in Figure 1, for a ring of size 13.

3.2.3 Rewards

Each agent is rewarded if it has a large enough number of neighbors (i.e., more than a certain fraction of the total population is in its central sector). Each agent is penalized if it does not have enough neighbors (i.e., less than a certain fraction of the population is in its central sector).

3.2.4 Learning process

The learning phase is organized as follows:
• The initial positions of the agents are random, following a uniform distribution.
• At each step, each agent decides where to go with an ε-greedy policy.
• When all the decisions are taken, all the agents move simultaneously.
• After moving, they consider their environment, get rewards and update their Q-values with respect to these rewards.
• The learning phase is subdivided into cycles of several steps. At the end of each cycle, the positions of the agents are reset to random positions. This ensures that the environment is diverse enough to learn a robust behavior. After the position reset, the agents can move again for another cycle.
The duration of a cycle is set proportional to the size of the ring (e.g., 5 times the size of the ring) in order to give enough time to the agents to gather: this time depends on the distance they have to travel, and this distance depends on the size of the ring.
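Putting the pieces together, one learning cycle can be sketched as follows. This is our own sketch, not the paper's implementation: the trace-decay value λ = 0.9 and the helper functions `state_of` and `reward_of` (left abstract here) are illustrative assumptions; the other parameter names follow Section 3.2.1.

```python
import random

RING, N_AGENTS = 13, 10
CYCLE_LEN = 5 * RING                    # cycle duration ~ 5 x ring size
ETA, GAMMA, LAMBDA, EPSILON = 0.1, 0.95, 0.9, 0.1   # LAMBDA is illustrative
ACTIONS = (-1, 0, +1)                   # go left / do not move / go right

def choose(q, s, rng):
    """Epsilon-greedy: explore with probability EPSILON, else exploit."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((s, a), 0.0))

def run_cycle(Q, state_of, reward_of, rng):
    """One cycle: random position reset, then CYCLE_LEN simultaneous steps."""
    pos = [rng.randrange(RING) for _ in range(N_AGENTS)]   # random reset
    trace = [dict() for _ in range(N_AGENTS)]              # eligibility traces
    for _ in range(CYCLE_LEN):
        states = [state_of(i, pos) for i in range(N_AGENTS)]
        acts = [choose(Q[i], states[i], rng) for i in range(N_AGENTS)]
        pos = [(p + a) % RING for p, a in zip(pos, acts)]  # move simultaneously
        for i in range(N_AGENTS):
            s2, r = state_of(i, pos), reward_of(i, pos)
            best = max(Q[i].get((s2, a), 0.0) for a in ACTIONS)
            delta = r + GAMMA * best - Q[i].get((states[i], acts[i]), 0.0)
            for k in trace[i]:                             # discount old couples
                trace[i][k] *= GAMMA * LAMBDA
            trace[i][(states[i], acts[i])] = trace[i].get((states[i], acts[i]), 0.0) + 1
            for k, e in trace[i].items():                  # scaled update rule
                Q[i][k] = Q[i].get(k, 0.0) + ETA * delta * e
            if r != 0:
                trace[i] = {}   # traces are reset whenever a reward is given
    return pos
```

Each agent owns its Q-table `Q[i]` and its trace `trace[i]`; no information is shared between agents, which matches the communication-free setting of the paper.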
To update Q-values, Q-learning with eligibility traces is used. Eligibility traces are reset at the end of each cycle, and each time a reward is given to an agent.

3.2.5 Problem

Intuitively, the goal is to make the agents learn a gathering behavior, that is: within a reasonable time in a same cycle, the agents become (and remain) reasonably close to each other. This criterion is deliberately informal, and its satisfaction will be evaluated with several metrics in the next section.
More precisely, the problem consists in computing, for each agent, a value Q(s, a) for each state-action couple (s, a). This value indicates which action a to take in state s in order to increase the likelihood of obtaining a reward. For instance, the description given in Section 3.2.1 states that, with probability 1 − ε, action a is chosen if it maximizes Q(s, a) among all possible actions from state s.
Our objective is to verify experimentally that the Q-values learned in this fashion lead to an efficient gathering of the agents, i.e., that reinforcement learning, with rewards being given to actions that improve an agent's neighborhood situation, leads to efficient gathering behaviors at the level of the group.

Figure 2: Time needed to form a group from random initial positions for 10 agents on a ring of size 13. Each point is the mean over 5 cycles of the average time to form a group (in the following, we simply say "average over X cycles").

3.3 Results

We consider a ring of size 13, with a sector division as described in Figure 1. A group exists if at least one agent has more than 80% of the population as neighbors. An agent is given a reward of value 100 if the fraction of neighbors is more than 80% of the population, and a penalty of value −5 if it is 10% or less.
The exploration rate is ε = 0.1, the learning rate is η = 0.1 and the discount factor is γ = 0.95.
The duration of a cycle is 65 steps (around 5 times the size of the ring), and the duration of the learning phase is 5000 cycles.

3.3.1 Results for 10 agents

We first consider a population of 10 agents. To assess the quality of the learned behavior, we compute several metrics. We first consider the time needed to form a group from random initial positions, and see how it evolves during the learning phase. Then, to ensure that groups are not only formed but also maintained, we observe the evolution of the number of neighbors among the population. To evaluate the learning qualitatively, we look at the exact behavior of agents at the beginning, middle and end of the learning phase.

Figure 3: Maximum and minimum number of neighbors, in percent of the total population, during learning, for 10 agents on a ring of size 13. Maximum is black squares and minimum is white triangles. The dashed line is the minimum number of neighbors needed to be considered in the group: 80% of the total number of agents. Each point is an average over 325 steps, including time before creation of the first group.

Figure 4: Evolution over time of the number of neighbors at each position of the ring during a cycle. Larger dots represent a higher number of neighbors. Positions where agents are considered to be in the group are in black, others in white.

Figure 5: Time needed to form a group from random initial positions for 10 agents on a ring of size 13. Each point is an average over 75 cycles. The learning phase is 75 000 cycles long.

Figure 6: Maximum and minimum number of neighbors, during learning, for 10 agents on a ring of size 13. Maximum is black squares and minimum is white triangles. The dashed line is the minimum number of neighbors needed to be considered in the group (i.e., 80% of the total number of agents). Each point is an average over 75 cycles (4875 steps), including time before creation of the first group. The learning phase is 75 000 cycles long.
Finally, we study the impact of a longer learning phase.

Time to form a group. Figure 2 shows the time that the agents need to gather and form the first group (i.e., at least one agent is rewarded), starting from random initial positions. We observe that this time decreases during the learning phase and stabilizes around 10 steps.

Number of neighbors. Figure 3 shows the minimal and maximal numbers of neighbors over all agents. When the maximal number of neighbors is above 80%, it means that a group exists. When the minimal number of neighbors is above 10%, it means that no agent is isolated; when it is above 80%, it means that all agents are in the group. We observe that the agents learn not only to gather, but also to maintain the group and avoid being isolated. Indeed, the maximum number of neighbors is higher than 80% of the total number of agents, and the minimum is higher than 10%. We also observe that the minimum number of neighbors is close to 80% at the end of the learning phase. It means that even the agents that are not always in the group are often in it.
Note that these average values include the iterations starting from the beginning of each cycle, where the agents are not yet gathered (i.e., around 10 iterations at the end of the learning phase).

Qualitative evolution. Figure 4 contains three plots that show the qualitative evolution of the learning for three cycles, at the beginning, middle and end of the learning phase.
In the first figure (beginning of the learning phase), we observe that the agents are quite uniformly distributed: the circles are white and small, indicating few neighbors and no significant group formation.
In the second figure (middle of the learning phase), we observe that the agents converge to a same position, forming a group in approximately 10 steps. The large black circles indicate that at least 80% of the total number of agents are neighbors of the position, i.e., that a group exists.
We can see that this group is maintained after its formation until the end of the cycle. We also observe that the group itself is slowly moving during the cycle, while being maintained. We notice that there are very few agents outside the group after its formation.
In the third figure (end of the learning phase), we observe that the agents still converge to form a group, but the group is formed earlier than before (around 7 steps). The group is still maintained and still moves during the cycle. We can notice even fewer agents outside the group than before.

Longer learning phase. We finally study the impact of a longer learning phase: 75 000 cycles instead of 5000.
Figure 5 is the equivalent of Figure 2 for a longer learning phase. At the end of the learning, the agents gather faster (around 5 steps) and are less often outside of the group.
Figure 6 is the equivalent of Figure 3 for a longer learning phase. We observe that the minimum number of neighbors goes above 80%, which means that all the agents are in the group most of the time.

3.3.2 Scalability and comparison with a hardcoded algorithm

In this section, we explore the scalability and robustness properties of the aforementioned learning scheme. We show that the agents that have learned Q-values with default parameters in 75 000 cycles are able to gather with more agents without any new learning: we can take several agents that have learned in groups of 10 until we obtain a group of 100.
In a second step, we compare this behavior with a hardcoded gathering algorithm (i.e., an algorithm whose behavior is written in advance and not learned).
• First, we compare the learned behavior to an algorithm that uses the exact and absolute positions of all the agents (by opposition to the relative positions and approximations used during learning). With this algorithm, the agents always move towards the barycenter [22, 64] of all the agents.
As this algorithm has an exact view on the environment, its performances are 50% better.
• We then make a fairer and more meaningful comparison with an algorithm that uses the same perceptions as the learning algorithm. With an equally constrained perception of the environment, we get results that are similar to those of the learned algorithm (the learned algorithm is even slightly better in terms of "time to form a group"). We thus show that, even with a relatively simple learning scheme, we can reach the same performances as a hardcoded behavior.
Note that, since the agents have already learned a behavior, there is no more "progression" visible on the plots.

Figure 7: Time needed to form a group from random initial positions for 100 agents on a ring of size 13 (hardcoded algorithm). Average is 5.4 steps, median is 5.0 steps and standard deviation is 0.6.

Figure 8: Maximum and minimum number of neighbors for 100 agents on a ring of size 13 (hardcoded algorithm). Maximum is black squares and minimum is white triangles. The dashed line is the minimum number of neighbors needed to be considered in the group. Each point is an average over a cycle (65 steps). Average is 90.6%, median is 91.1% and standard deviation is 6.1% for the minimum number of neighbors. Average is 96.3%, median is 96.3% and standard deviation is 0.7% for the maximum number of neighbors.

Figure 9: Time needed to form a group from random initial positions for 100 agents on a ring of size 13 (learned behavior). Average is 10.4 steps, median is 10.0 steps and standard deviation is 5.1.

Figure 10: Time needed to form a group from random initial positions for 100 agents on a ring of size 13 (Q-hardcoded algorithm). Average is 12.1 steps, median is 11.0 steps and standard deviation is 4.9.

Figure 11: Maximum and minimum number of neighbors for 100 agents on a ring of size 13 (learned behavior). Maximum is black squares and minimum is white triangles.
The dashed line is the minimum number of neighbors needed to be considered in the group. Each point is an average over a cycle (65 steps). Average is 40.4%, median is 16.3% and standard deviation is 31.0% for the minimum number of neighbors. Average is 87.1%, median is 86.0% and standard deviation is 5.1% for the maximum number of neighbors.

Figure 12: Maximum and minimum number of neighbors for 100 agents on a ring of size 13 (Q-hardcoded algorithm). Maximum is black squares and minimum is white triangles. The dashed line is the minimum number of neighbors needed to be considered in the group. Each point is an average over a cycle (65 steps). Average is 27.8%, median is 27.8% and standard deviation is 2.3% for the minimum number of neighbors. Average is 79.9%, median is 80.0% and standard deviation is 2.3% for the maximum number of neighbors.

Time to create a group for 100 agents. On Figure 9, we can see the time needed to form a group for 100 agents on a ring of size 13. Compared to the case with 10 agents, the time needed to form a group including 80% of the population is higher (around 10 steps on average). But the agents are still able to gather in a short time (the worst case is no more than 50 steps) most of the time: 997 times over 1000.

Number of neighbors for 100 agents. On Figure 11, we observe that the maximum number of neighbors is higher than 80% most of the time, which means that a group exists most of the time. We also observe that the minimum number of neighbors is often low. This means that a few agents, even if not isolated, are unable to join the main group.

Performances of the hardcoded algorithm. On Figures 7³ and 8, we can observe that the hardcoded algorithm is better than the learned behavior. On average, the agents gather in 5 steps with a standard deviation of 0.6. Moreover, the maximum and minimum numbers of neighbors are very high (average: resp. 96% and 91%).
However, these good results are only possible because this algorithm uses the exact and absolute positions of the other agents.

Fairer comparison. To make a fairer comparison between the hardcoded algorithm and the learned behavior, we impose on the hardcoded algorithm the same constraints that were imposed on the learning algorithm: relative positions, sector approximation and action choice via Q-values. To do so, we compute Q-values with the help of the hardcoded algorithm. Each agent decides how to act according to the hardcoded algorithm, and Q-values are computed along the sequence of actions determined by it. This allows each agent to compute Q-values for pairs (s, a) of states and actions. We call this the Q-hardcoded algorithm: the desired behavior is known in advance, but we impose on the agents the same perception constraints as for the learned behavior.

In Figure 10, we observe that the time needed to form a group has the same distribution as for the learned behavior in Figure 9. The average time is even slightly better for the learned behavior (10 steps) than for the Q-hardcoded algorithm (12 steps). However, the standard deviation is slightly higher for the learned behavior (5.1) than for the Q-hardcoded algorithm (4.9).

In Figures 11 and 12, we represent the distribution of the number of neighbors. Here again, we observe that the distribution is better for the learned behavior (Figure 11) than for the Q-hardcoded algorithm (Figure 12): the average maximum number of neighbors is better (87% versus 80%), as is the average minimum number of neighbors (40% versus 28%)⁴.

³Note that the figures are intentionally numbered to keep Figures 9 and 10 (resp. 11 and 12) side by side, in order to have a clearer comparison between these figures.
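The Q-hardcoded construction described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's actual code: the `policy` and `env_step` functions, the set of actions, and the state encoding are hypothetical placeholders standing in for the hardcoded rule, the ring environment, and the sector-based perception. The key point it shows is that actions come from the fixed hand-coded policy while a standard one-step Q-learning backup fills in the value table.

```python
import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)  # hypothetical moves: one step left, stay, one step right


def q_values_from_policy(policy, env_step, episodes, steps,
                         alpha=0.1, gamma=0.9):
    """Estimate Q(s, a) along trajectories whose actions are chosen by a
    fixed hand-coded `policy`, not by argmax over Q. The behavior is known
    in advance; only the value table is filled in, over the same
    (approximate) states a learning agent would use."""
    q = defaultdict(float)
    for _ in range(episodes):
        s = random.randrange(13)  # hypothetical initial state on a ring of 13
        for _ in range(steps):
            a = policy(s)               # hardcoded decision, not greedy in Q
            s2, r = env_step(s, a)      # environment transition and reward
            # one-step backup toward r + gamma * max_b Q(s2, b)
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

With a toy policy that walks toward a goal position, the table ends up assigning high value to the state-action pairs the hardcoded rule actually uses, which is exactly what lets the Q-table reproduce the hardcoded behavior under the constrained perception.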
However, the distribution of the number of neighbors is more dispersed for the learned behavior.

3.4 Future works

In order to extend this work, it might be interesting to investigate how this multi-agent behavior emerges from the individual behavior of each agent, the differences in behavior between agents, and to quantify the importance of diversity in the behavior of the agents.

Another direction to continue this work would be to devise a way for agents to design or learn their own approximations of their environment. This could be done through unsupervised learning [45], or with the help of the reward feedback from the environment (or by a combination of both). This automatic design of the perception approximation could make it possible to systematically find a good compromise between the reduction of the learning space and the capacity to perceive meaningful differences and learn complex tasks. Neural networks may be a good modular framework to model these approximation functions.

A major challenge would be to find a way to reuse the behavior learned with the old approximation, instead of re-learning the behavior from scratch whenever a change occurs in the approximation. The relative dynamics of the two timescales (one for the evolution of the approximation, and one for the evolution of the behavior) would also be of particular importance.

Acknowledgment. This work was supported in part by the Swiss National Science Foundation (Grant 200021_169588 TARBDA).

References

[1] O. Abul, F. Polat, and R. Alhajj. Multiagent reinforcement learning using function approximation. IEEE Trans. Syst., Man, Cybern. C, 30(4):485–497, 2000.

[2] Yehuda Afek, Noga Alon, Omer Barad, Eran Hornstein, Naama Barkai, and Ziv Bar-Joseph. A biological solution to a fundamental distributed computing problem. Science, 331(6014):183–185, 2011.

[3] Chrysovalandis Agathangelou, Chryssis Georgiou, and Marios Mavronicolas.
A distributed algorithm for gathering many fat mobile robots in the plane. In ACM Symposium on Principles of Distributed Computing, PODC '13, Montreal, QC, Canada, July 22-24, 2013, pages 250–259, 2013.

⁴Many light triangles are between 60% and 80% on Figure 11, which explains the higher average value.

[4] Noa Agmon and David Peleg. Fault-tolerant gathering algorithms for autonomous mobile robots. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2004, New Orleans, Louisiana, USA, January 11-14, 2004, pages 1070–1078, 2004.

[5] Noa Agmon and David Peleg. Fault-tolerant gathering algorithms for autonomous mobile robots. SIAM J. Comput., 36(1):56–82, 2006.

[6] Luzi Anderegg and Mark Cieliebak. The Weber point can be found in linear time for points in biangular configuration. February 2003.

[7] Hideki Ando, Yoshinobu Oasa, Ichiro Suzuki, and Masafumi Yamashita. Distributed memoryless point convergence algorithm for mobile robots with limited visibility. IEEE Trans. Robotics and Automation, 15(5):818–828, 1999.

[8] Hideki Ando, Yoshinobu Oasa, Ichiro Suzuki, and Masafumi Yamashita. Distributed memoryless point convergence algorithm for mobile robots with limited visibility. IEEE Trans. Robotics and Automation, 15(5):818–828, 1999.

[9] Ronald C. Arkin. Cooperation without communication: Multiagent schema-based robot navigation. Journal of Robotic Systems, 9(3):351–364, 1992.

[10] Mostafa D. Awheda and Howard M. Schwartz. Exponential moving average based multiagent reinforcement learning algorithms. Artificial Intelligence Review, 45(3):299–332, oct 2015.

[11] Ozalp Babaoglu, Geoffrey Canright, Andreas Deutsch, Gianni A. Di Caro, Frederick Ducatelle, Luca M. Gambardella, Niloy Ganguly, Márk Jelasity, Roberto Montemanni, Alberto Montresor, and Tore Urnes. Design patterns from biology for distributed computing. ACM Trans. Auton. Adapt. Syst.
, 1(1):26–66, September 2006.

[12] Pieter Beyens, Maarten Peeters, Kris Steenhaut, and Ann Nowe. Routing with compression in wireless sensor networks: a Q-learning approach. In Proceedings of the 5th European Workshop on Adaptive Agents and Multi-Agent Systems (AAMAS), 2005.

[13] Subhash Bhagat, Sruti Gan Chaudhuri, and Krishnendu Mukhopadhyaya. Fault-tolerant gathering of asynchronous oblivious mobile robots under one-axis agreement. J. Discrete Algorithms, 36:50–62, 2016.

[14] Kálmán Bolla, Tamás Kovács, and Gábor Fazekas. Gathering of fat robots with limited visibility and without global navigation. In Swarm and Evolutionary Computation - International Symposia, SIDE 2012 and EC 2012, Held in Conjunction with ICAISC 2012, Zakopane, Poland, April 29-May 3, 2012. Proceedings, pages 30–38, 2012.

[15] Zohir Bouzid, Maria Gradinariu Potop-Butucaru, and Sébastien Tixeuil. Optimal Byzantine-resilient convergence in uni-dimensional robot networks. Theor. Comput. Sci., 411(34-36):3154–3168, 2010.

[16] Bruno Bouzy and Marc Métivier. Multi-agent learning experiments on repeated matrix games. In Proceedings of the 27th International Conference on Machine Learning, 2010.

[17] Manuele Brambilla, Eliseo Ferrante, Mauro Birattari, and Marco Dorigo. Swarm robotics: a review from the swarm engineering perspective. Swarm Intell., 7(1):1–41, jan 2013.

[18] Nicolas Bredèche, Evert Haasdijk, and Abraham Prieto. Embodied evolution in collective robotics: A review. Front. Robotics and AI, 2018, 2018.

[19] Olivier Buffet, Alain Dutech, and François Charpillet. Shaping multi-agent systems with gradient reinforcement learning. Auton. Agent Multi-Agent Syst., 15(2):197–220, jan 2007.

[20] L. Busoniu, R. Babuska, and B. De Schutter. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 38(2):156–172, mar 2008.

[21] Doran Chakraborty and Peter Stone.
Convergence, targeted optimality, and safety in multiagent learning. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), June 21-24, 2010, Haifa, Israel, pages 191–198, 2010.

[22] Benjamin Charlier. Necessary and sufficient condition for the existence of a Fréchet mean on the circle. ESAIM: Probability and Statistics, 17:635–649, 2013.

[23] Mark Cieliebak. Gathering non-oblivious mobile robots. In LATIN 2004: Theoretical Informatics, 6th Latin American Symposium, Buenos Aires, Argentina, April 5-8, 2004, Proceedings, pages 577–588, 2004.

[24] Mark Cieliebak, Paola Flocchini, Giuseppe Prencipe, and Nicola Santoro. Solving the robots gathering problem. In Automata, Languages and Programming, 30th International Colloquium, ICALP 2003, Eindhoven, The Netherlands, June 30 - July 4, 2003. Proceedings, pages 1181–1196, 2003.

[25] Mark Cieliebak, Paola Flocchini, Giuseppe Prencipe, and Nicola Santoro. Solving the robots gathering problem. In Automata, Languages and Programming, 30th International Colloquium, ICALP 2003, Eindhoven, The Netherlands, June 30 - July 4, 2003. Proceedings, pages 1181–1196, 2003.

[26] Mark Cieliebak and Giuseppe Prencipe. Gathering autonomous mobile robots. In SIROCCO 9, Proceedings of the 9th International Colloquium on Structural Information and Communication Complexity, Andros, Greece, June 10-12, 2002, pages 57–72, 2002.

[27] Caroline Claus and Craig Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. AAAI/IAAI, pages 746–752, 1998.

[28] Reuven Cohen and David Peleg. Robot convergence via center-of-gravity algorithms. In Structural Information and Communication Complexity, 11th International Colloquium, SIROCCO 2004, Smolenice Castle, Slovakia, June 21-23, 2004, Proceedings, pages 79–88, 2004.

[29] Reuven Cohen and David Peleg. Robot convergence via center-of-gravity algorithms.
In Structural Information and Communication Complexity, 11th International Colloquium, SIROCCO 2004, Smolenice Castle, Slovakia, June 21-23, 2004, Proceedings, pages 79–88, 2004.

[30] Jurek Czyzowicz, Leszek Gasieniec, and Andrzej Pelc. Gathering few fat mobile robots in the plane. Theor. Comput. Sci., 410(6-7):481–499, 2009.

[31] Gianlorenzo D'Angelo, Gabriele Di Stefano, Ralf Klasing, and Alfredo Navarra. Gathering of robots on anonymous grids and trees without multiplicity detection. Theor. Comput. Sci., 610:158–168, 2016.

[32] F. F. Darling. Bird flocks and the breeding cycle; a contribution to the study of avian sociality. Oxford, England: Macmillan, 1999.

[33] Xavier Défago, Maria Gradinariu Potop-Butucaru, Julien Clément, Stéphane Messika, and Philippe Raipin Parvédy. Fault and Byzantine tolerant self-stabilizing mobile robots gathering - feasibility study. CoRR, abs/1602.05546, 2016.

[34] Anders Dessmark, Pierre Fraigniaud, Dariusz R. Kowalski, and Andrzej Pelc. Deterministic rendezvous in graphs. Algorithmica, 46(1):69–96, 2006.

[35] Yoann Dieudonné and Franck Petit. Self-stabilizing deterministic gathering. In Algorithmic Aspects of Wireless Sensor Networks, 5th International Workshop, ALGOSENSORS 2009, Rhodes, Greece, July 10-11, 2009. Revised Selected Papers, pages 230–241, 2009.

[36] Stéphane Doncieux, Nicolas Bredèche, Jean-Baptiste Mouret, and A. E. Eiben. Evolutionary robotics: What, why, and where to. Front. Robotics and AI, 2015, 2015.

[37] Marco Dorigo and Luca Maria Gambardella. Ant colonies for the travelling salesman problem. Biosystems, 43(2):73–81, 1997.

[38] David J. Finton. When do differences matter? On-line feature extraction through cognitive economy. Cognitive Systems Research, 6(4):263–281, dec 2005.

[39] Paola Flocchini, Evangelos Kranakis, Danny Krizanc, Nicola Santoro, and Cindy Sawchuk. Multiple mobile agent rendezvous in a ring.
In LATIN 2004: Theoretical Informatics, 6th Latin American Symposium, Buenos Aires, Argentina, April 5-8, 2004, Proceedings, pages 599–608, 2004.

[40] Paola Flocchini, Giuseppe Prencipe, Nicola Santoro, and Peter Widmayer. Gathering of asynchronous robots with limited visibility. Theor. Comput. Sci., 337(1-3):147–168, 2005.

[41] Paola Flocchini, Giuseppe Prencipe, Nicola Santoro, and Peter Widmayer. Gathering of asynchronous robots with limited visibility. Theor. Comput. Sci., 337(1-3):147–168, 2005.

[42] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The elements of statistical learning, volume 1. Springer series in statistics, Springer, Berlin, 2001.

[43] Matthew R. Glickman and Katia P. Sycara. Evolutionary search, stochastic policies with memory, and reinforcement learning with hidden state. In ICML, 2001.

[44] Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard Lewis, and Xiaoshi Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS'14, pages 3338–3346, Cambridge, MA, USA, 2014. MIT Press.

[45] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. Unsupervised learning. In The elements of statistical learning, pages 485–585. Springer, 2009.

[46] Simon Haykin. Neural Networks and Learning Machines, Third Edition. 2008.

[47] Junius Ho, Daniel W. Engels, and Sanjay E. Sarma. HiQ: a hierarchical Q-learning algorithm to solve the reader collision problem. In International Symposium on Applications and the Internet Workshops (SAINTW'06), 4 pp. IEEE, 2006.

[48] T. Horiuchi, A. Fujino, O. Katai, and T. Sawaragi. Fuzzy interpolation-based Q-learning with profit sharing plan scheme. In Proceedings of the 6th International Fuzzy Systems Conference. Institute of Electrical & Electronics Engineers (IEEE), 1997.

[49] Tomoko Izumi, Taisuke Izumi, Sayaka Kamei, and Fukuhito Ooshita.
Time-optimal gathering algorithm of mobile robots with local weak multiplicity detection in rings. IEICE Transactions, 96-A(6):1072–1080, 2013.

[50] Sayaka Kamei, Anissa Lamani, Fukuhito Ooshita, and Sébastien Tixeuil. Asynchronous mobile robot gathering from symmetric configurations without global multiplicity detection. In Structural Information and Communication Complexity - 18th International Colloquium, SIROCCO 2011, Gdansk, Poland, June 26-29, 2011. Proceedings, pages 150–161, 2011.

[51] Soummya Kar, José M. F. Moura, and H. Vincent Poor. QD-learning: A collaborative distributed strategy for multi-agent reinforcement learning through consensus + innovations. IEEE Transactions on Signal Processing, 61(7):1848–1862, apr 2013.

[52] Ralf Klasing, Adrian Kosowski, and Alfredo Navarra. Taking advantage of symmetries: Gathering of many asynchronous oblivious robots on a ring. Theor. Comput. Sci., 411(34-36):3235–3246, 2010.

[53] Ralf Klasing, Euripides Markou, and Andrzej Pelc. Gathering asynchronous oblivious mobile robots in a ring. Theor. Comput. Sci., 390(1):27–39, 2008.

[54] Dariusz R. Kowalski and Andrzej Pelc. Polynomial deterministic rendezvous in arbitrary graphs. In Algorithms and Computation, 15th International Symposium, ISAAC 2004, Hong Kong, China, December 20-22, 2004, Proceedings, pages 644–656, 2004.

[55] Hung Manh La, Ronny Lim, and Weihua Sheng. Multirobot cooperative learning for predator avoidance. IEEE Transactions on Control Systems Technology, 23(1):52–63, jan 2015.

[56] Paul Levi and Serge Kernbach. Symbiotic Multi-Robot Organisms - Reliability, Adaptability, Evolution, volume 7 of Cognitive Systems Monographs. Springer, 2010.

[57] Shouwei Li, Friedhelm Meyer auf der Heide, and Pavel Podlipyan. The impact of the Gabriel subgraph of the visibility graph on the gathering of mobile autonomous robots.
In Algorithms for Sensor Systems - 12th International Symposium on Algorithms and Experiments for Wireless Sensor Networks, ALGOSENSORS 2016, Aarhus, Denmark, August 25-26, 2016, Revised Selected Papers, pages 62–79, 2016.

[58] John Loch and Satinder P. Singh. Using eligibility traces to find the best memoryless policy in partially observable Markov decision processes. In ICML, pages 323–331, 1998.

[59] Liam MacDermed. Scaling up game theory: Achievable set methods for efficiently solving stochastic games of complete and incomplete information. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2011, San Francisco, California, USA, August 7-11, 2011, 2011.

[60] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing Atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013.

[61] Koichiro Morihiro, Teijiro Isokawa, Haruhiko Nishimura, and Nobuyuki Matsui. Characteristics of flocking behavior model by reinforcement learning scheme. In 2006 SICE-ICASE International Joint Conference. Institute of Electrical & Electronics Engineers (IEEE), 2006.

[62] Jean Oh. Multiagent Social Learning in Large Repeated Games. PhD thesis, Pittsburgh, PA, USA, 2009. AAI3414065.

[63] Shayegan Omidshafiei, Dong-Ki Kim, Miao Liu, Gerald Tesauro, Matthew Riemer, Christopher Amato, Murray Campbell, and Jonathan P. How. Learning to teach in cooperative multiagent reinforcement learning. CoRR, abs/1805.07830, 2018.

[64] Xavier Pennec. Probabilities and statistics on Riemannian manifolds: Basic tools for geometric measurements. In NSIP, pages 194–198. Citeseer, 1999.

[65] HL Prasad, Prashanth LA, and Shalabh Bhatnagar. Two-timescale algorithms for learning Nash equilibria in general-sum stochastic games. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 1371–1379.
International Foundation for Autonomous Agents and Multiagent Systems, 2015.

[66] Giuseppe Prencipe. On the feasibility of gathering by autonomous mobile robots. In Structural Information and Communication Complexity, 12th International Colloquium, SIROCCO 2005, Mont Saint-Michel, France, May 24-26, 2005, Proceedings, pages 246–261, 2005.

[67] Hussein Saad, Amr Mohamed, and Tamer ElBatt. Cooperative Q-learning techniques for distributed online power allocation in femtocell networks. Wirel. Commun. Mob. Comput., 15(15):1929–1944, feb 2014.

[68] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 1889–1897, 2015.

[69] Z. Shi, J. Tu, Q. Zhang, X. Zhang, and J. Wei. The improved Q-learning algorithm based on pheromone mechanism for swarm robot system. In Proceedings of the 32nd Chinese Control Conference, pages 6033–6038, July 2013.

[70] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.

[71] Herbert A. Simon. Why should machines learn? In Machine Learning, pages 25–37. Springer, 1983.

[72] Kazuo Sugihara and Ichiro Suzuki. Distributed algorithms for formation of geometric patterns with many mobile robots. J. Field Robotics, 13(3):127–139, 1996.

[73] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. IEEE Trans. Neural Netw., 9(5):1054–1054, sep 1998.

[74] Ichiro Suzuki and Masafumi Yamashita. Distributed anonymous mobile robots: Formation of geometric patterns. SIAM J. Comput., 28(4):1347–1363, 1999.

[75] Ming Tan. Multi-agent reinforcement learning: Independent versus cooperative agents.
In Machine Learning, Proceedings of the Tenth International Conference, University of Massachusetts, Amherst, MA, USA, June 27-29, 1993, pages 330–337, 1993.

[76] Ben-Nian Wang, Yang Gao, Zhao-Qian Chen, Jun-Yuan Xie, and Shi-Fu Chen. A two-layered multi-agent reinforcement learning model and algorithm. Journal of Network and Computer Applications, 30(4):1366–1376, nov 2007.

[77] Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.

[78] Trevor J. Willis, Russell B. Millar, and Russell C. Babcock. Detection of spatial variability in relative density of fishes: comparison of visual census, angling, and baited underwater video. Marine Ecology Progress Series, 198:249–260, 2000.

[79] Ping Xuan, Victor Lesser, and Shlomo Zilberstein. Communication in multi-agent Markov decision processes. In MultiAgent Systems, 2000. Proceedings. Fourth International Conference on, pages 467–468. IEEE, 2000.

[80] Zhen Zhang, Dongbin Zhao, Junwei Gao, Dongqing Wang, and Yujie Dai. FMRQ - a multiagent reinforcement learning algorithm for fully cooperative tasks. IEEE Trans. Cybern., pages 1–13, 2016.

[81] Mortaza Zolfpour-Arokhlo, Ali Selamat, Siti Zaiton Mohd Hashim, and Hossein Afkhami. Modeling of route planning system based on Q value-based dynamic programming with multi-agent reinforcement learning algorithms. Engineering Applications of Artificial Intelligence, 29:163–177, mar 2014.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kNP5uyeI3If",
"year": null,
"venue": "Bull. EATCS 2021",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=kNP5uyeI3If",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Do Universities Have a Future?",
"authors": [
"Roger Wattenhofer"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "stOijEsGsV",
"year": null,
"venue": "Bull. EATCS 2016",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=stOijEsGsV",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Which tasks of a job are susceptible to computerization?",
"authors": [
"Philipp Brandes",
"Roger Wattenhofer"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "oFEHzFNE3c",
"year": null,
"venue": "Bull. EATCS 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=oFEHzFNE3c",
"arxiv_id": null,
"doi": null
}
|
{
"title": "EATCS Fellows 2018 - Call for Nominations",
"authors": [
"Roger Wattenhofer"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kUTl2_UiP6",
"year": null,
"venue": "Bull. EATCS 2018",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=kUTl2_UiP6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the Metastability of Quadratic Majority Dynamics on Clustered Graphs and its Biological Implications",
"authors": [
"Emilio Cruciani",
"Emanuele Natale",
"Giacomo Scornavacca"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "jCFKlqPHqf",
"year": null,
"venue": "Bull. EATCS 2018",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=jCFKlqPHqf",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the Computational Power of Simple Dynamics",
"authors": [
"Emanuele Natale"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "0sH3yiHze5",
"year": null,
"venue": "Bull. EATCS 2011",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=0sH3yiHze5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Anatomy and Empirical Evaluation of Modern SAT Solvers",
"authors": [
"Karem A. Sakallah",
"João Marques-Silva"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "yicDuq1Cf9",
"year": null,
"venue": "Bull. EATCS 2002",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=yicDuq1Cf9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Why is Selecting the Simplest Hypothesis (Consistent with Data) a Good Idea? A Simple Explanation",
"authors": [
"Vladik Kreinovich",
"Luc Longpré"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "3ybBT7770wp",
"year": null,
"venue": "Bull. EATCS 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=3ybBT7770wp",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Strassen's Algorithm Made (Somewhat) More Natural: A Pedagogical Remark",
"authors": [
"Ann Q. Gates",
"Vladik Kreinovich"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "RXlvyAOlcG",
"year": null,
"venue": "Bull. EATCS 2000",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=RXlvyAOlcG",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Choosing a Physical Model: Why Symmetries?",
"authors": [
"Raul Trejo",
"Vladik Kreinovich",
"Luc Longpré"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2SdV-oJ5Jet",
"year": null,
"venue": "Bull. EATCS 1999",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=2SdV-oJ5Jet",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Application of Kolmogorov Complexity to Image Compression: It Is Possible to Have a Better Compression, But It Is Not Possible to Have the Best One",
"authors": [
"S. Subbaramu",
"Ann Q. Gates",
"Vladik Kreinovich"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "B0jw1xPiK1D",
"year": null,
"venue": "Bull. EATCS 2000",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=B0jw1xPiK1D",
"arxiv_id": null,
"doi": null
}
|
{
"title": "How Important is Theory for Practical Problems? A Partial Explanation of Hartmanis' Observation",
"authors": [
"Vladik Kreinovich",
"Luc Longpré"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "j6HWx40dCA",
"year": null,
"venue": "Bull. EATCS 1998",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=j6HWx40dCA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Kolmogorov Complexity Justifies Software Engineering Heuristics",
"authors": [
"Ann Q. Gates",
"Vladik Kreinovich",
"Luc Longpré"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "YaWymbQP3Q",
"year": null,
"venue": "Bull. EATCS 1998",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=YaWymbQP3Q",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Human Visual Perception and Kolmogorov Complexity: Revisited",
"authors": [
"Vladik Kreinovich",
"Luc Longpré"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "iBjhlMSK8s",
"year": null,
"venue": "Bull. EATCS 1999",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=iBjhlMSK8s",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Encryption Algorithms Made (Somewhat) More Natural (a pedagogical remark)",
"authors": [
"Misha Koshelev",
"Vladik Kreinovich",
"Luc Longpré"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "_62BwGoSp94",
"year": null,
"venue": "Bull. EATCS 1999",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=_62BwGoSp94",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On Average Bit Complexity of Interval Arithmetic",
"authors": [
"C. Hamzo",
"Vladik Kreinovich"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "FP1fW_KPg6",
"year": null,
"venue": "Bull. EATCS 2003",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=FP1fW_KPg6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Can quantum computers be useful when there are not yet enough qubits?",
"authors": [
"Luc Longpré",
"Vladik Kreinovich"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "9wXT6ViWMW",
"year": null,
"venue": "Bull. EATCS 1996",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=9wXT6ViWMW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Towards a More Realistic Definition of Feasibility",
"authors": [
"D. Schirmer",
"Vladik Kreinovich"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "TrcaMrxQdXV",
"year": null,
"venue": "Bull. EATCS 1996",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=TrcaMrxQdXV",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Zeros of Riemann's Zeta Function are Uniformly Distributed, but not Random: An Answer to Calude's Open Problem",
"authors": [
"Luc Longpré",
"Vladik Kreinovich"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "gCvcLXm4uA",
"year": null,
"venue": "Bull. EATCS 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=gCvcLXm4uA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Why some physicists are excited about the undecidability of the spectral gap problem and why should we",
"authors": [
"Vladik Kreinovich"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "5GWwg9SHmHK",
"year": null,
"venue": "Bull. EATCS 2016",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=5GWwg9SHmHK",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Approximation bounds for centrality maximization problems",
"authors": [
"Gianlorenzo D'Angelo"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "r0aOCZRTElMM",
"year": null,
"venue": "Bull. EATCS 2011",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=r0aOCZRTElMM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Convergent and Commutative Replicated Data Types",
"authors": [
"Marc Shapiro",
"Nuno M. Preguiça",
"Carlos Baquero",
"Marek Zawirski"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "vu0qvDA6qG",
"year": null,
"venue": "Bull. EATCS 2017",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=vu0qvDA6qG",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Composition in State-based Replicated Data Types",
"authors": [
"Carlos Baquero",
"Paulo Sérgio Almeida",
"Alcino Cunha",
"Carla Ferreira"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CYdmhBnF8QU",
"year": null,
"venue": "Bull. EATCS 2017",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/468/454",
"forum_link": "https://openreview.net/forum?id=CYdmhBnF8QU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Alonzo Church Award 2017 - Call for Nominations",
"authors": [
"Natarajan Shankar",
"Catuscia Palamidessi",
"Gordon D. Plotkin",
"Moshe Y. Vardi"
],
"abstract": "Alonzo Church Award 2017 - Call for Nominations",
"keywords": [],
"raw_extracted_content": "Alonzo Church Award 2017
Call for Nominations
Deadline: March 1, 2017.
Introduction: An annual award, called the \"Alonzo Church Award for Outstanding Contributions to Logic and Computation\" was established in 2015 by the ACM Special Interest Group for Logic and Computation (SIGLOG), the European Association for Theoretical Computer Science (EATCS), the European Association for Computer Science Logic (EACSL), and the Kurt Gödel Society (KGS). The award is for an outstanding contribution represented by a paper or by a small group of papers published within the past 25 years. This time span allows the lasting impact and depth of the contribution to have been established. The award can be given to an individual, or to a group of individuals who have collaborated on the research. For the rules governing this award, see: http://siglog.hosting.acm.org/the-alonzo-church-award-for-outstanding-contributions-to-logic-and-computation/
Eligibility and Nominations: The contribution must have appeared in a paper or papers published within the past 25 years. Thus, for the 2017 award, the cut-off date is January 1, 1992. When a paper has appeared in a conference and then in a journal, the date of the journal publication will determine the cut-off date. In addition, the contribution must not yet have received recognition via a major award, such as the Turing Award, the Kanellakis Award, or the Gödel Prize. (The nominee(s) may have received such awards for other contributions.) While the contribution can consist of conference or journal papers, journal papers will be given a preference.
Nominations for the 2017 award are now being solicited. The nominating letter must summarise the contribution and make the case that it is fundamental and outstanding. The nominating letter can have multiple co-signers. Self-nominations are excluded.
Nominations must include: a proposed citation (up to 25 words); a succinct (100-250 words) description of the contribution; and a detailed statement (not exceeding four pages) to justify the nomination. Nominations may also be accompanied by supporting letters and other evidence of worthiness.\n
Nominations are due by March 1, 2017, and should be submitted to [email protected]\n
Presentation of the Award: The 2017 award will be presented at the CSL conference, the annual meeting of the European Association for Computer Science Logic. The award will be accompanied by an invited lecture by the award winner, or by one of the award winners. The awardee(s) will receive a certificate and a cash prize of USD 2,000. If there are multiple awardees, this amount will be shared.\n
Award Committee: The 2017 Alonzo Church Award Committee consists of the following four members:\n• Natarajan Shankar\n• Catuscia Palamidessi\n• Gordon Plotkin (chair)\n• Moshe Vardi",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "h7W9CMzS-g",
"year": null,
"venue": "Bull. EATCS 2015",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/381/361",
"forum_link": "https://openreview.net/forum?id=h7W9CMzS-g",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Renaming Problem: Recent Developments and Open Questions",
"authors": [
"Dan Alistarh"
],
"abstract": "The Renaming Problem: Recent Developments and Open Questions",
"keywords": [],
"raw_extracted_content": "TheDistributed Computing Column\nby\nStefanSchmid\nTU Berlin\nT-Labs, Germany\[email protected]\nTheRenaming Problem : Recent\nDevelopments and OpenQuestions\nDan Alistarh\nMicrosoft Research\n1 Introduction\n1.1 The Renaming Problem\nThe theory of distributed computing centers around a set of fundamental prob-\nlems, also known as tasks , usually considered in variants of the two classic mod-\nels of distributed computation: asynchronous shared-memory andasynchronous\nmessage-passing [50]. These fundamental tasks are important because, in general,\ntheir computability and complexity in a given system model gives a good measure\nof the model’s power.\nIn this article, we survey recent results and open questions regarding one of\nthe canonical distributed tasks, called renaming . Simply put, in the renaming\nproblem, a set of processes need to pick unique names from a small namespace .\nIntuitively, renaming can be seen as the dual of the classic distributed consensus\nproblem [48]: if solving consensus means that processes need to agree on a sin-\ngle value, renaming asks participants to disagree in a constructive way, by each\nreturning a distinct value from a small space of options.\nMore formally, the renaming problem assumes that processes start with unique\ninitial names from a large, virtually unbounded namespace,1and requires each\nprocess to eventually return a name (the termination condition), and that the names\nreturned should be unique (the uniqueness condition). The size of the resulting\nnamespace should be at most M>0, which is a parameter given in advance. The\nnamespace size Mshould only depend on n, the maximum number of participating\nprocesses.\nThe adaptive version of renaming requires the size of the namespace Mto\nonly depend on k, the number of processes actually taking steps in the current\nexecution, also known as the contention in the execution. 
If the range of the namespace matches exactly the number of participating processes, renaming is said to be strong, and the namespace is said to be tight. Otherwise, renaming is loose. Intuitively, a tight namespace is desirable since it minimizes the number of “wasted” names, which are allocated but go unused; later, we will see that strong renaming algorithms can in fact be used to implement other distributed objects, such as counters and mutual exclusion.\n
[1] In the absence of unique initial identifiers, it is known that renaming is impossible [49].\n
The reader may now want to pause and briefly consider how to solve this problem. One natural idea is for each participant to pick names at random between 1 and M. Assuming we have a way of handling name collisions (usually done through auxiliary test-and-set or splitter objects, which we describe later), processes may simply re-try new random names until successful. Notice however that the relationship between n, the number of participants, and M, the range of available names, critically influences the complexity of this procedure. If M is much larger than n, for instance M ≥ n^2, then, by standard analysis, choices will almost never collide, and therefore each process completes within a constant number of trials. If M = Cn, for C > 1 constant, then the probability of a collision is a constant < 1 in each trial, and therefore each participant will complete within O(log n) trials, with high probability. A particularly interesting case is when M = n, i.e. we want a tight namespace. In this case, it appears likely that at least one process will have to try a large fraction of the names before succeeding, i.e. run for linear time. For this unlucky participant, this strategy is no better than trying out all names sequentially, in some order.\n
The basic intuition above can be turned, with some care, into working renaming algorithms [10]. 
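The random-trial strategy discussed above can be simulated in a few lines. The sketch below is a sequential simulation only, not a concurrent execution: the per-name collision handler (a one-shot test-and-set in a real algorithm) is modelled by a simple `taken` set, and all names and function names are our own, not taken from [10].

```python
import random

def random_renaming(n, M, seed=None):
    """Simulate n processes that repeatedly pick a random name in 1..M
    until winning an unclaimed one. Collisions are resolved by a per-name
    'taken' marker, standing in for a one-shot test-and-set object."""
    rng = random.Random(seed)
    taken = set()
    names, trials = [], []
    for _ in range(n):
        t = 0
        while True:
            t += 1
            candidate = rng.randrange(1, M + 1)
            if candidate not in taken:   # "winning the test-and-set" for this name
                taken.add(candidate)
                names.append(candidate)
                break
        trials.append(t)
    return names, trials

# With M = n^2, collisions are rare and almost every trial succeeds at once.
names, trials = random_renaming(n=64, M=64 * 64, seed=1)
```

Shrinking M toward n drives the trial counts up, matching the intuition above: at M = n the last participants must search a namespace that is almost entirely claimed.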
It also suggests that there is a trade-off between the size of the namespace we wish to achieve, and the complexity of our algorithm. In the following, we will see that this trade-off is somewhat slanted in favor of randomization: we are able to attain a tight namespace in logarithmic worst-case expected time, but the (deterministic) worst-case running time for renaming is linear, even for large namespace size.\n
Before we delve into the details of these results, let us first cover some historical background. The renaming problem was formally introduced more than 25 years ago [17]. A significant amount of research, e.g. [25, 29, 51, 42, 3, 20, 37, 54], has studied the solvability and complexity of renaming in an asynchronous environment. In particular, tight, or strong, deterministic renaming, where the size of the namespace is exactly n, is known to be impossible using only read-write registers [42, 30]. In fact, (n + t - 1) is the best achievable namespace size when t processes may crash [30, 31]. The proof of this result is very interesting, and quite intricate, as it requires the use of topological techniques [42]. As for consensus, this impossibility result can be circumvented through the use of randomization: there exist randomized renaming algorithms that ensure a tight namespace of n names, guaranteeing name uniqueness in all executions and termination with probability 1, e.g. [37].\n
The complexity of renaming has also been the focus of significant research effort, e.g. [54, 27, 51, 52, 3, 20, 37, 33, 1]. In particular, much of this work considered the shared-memory model, perhaps due to the simpler way to express the time complexity of an algorithm. 
However, in spite of this effort, until recently no time optimality results were known for shared-memory renaming, either for randomized or deterministic algorithms.\n
1.2 Recent Developments\n
In the following, we will survey a recent series of papers [10, 6], giving tight bounds for the time complexity of renaming in asynchronous shared memory.[2] Our survey covers some of the results from these papers, and adopts their notation and presentation for the technical content.\n
Specifically, for deterministic algorithms, reference [6] gave a linear lower bound on the time complexity of renaming into any namespace of sub-exponential size. This bound can be matched by previously known algorithms, e.g. [51, 52]. (See Section 3 for a detailed discussion.) For randomized algorithms, [6] gave tight logarithmic upper and lower bounds on the time complexity of adaptive renaming into a namespace of linear size. Together, these results give an exponential time complexity separation between deterministic and randomized implementations of renaming.\n
It is also interesting to study connections between renaming and implementations of other shared objects. Since renaming can be solved trivially using objects with stronger semantics, such as stacks, queues, or counters supporting fetch-and-increment, lower bounds for renaming also apply to these widely-used, practical objects. Thus, the above results can be used to match or improve the previously known lower bounds for these problems (see Figure 1 for an overview), but also to obtain efficient implementations of more complex shared objects. Due to space constraints, we refer the reader to [6] for the latter constructions.\n
Conceptually, the improved upper and lower bounds are based on new connections between renaming and other fundamental objects: sorting networks [45] and mutual exclusion [36]. 
Specifically, the first step is a construction showing that sorting networks can be used to obtain optimal-time solutions for strong adaptive randomized renaming. Further, it can be shown that the resulting algorithm can be extended to an efficient solution to mutual exclusion.\n
To obtain the linear lower bound on deterministic renaming, we can re-trace the previous argument: we start from a known linear lower bound on the time complexity of mutual exclusion, and derive by reduction a lower bound on renaming.\n
[2] In this model, time is measured in terms of the number of steps, that is, shared-memory operations performed by a processor until completion.\n
Shared Object              | Lower Bound    | Type   | Matching Algorithms | New Result\nDeterministic ck-renaming  | Ω(k)           | Local  | [52]                | Yes\nDeterministic ck-renaming  | Ω(k log(k/c))  | Global | -                   | Yes\nRandomized ck-renaming     | Ω(k log(k/c))  | Global | Section 5           | Yes\nc-Approximate Counter      | Ω(k log(k/c))  | Global | [15]                | Yes\nFetch-and-Increment        | Ω(k)           | Local  | [51]                | Improves on [39]\nFetch-and-Increment        | Ω(k log k)     | Global | Section 5           | Improves on [21]\nQueues and Stacks          | Ω(k)           | Local  | [41]                | Improves on [39]\nQueues and Stacks          | Ω(k log k)     | Global | -                   | Improves on [21]\n
Figure 1: Summary of the lower bound results and relation to previous work.\n
The lower bound on the time complexity of randomized renaming follows from a separate information-based argument.\n
2 Model and Problem Statements\n2.1 The Asynchronous Shared Memory Model\n
In this section, we introduce the asynchronous shared memory model [24], [50], and the cost measures we will use for the analysis of algorithms.\n
Asynchronous Shared Memory. We consider the standard asynchronous shared-memory model, in which a set of n processes Π = {p_1, ..., p_n} can communicate through operations on shared multi-writer multi-reader atomic registers. We will denote by k the contention in an execution, i.e. the actual number of processes that take steps in the execution. Obviously, k ≤ n throughout.\n
Processes follow an algorithm, which is composed of instructions. 
Each instruction consists of some local computation, which may include an arbitrary number of local coin flips, and one shared-memory operation, such as a read or write to a register, which we call a shared-memory step. A number of t < n processes may fail by crashing. (Throughout this paper, we assume this upper bound is t = n - 1.) A failed process does not execute any further instructions. A process that does not crash during an execution is correct.\n
Identifiers. Initially, each process p_i is assigned a unique initial identifier id_i, which, for simplicity, is an integer. We will assume that the space of initial identifiers is of infinite size. This models the fact that, in real systems, processes may use identifiers from a very large space, such as the space of UNIX process identifiers, or the set of all IP addresses.\n
Wait-Freedom. An algorithm is wait-free if it ensures that every method call by a correct process returns within a finite number of steps [43]. Throughout this paper, we will consider wait-free algorithms.\n
Schedules and Adversaries. The order in which processes take steps and issue events is determined by an external abstraction called a scheduler, over which processes do not have control. In the following, we will consider the scheduler as an adversary, whose goal is to maximize the cost of the protocol (generally considered to be the number of steps). Thus, we will use the terms adversary and scheduler interchangeably. The adversary controls the schedule, which is a (possibly infinite) sequence of process identifiers. If process p_i is in position τ of the sequence, then this implies that p_i is active at time τ. 
The adversary has the freedom to schedule any interleaving that complies with the given model. We assume an asynchronous model, therefore the adversary may schedule any interleaving of process steps.\n
Consequently, an execution is a sequence of all events and steps issued by processes in a given run of an implementation. Every execution has an associated schedule, which yields the order in which processes are active in the execution. For deterministic algorithms, the schedule completely determines the execution. For randomized algorithms, different assumptions on the relation between the scheduler and the random coin flips that processes perform during an execution may lead to different results. We will assume that the adversary controlling the schedule is the standard strong adversary, which observes the results of the local coin flips, together with the state of all processes, before scheduling the next process step (in particular, the interleaving of process steps may depend on the result of their coin flips).\n
Complexity Measures. We measure complexity in terms of process steps, where each shared-memory operation is counted as one step. Thus, the (individual) step complexity of an algorithm is the worst-case number of steps that a single process may have to perform in order to return from an algorithm, including invocations to lower-level shared objects. The total step complexity is the total number of shared-memory operations that all participating processes perform during an execution. For randomized algorithms, we will analyze the worst-case expected number of steps that a process may perform during an execution as a consequence of the adversarial scheduler, or give more precise probability bounds for the number of steps performed during an execution.\n
2.2 Problem Statements\n
We now present the definitions and sequential specifications of the problems and objects considered in this paper.\nRenaming. 
The renaming problem, first introduced in [17], is defined as follows. Each of the n processes initially has a distinct identifier id_i taken from a domain of potentially unbounded size M, and should return an output name o_i from a smaller domain. (Note that the index i is only used for description purposes, and is not known to the processes.) Given an integer T, an object ensuring deterministic renaming into a target namespace of size T, also called a T-renaming object, guarantees the following properties.\n
1. Termination: In every execution, every correct process returns a name.\n2. Namespace Size: Every name returned is from 1 to T.\n3. Uniqueness: Every two names returned are distinct.\n
The randomized renaming problem relaxes the termination condition, ensuring randomized termination: with probability 1, every correct process returns a name. The other two properties stay the same.\n
The domain of values returned, which we call the target namespace, is of size T. In the classical renaming problem [17], the parameter T may not depend on the range of the original names. On the other hand, it may depend on the parameter n and on the number of possible faults t.\n
For adaptive renaming, the size of the resulting namespace, and the complexity of the algorithm, should only depend on the number of participating processes k in the current execution. In some instances of the problem, processes are assumed not to know the maximum number of processes n, whereas in other instances an upper bound on n is provided. (In this paper, we consider the slightly harder version in which the upper bound on n is not provided.)\n
If the size of the namespace matches exactly the number of participating processes, then we say that the target namespace is tight. Consequently, the strong renaming problem requires that the processes obtain unique names from 1 to n, i.e. T = n. 
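The conditions above are easy to state as a checker over the names returned in one execution. The helper below is purely illustrative (the function and argument names are our own); termination is left implicit, since crashed processes simply never appear in the output map.

```python
def check_renaming(outputs, T):
    """Check the namespace-size and uniqueness conditions of a T-renaming
    object on one finished execution. `outputs` maps each process that
    returned to its chosen name; crashed processes are absent."""
    names = list(outputs.values())
    in_range = all(1 <= name <= T for name in names)   # namespace size
    unique = len(names) == len(set(names))             # uniqueness
    return in_range and unique

# e.g. k = 3 participants renaming into a (2k - 1) = 5 namespace
ok = check_renaming({"p1": 1, "p2": 4, "p3": 2}, T=5)
```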
The strong adaptive renaming problem requires that k participating processes obtain consecutive names 1, 2, ..., k. Thus, strong adaptive renaming is the version of the problem with the largest number of constraints. To distinguish the classical renaming problem from the adaptive version, we will denote the classical version, where n is given and complexity and namespace depend on n, as the non-adaptive renaming problem.\n
Figure 2: Sequential specification of a one-shot test-and-set object.\n  Variable: Value, a binary atomic register, initially 0\n  procedure test-and-set():\n    if Value = 0 then\n      Value <- 1\n      return 0\n    else\n      return 1\n
Figure 3: Sequential specification of the compare-and-swap object.\n  Variable: V, a register, with initial value ⊥\n  procedure compare-and-swap(oldV, newV):\n    s <- V\n    if oldV = s then\n      V <- newV\n      return s\n    else\n      return s\n
Test-and-Set. The test-and-set object, whose sequential specification is given in Figure 2, can be seen as a tournament object for n processes. In brief, the object has initial value 0, and supports a single test-and-set operation, which atomically sets the value of the object to 1, returning the value of the object before the invocation. Notice that at most one process may win the object by returning the initial value 0, while all other processes lose the test-and-set by returning 1. A key property is that no losing test-and-set operation may return before the winning operation is invoked.\n
More precisely, a correct deterministic implementation of a single-use test-and-set object ensures the following properties:\n1. (Validity.) Each participating process may return one of two indications: 0 or 1.\n2. (Termination.) Each process accessing the object eventually returns or crashes.\n3. (Linearization.) 
Each execution has a linearization order L in which each invocation of test-and-set is immediately followed by a response (i.e., is atomic), such that the first response is either 0 or the caller crashes, and no return value of 1 can be followed by a return value of 0.\n4. (Uniqueness.) At most one process may return 0.\n
For randomized test-and-set, the termination condition is replaced by the following randomized termination property: with probability 1, each process accessing the object eventually returns or crashes. The other requirements stay the same.\n
Compare-and-swap. The compare-and-swap object can be seen as a generalization of test-and-set, whose underlying register supports multiple values (as opposed to only 0 and 1). Its sequential specification is presented in Figure 3. More precisely, a compare-and-swap object exports the following operations:\n
• read and write, having the same semantics as for registers,\n• compare-and-swap(oldV, newV), which compares the state s of the object to the value oldV, and either (a) changes the state of the object to newV and returns oldV if s = oldV, or (b) returns the state s if s ≠ oldV.\n
Notice that the compare-and-swap object can be seen as an augmented register, which also supports the conditional compare-and-swap operation. Also note that it is trivial to implement a test-and-set object from a compare-and-swap object.\n
Mutual Exclusion. The goal of the mutual exclusion (mutex) problem is to allocate a single, indivisible, non-shareable resource among n processes. A process with access to the resource is said to be in the critical section. When a user is not involved with the resource, it is said to be in the remainder section. In order to gain admittance to the critical section, a user executes an entry section; after it is done with the resource, it executes an exit section. 
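As noted above, building a one-shot test-and-set from compare-and-swap is immediate: the unique caller that swings the register from 0 to 1 wins. A minimal Python sketch follows; the atomicity of compare-and-swap is simulated with a lock, purely for illustration (a real shared-memory system would provide the primitive in hardware).

```python
import threading

class CompareAndSwap:
    """A linearizable compare-and-swap register, simulated with a lock."""
    def __init__(self, initial=None):
        self._v = initial
        self._lock = threading.Lock()

    def compare_and_swap(self, old, new):
        with self._lock:            # makes the read-modify-write atomic
            s = self._v
            if s == old:
                self._v = new
            return s                # the prior state, in either case

class TestAndSet:
    """One-shot test-and-set on top of compare-and-swap."""
    def __init__(self):
        self._r = CompareAndSwap(0)

    def test_and_set(self):
        # Win (return 0) iff we are the caller that swings 0 -> 1.
        return 0 if self._r.compare_and_swap(0, 1) == 0 else 1
```

Running several threads against one `TestAndSet` instance yields exactly one winner, matching the uniqueness property above.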
Each of these sections can be associated with a partitioning of the code that the process is executing.\n
Each process cycles through these sections in the order: remainder, entry, critical, and exit. Thus, a process that wants to enter the critical section first executes the entry section; after that, it enters the critical section, after which it executes the exit section, returning to the remainder section. We assume that in all executions, each process executes this section pattern infinitely many times. For simplicity, we assume that the code in the remainder section is trivial, and every time the process is in this section, it immediately enters the entry section. An execution is admissible if for every process p_i, either p_i takes an infinite number of steps, or p_i's execution ends in the remainder section. A configuration at a time τ is given by the code section for each of the processes at time τ.\n
An algorithm solves mutual exclusion with no deadlock if the following hold. We adopt the definition of [24].\n
• Mutual exclusion: In every configuration of every execution, at most one process is in the critical section.\n• No deadlock: In every admissible execution, if some process is in the entry section in a configuration, then there is a later configuration in which some process is in the critical section.\n• No lockout (starvation-free): In every admissible execution, if some process is in the entry section in a configuration, then there is a later configuration in which the same process is in the critical section.\n• Unobstructed exit: In every execution, every process returns from the exit section in a finite number of steps.\n
In this paper, we focus on shared-memory mutual exclusion algorithms. As for renaming, there exists a distinction between adaptive and non-adaptive solutions. 
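To make the entry/critical/exit structure concrete, here is a minimal sketch built from a compare-and-swap-style primitive (simulated with a lock). One caveat, stated plainly: this simple spin lock satisfies mutual exclusion, no deadlock, and unobstructed exit, but not the no-lockout property, so it is an illustration of the section structure rather than a starvation-free algorithm.

```python
import threading

class SpinLock:
    """Entry and exit sections built from a CAS-style register."""
    def __init__(self):
        self._guard = threading.Lock()   # simulates atomicity of the register
        self._held = 0

    def _cas(self, old, new):
        with self._guard:
            s = self._held
            if s == old:
                self._held = new
            return s

    def entry(self):
        # Spin until we are the process that swings the register 0 -> 1.
        while self._cas(0, 1) != 0:
            pass

    def exit(self):
        with self._guard:
            self._held = 0
```

A quick sanity check: several threads incrementing a shared counter with a non-atomic read-then-write inside the critical section never lose an update.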
A classical, non-adaptive, mutual exclusion algorithm is an algorithm whose complexity depends on n, the maximum number of processes that may participate in the execution, which is assumed to be known by the processes at the beginning of the execution. On the other hand, an adaptive mutual exclusion algorithm is an algorithm whose complexity may only depend on the number of processes k participating in the current execution.\n
3 A Brief History of Renaming\n
Message-passing Models. The renaming problem, defined in Section 2.2, was introduced by Attiya et al. [17], in the asynchronous message-passing model. The paper presented a non-adaptive algorithm that achieves (2n - 1) names in the presence of t < n/2 faults, and showed that a tight namespace of n names cannot be achieved in an asynchronous system with crash failures. It also introduced and studied a version of the problem called order-preserving renaming, in which the final names have to respect the relative order of the initial names.\n
Renaming has been studied in a variety of models and under various timing assumptions. For synchronous message-passing systems, Chaudhuri et al. [32] gave a wait-free algorithm for strong renaming in O(log n) rounds of communication, and proved that this upper bound is asymptotically tight if the number of process failures is t ≤ n - 1 and the algorithm is comparison-based, i.e. two processes may distinguish their states only through comparison operations. Attiya and Djerassi-Shintel [19] studied the complexity of renaming in a semi-synchronous message-passing system, subject to timing faults. They obtained a strong renaming algorithm with O(log n) rounds of broadcast and proved an Ω(log n) time lower bound when algorithms are comparison-based or when the initial namespace is large enough compared to n. Both these algorithms can be made adaptive, to obtain a running time of O(log k). 
Okun [53] presented a strong renaming algorithm that is also order-preserving, with O(log n) time complexity. The algorithm exploits a new connection between renaming and approximate agreement [38]. Alistarh et al. [11] analyzed Okun's algorithm and showed that it is also early-deciding, i.e. its running time can adapt to the number of failures f ≤ n - 1 in the execution. In particular, they showed that the algorithm terminates in a constant number of rounds if f < √n, and in O(log f) rounds otherwise. Recent work in the same model [12] has shown that an expected O(log log n) running time can be obtained using randomization.\n
Returning to the asynchronous message-passing model, Alistarh, Gelashvili, and Vladu [13] recently gave a randomized solution which solves tight renaming in O(log^2 n) rounds and O(n^2) messages. They also show that this message complexity is optimal.\n
Shared-Memory Models. The first shared-memory renaming algorithm was given by Bar-Noy and Dolev [25], who ported the synchronous message-passing algorithm of Attiya et al. [17] to use only reads and writes. They obtained an algorithm with namespace size (k^2 + k)/2 that uses O(n^2) steps per operation, and an algorithm with a namespace size of (2k - 1) using O(n · 4^n) steps per operation.\n
Early work on lower bounds focused on the size of the namespace that can be achieved using only reads and writes. Burns and Peterson [29] proved that, for any T(n) < 2n - 1, long-lived renaming[3] in a namespace of size T(n) is impossible in asynchronous shared memory using reads and writes. They also gave the first long-lived (2n - 1)-renaming algorithm. (However, the complexity of this algorithm depends on the size of the initial namespace, which is not allowed by the original problem specification [17].) In a landmark paper, Herlihy and Shavit [42] used algebraic topology to show that there exist values of n for which wait-free (2n - 2)-renaming is impossible. 
Recently, Castañeda and Rajsbaum [30], [31] gave a full characterization, proving that if n is a prime power, then target namespace size T(n) ≥ 2n - 1 is necessary, and, otherwise, there exists an algorithm with 2n - 2 namespace size.\n
A parallel line of work [49], [46] studied anonymous renaming, where processes do not have initial identifiers and start in identical state. In this case, renaming cannot be achieved with probability 1 using only reads and writes, since one cannot distinguish between processes in the same state, and thus two processes may always decide on the same name with non-zero probability.\n
Later work focused on the time-namespace size trade-off. Moir and Anderson appear to be the first to use deterministic splitters to solve renaming [51]. Afek and Merritt [3] presented an adaptive read-write renaming algorithm with optimal namespace of size (2k - 1), and O(k^2) step complexity. Attiya and Fouren [20] gave an adaptive (6k - 1)-renaming algorithm with O(k log k) step complexity. Chlebus and Kowalski [33] gave an adaptive (8k - log k - 1)-renaming algorithm with O(k) step complexity. For long-lived adaptive renaming, there exist implementations with O(k^2) time complexity for renaming into a namespace of size O(k^2), e.g. [1]. The fastest such algorithm with optimal (2k - 1) namespace size has O(k^4) step complexity [20].\n
[3] The long-lived version of renaming allows processes to release names as well as to acquire them.\n
The time lower bound in Section 6 shows that linear-time deterministic algorithms are in fact time optimal (since they ensure namespaces of polynomial size). On the other hand, the existence of a deterministic read-write algorithm which achieves both an optimal namespace and linear time complexity is an open problem.\n
The relation between renaming and stronger primitives such as fetch-and-increment or test-and-set was investigated by Moir and Anderson [51]. 
Fetch-and-increment can be used to solve renaming trivially, since each process can return the result of the operation plus 1 as its new name. Renaming can be solved by using an array of test-and-set objects, where each process accesses test-and-set objects until winning the first one. The process then returns the index of the test-and-set object that it has acquired. Moir and Anderson [51] also present implementations of renaming from registers supporting set-first-zero and bitwise-and operations. In this paper, the authors also notice the fact that adaptive tight renaming can solve mutual exclusion. (This connection is also mentioned in [18].) Using load-linked and store-conditional primitives, Brodsky et al. [28] gave a linear-time algorithm with a tight namespace. (Their paper also presents an efficient synchronous shared-memory algorithm.)\n
Randomization is a natural approach for obtaining names, since random coin flips can be used to “balance” the processes' choices. A trivial solution when n is known is to have processes try out random names from 1 to n^2. Name uniqueness can be validated using deterministic splitter objects [14], and the algorithm uses a constant number of steps in expectation, since, by the birthday paradox, the probability of collision is very small. The feasibility of randomized renaming in asynchronous shared memory was first considered by Panconesi et al. [54]. They presented a non-adaptive wait-free solution with a namespace of size n(1 + ε) for ε > 0 constant, with expected O(M log^2 n) running time, where M is the size of the initial namespace.\n
A second paper to analyze randomized renaming was by Eberly et al. [37]. The authors obtain a strong non-adaptive renaming algorithm based on the randomized wait-free test-and-set implementation by Afek et al. [2]. Their algorithm is long-lived, and is shown to have amortized step complexity O(n log n). 
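The test-and-set-array construction described earlier in this section (each process scans the array until winning its first object, then returns that index) can be sketched as follows. The test-and-set object here is simulated with a lock to capture its sequential specification; this is an illustration of the reduction, not a wait-free read-write implementation.

```python
import threading

class OneShotTAS:
    """One-shot test-and-set: the single caller that flips the bit wins."""
    def __init__(self):
        self._lock = threading.Lock()
        self._set = False

    def test_and_set(self):
        with self._lock:
            if not self._set:
                self._set = True
                return 0      # winner
            return 1          # loser

def rename(tas_array):
    """Scan the test-and-set objects in order; the (1-based) index of the
    first one won is the new name. With k participants the names are
    exactly 1..k, i.e. the namespace is tight -- at the cost of up to
    k shared-object accesses per process."""
    for i, t in enumerate(tas_array, start=1):
        if t.test_and_set() == 0:
            return i

# Five concurrent participants renaming over an array of eight objects.
tas = [OneShotTAS() for _ in range(8)]
names = []
def participant():
    names.append(rename(tas))
threads = [threading.Thread(target=participant) for _ in range(5)]
for th in threads: th.start()
for th in threads: th.join()
```

Since a test-and-set object is only ever marked by the process that wins it, the five winners occupy exactly the first five indices, and any later participant continues the sequence.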
The average-case total step complexity is Θ(n^3).\n
A paper by Alistarh et al. [10] generalized the approach by Panconesi et al. [54] by introducing a new, adaptive test-and-set implementation with logarithmic step complexity, and a new strategy for the processes to pick which test-and-set to compete in: each process chooses a test-and-set between 1 and n at random. The authors prove that this approach results in a non-adaptive tight algorithm with O(ñ n) total step complexity.[4] (However, in this algorithm, individual processes may still perform a linear number of accesses.) A modified version of this approach generates an adaptive algorithm with similar complexity, which ensures a loose namespace of size (1 + ε)k, for ε > 0 constant. Recent work by Alistarh, Aspnes, Giakkoupis and Woelfel [8] showed that, if we allow the algorithm to break the namespace requirement with some probability, then we can solve renaming in expected O(log log n) time, by using a multi-level random choice strategy. This strategy can also be extended to adaptive algorithms, with a similar running time.\n
The renaming network algorithm presented in this paper first appeared in [7]. It is the first algorithm to achieve strong adaptive renaming in sub-linear time, improving exponentially on the time complexity of previous solutions. The same paper shows that this algorithm is in fact time-optimal. The fact that any sorting network can be used as a counting network when only one process enters on each wire was observed by Attiya et al. [22] to follow from earlier results of Aspnes et al. [16]; this is equivalent to our use of sorting networks for non-adaptive renaming in Section 5.1.1. The lower bounds in this paper first appeared in [9].\n
Recent work also looked into the space complexity of this problem. 
Reference [40] gives linear lower bounds using a novel version of covering arguments, while reference [35] gave the first space-optimal renaming algorithm, which uses only O(n) registers.\n
4 Renaming Building Blocks\n
In this section, we illustrate some of the main building blocks developed for renaming algorithms by way of example. We present a randomized algorithm which renames into a namespace of size polynomial in k, with logarithmic step complexity in expectation. This algorithm can also be extended to solve adaptive test-and-set [10], and will be a useful sub-routine for achieving a tight namespace in logarithmic time. We focus on the structure of the algorithm; its proof of correctness follows from the original analysis [10].\n
4.1 Deterministic and Randomized Splitters\n
The deterministic splitter object was introduced by Lamport to solve mutual exclusion efficiently in the absence of contention [47]. This object, whose structure is given in Figure 5, provides the following semantics.\n
• Every correct process returns either stop, left, or right.\n• At most one process returns stop.\n• If a single correct process calls split, then it returns stop.\n• In an execution in which k ≥ 1 processes access the object, at most k - 1 processes return left, and at most k - 1 processes return right.\n
[4] In the following, by ñ we denote log^c n, for some integer c ≥ 1.\n
Figure 4: Structure and labeling of a deterministic splitter network. [Figure: a triangular matrix of splitters, labeled 1 to 15; a process traverses the matrix starting at the top left, moving according to the values returned by the splitters, until it stops in some splitter.]\n
A very interesting use of the deterministic splitter is in the context of the renaming problem: Moir and Anderson [51] noticed that splitters connected in a rectangular grid, as depicted in Figure 4, solve renaming.\n
More precisely, the key property of the splitter is that it 
changes direction for at least one of the calling processes; using this property, they show that a single process may access at most k - 1 distinct splitter objects in the grid before returning stop at one of these objects. Given a labeling of the splitters as in Figure 4, each process may return the label of the splitter it returned stop from as its new name. A simple analysis yields that the names returned are from 1 to k^2.

The randomized splitter object is a weak synchronization primitive which allows a process to acquire it if it is running alone, and which splits the participants probabilistically if more than one process accesses the object. More precisely, a randomized splitter has the following properties.

• At most one process returns stop.
• If a single correct process calls split, then the process returns stop.
• If a correct process does not return stop, then the probability that it returns left equals the probability that it returns right, which equals 1/2.

The randomized splitter was introduced in [23], where it was shown that it can be implemented wait-free using registers. Next, we will see how splitters can be used to solve renaming in expected logarithmic time.

4.2 The RatRace Adaptive Renaming Algorithm

Description. The algorithm is based on a binary tree structure of unbounded height. Each node v in this tree contains a randomized splitter object RS_v. Each randomized splitter RS_v has two pointers, referring to the randomized splitter objects corresponding to the left and right children of node v. Thus, if node v has children ℓ (left) and r (right), the left pointer of RS_v will refer to RS_ℓ, while the right pointer refers to RS_r.
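The randomized splitter at each node can be sketched along the lines of Lamport's deterministic splitter, with coin flips replacing the fixed left/right directions. A minimal sequential-style sketch (class and method names are illustrative; a real wait-free register-based implementation is given in [23]):

```python
import random

class RandomizedSplitter:
    """Sketch of a randomized splitter backed by two shared variables."""
    def __init__(self):
        self.x = None    # last process to announce itself
        self.y = False   # "door" flag: set once some process got past

    def split(self, pid):
        self.x = pid
        if self.y:
            # another process is already past the door: split randomly
            return random.choice(("left", "right"))
        self.y = True
        if self.x == pid:
            return "stop"   # ran alone through the splitter: acquire it
        return random.choice(("left", "right"))
```

A process running solo always returns stop, and at most one process can return stop, since a second stopper would have to observe both the door unset and its own announcement unchanged.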
Any process p_i returning left from the randomized splitter RS_v will call the split procedure of RS_ℓ, while processes returning right will call the split procedure of RS_r.

Processes start at the root node of the tree, and proceed left or right (with probability 1/2) through the tree until first returning stop at the randomized splitter RS_v associated to some node v. We say that a process acquires a randomized splitter s if it returns stop at the randomized splitter s. Once it acquires a randomized splitter, the process stops going down the tree. The key property of this process is that, in an execution with k participants, each process reaches depth at most O(log k) in the tree before acquiring a name, with high probability, and that every process returns, with probability 1.

Decision. Each process that acquires a randomized splitter in the tree returns the label of the corresponding node in a breadth-first-search labeling of the primary tree.

Properties. The RatRace renaming algorithm ensures the following properties. The proof follows in a straightforward manner from the analysis for the test-and-set version of RatRace [10]. We provide a short proof here for completeness.

Name uniqueness follows since no two processes may stop at the same randomized splitter, which is one of the basic properties of this object [23]. We now provide a probabilistic upper bound on namespace size.

Proposition (RatRace Renaming). For constant c ≥ 3, the RatRace renaming algorithm described above yields an adaptive renaming algorithm ensuring a namespace of size O(k^c) in O(log k) steps, both with high probability in k. Every process eventually returns with probability 1.

Figure 5: Deterministic splitter [Moir & Anderson, 1995]. (Solo-winner property: a process stops if it is alone in the splitter. Of k entering processes, at most k - 1 return left, at most k - 1 return right, and at most 1 returns stop.)

Figure 6: Randomized splitter. (At most one process returns stop; each remaining process returns left or right with probability 1/2.)

Proof.
Pick a process p, and assume that the process reaches depth d in the binary tree without acquiring a randomized splitter. By the properties of the randomized splitter, and by the structure of the algorithm, this implies that there exists (at least) one other process q which follows exactly the same path through the tree as process p. Necessarily, q must have made the same random choices as process p at every randomized splitter on the path.

Let k be the number of participants in the execution, and pick d = c log k, where c ≥ 3 is a constant. The probability that an arbitrary process makes exactly the same c log k random choices as p is (1/2)^(c log k) = (1/k)^c. By the union bound, the probability that there exists another process q which makes the same choices as p is at most (k - 1)(1/k)^c ≤ (1/k)^(c-1). Applying the union bound again, we obtain that the probability that there exists a process p which takes more than c log k steps is at most (1/k)^(c-2). This also implies that every process returns a name between 1 and k^c with probability at least 1 - (1/k)^(c-2). The termination bound follows by the same argument, by taking d → ∞. □

5 Adaptive Strong Renaming in Logarithmic Expected Time

In the previous section, we have seen a way of obtaining a namespace that is polynomial in the number of participants k, in logarithmic time. We now give a way of tightening the namespace to an optimal one, of size k, while preserving logarithmic running time. Logarithmic time is in fact optimal [6].

Figure 7: Structure of a comparator. (On inputs x and y, the upper output is x' = min(x, y) and the lower output is y' = max(x, y).)

Renaming networks. The key ingredient behind the algorithm is a connection between renaming and sorting networks, a data structure used for sorting sequences of numbers in parallel. In brief, we start from a sorting network, and replace the comparator objects with two-process test-and-set objects, to obtain an object we call a renaming network.
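The correspondence between a comparator and a two-process test-and-set can be sketched directly: the comparator routes the minimum to its upper output, while in a renaming network the test-and-set winner takes the upper output wire. A minimal illustration (the sequential test-and-set below is a stand-in for a real wait-free implementation such as Tromp and Vitányi's [55]; names are illustrative):

```python
def comparator(x, y):
    # sorting-network comparator (Figure 7): min goes up, max goes down
    return min(x, y), max(x, y)

class TwoProcessTAS:
    """Sequential sketch of a two-process test-and-set: first caller wins."""
    def __init__(self):
        self.winner = None

    def test_and_set(self, pid):
        if self.winner is None:
            self.winner = pid
            return 0   # winner: leaves on the upper output wire
        return 1       # loser: leaves on the lower output wire
```

In the renaming network, the "values" being sorted are the processes themselves: the test-and-set winner plays the role of the smaller value.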
The algorithm works as follows: each process is assigned a unique input port (running a loose renaming algorithm such as the one from the previous section), and follows a path through the network determined by leaving each two-process test-and-set on its higher output wire if it wins the test-and-set, and on its lower output wire if it loses. The output name is the index (from top to bottom) of the output port it reaches.

There are two major obstacles to turning this idea into a strong adaptive renaming algorithm. The first is that this construction is not adaptive. Since the step complexity of running the renaming network depends on the number of input ports assigned, if we simply use the processes' initial names to assign input ports, we could obtain an algorithm with unbounded worst-case step complexity, since the space of initial identifiers is potentially unbounded. The second obstacle is that a regular sorting network construction has a fixed number of input and output ports, therefore the construction would not adapt to the contention k. Since we would like to avoid assuming any bound on the contention, we need to build a sorting network that "extends" its size as the number of participating processes increases.

In the following, we show how to overcome these problems, and obtain a strong adaptive renaming algorithm with complexity O(log k), with high probability in k.⁵

⁵Notice that, if the contention k is small, the failure probability O(1/k^c), with constant c ≥ 2, may be non-negligible.
In this case, the failure probability can be made to depend on the parameter n at the cost of a multiplicative Θ(log n) factor in the running time of the algorithm.

Shared: renaming network R

procedure rename(v_i):
    w ← input wire corresponding to v_i in R
    while w is not an output wire:
        T ← next test-and-set on wire w of R
        res ← T.test-and-set()
        if res = 0:
            w ← output wire x' of T
        else:
            w ← output wire y' of T
    return w.index

Figure 8: Pseudocode for executing a renaming network.

5.1 Renaming using a Sorting Network

We now give a strong renaming algorithm based on a sorting network. For simplicity, we describe the solution in the case where the bound on the size of the initial namespace, M, is finite and known. We circumvent this limitation in Section 5.2.

5.1.1 Renaming Networks

We start from an arbitrary sorting network with M input and output ports, in which we replace the comparators with two-process test-and-set objects. The structure of a comparator is given in Figure 7 (please see standard texts, e.g. [34], for background on sorting networks). The two-process test-and-set objects maintain the input ports x, y and the output ports x', y'. We call this object a renaming network.

We assume that each participating process p_i has a unique initial value v_i from 1 to M. (These values can be the initial names of the processes, or names obtained from another renaming algorithm, as described in Section 5.2.) Also part of the process's algorithm is the blueprint of a renaming network with M input ports, which is the same for all participants.

We use the renaming network to solve adaptive tight renaming as follows. (Please see Figure 8 for the pseudocode.) Each participating process enters the execution on the input wire in the sorting network corresponding to its unique initial value v_i.
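As a concrete sketch, the traversal of Figure 8 can be simulated sequentially on a small fixed network (a hypothetical 4-wire example; the comparator list below is a standard 4-input sorting network, and each pair acts as a two-process test-and-set whose first visitor wins):

```python
# Comparators of a 4-wire sorting network, as (top, bottom) wire pairs.
NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def run_network(start_ports):
    """start_ports: pid -> input wire index. Processes run one at a time,
    so the first process to reach a test-and-set wins it."""
    winner = {}   # comparator index -> pid that won its test-and-set
    names = {}
    for pid, wire in start_ports.items():
        for idx, (top, bottom) in enumerate(NETWORK):
            if wire not in (top, bottom):
                continue
            if idx not in winner:
                winner[idx] = pid    # wins: move "up"
                wire = top
            else:
                wire = bottom        # loses: move "down"
        names[pid] = wire + 1        # 1-based output port index = new name
    return names
```

In this sequential simulation, two processes entering on arbitrary distinct ports leave on output ports 1 and 2, matching the execution depicted in Figure 9.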
The process competes in two-process test-and-set instances as follows: if the process returns 0 (wins) in a two-process test-and-set, then it moves "up" in the network, i.e. follows output port x' of the test-and-set; otherwise it moves "down," i.e. follows output port y'. Each process continues until it reaches an output port b_ℓ. The process returns the index ℓ of the output port b_ℓ as its output value. See Figure 9 for a simple illustration of a renaming network execution.

Figure 9: Execution of a renaming network. The two processes start at arbitrary distinct input ports, and proceed through the network until reaching an output port. The two-process test-and-set objects are depicted as locks. A two-process test-and-set object is highlighted if it has been won during the execution. The execution depicted is one in which the processes proceed sequentially (the upper process first executes to completion, then the lower process executes). The two processes reached output ports 1 and 2, even though they started at arbitrary input ports.

Test-and-set. In this section, the test-and-set objects used as comparators are implemented using the algorithm of Tromp and Vitányi [55]; in Section 6, we will assume hardware implementations of test-and-set. This distinction is only important when computing the complexity of the construction, and does not affect its correctness.

5.1.2 Renaming Network Analysis

In the following, we show that the renaming network construction solves adaptive strong renaming, i.e. that processes return values between 1 and k, the total contention in the execution, as long as the size of the initial namespace is bounded by M.

Theorem 1 (Renaming Network Construction). Whenever starting from a correct sorting network, the renaming network construction solves strong adaptive renaming, with the same progress property as the test-and-set objects used.
If the sorting network has depth d (defined below), then each process will perform O(d) test-and-set operations before returning from the renaming network.

Proof. First, we prove that the renaming network is well-formed, i.e. that no two processes may access the same port of a two-process test-and-set object.

Claim. No two processes may access the same port of a two-process test-and-set object.

Proof. Recall that each renaming network is obtained from a sorting network. Therefore, for any renaming network, we can maintain the standard definitions of network and wire depth as for a sorting network [34]. In particular, the depth of a wire is defined as follows. An input wire has depth 0. A test-and-set that has two input wires with depths d_x and d_y will have depth max(d_x, d_y) + 1. A wire in the network has depth equal to the depth of the test-and-set from which it originates. Because there can be no cycles of test-and-sets in a renaming network, this notion is well-defined. The depth of a network is the maximum depth of an output wire.

The claim is equivalent to proving that no two processes may occupy the same wire in an execution of the network. We prove this by induction on the depth of the current wire. The base case, when the depth is 0, i.e. we are examining an input wire, follows from the initial assumption that the initial values v_i of the processes are unique; hence no two processes may join the same input port.

Assume that the claim holds for all wires of depth d ≥ 0. We prove that it holds for any wire of depth d + 1. Notice that the depth of a wire may only increase when passing through a two-process test-and-set object. Consider an arbitrary two-process test-and-set object, with two wires of depth at most d as inputs, and two wires of depth d + 1 as outputs. By the induction hypothesis, the test-and-set is well-formed in all executions, since there may be at most two processes accessing it in any execution. By the specification of test-and-set, it
By the specification of test-and-set, it\nfollows that, in any execution, there can be at most one process returning 0 from\nthe object, and at most one process returning 1 from the object. Therefore, there\ncan be at most one process on either output wire, and the induction step holds.\nThis completes the proof of this claim. \u0003\nTermination follows since the base sorting network has finite depth and, by\ndefinition, contains no cycles. Therefore, the renaming network has the same\ntermination guarantees as the two-process test-and-set algorithm we use. In par-\nticular, if we use the two-process test-and-set implementation of [55], the network\nguarantees termination with probability 1. We prove name uniqueness and names-\npace tightness by ensuring the following claim.\nThe renaming network construction ensures that no two processes return the\nsame output, and that the processes return values between 1 and k, the total con-\ntention in the execution. The proof is based on a simulation argument from an\nexecution of a renaming network to an execution of a sorting network. We start\nfrom an arbitrary execution Eof the renaming network, and we build a valid ex-\necution of a sorting network. The structure of the outputs in the sorting network\nexecution will imply that the tightness and uniqueness properties hold in the re-\nnaming network execution.\nLetPbe the set of processes that have taken at least one step in E. Each process\npi2Pis assigned a unique input port viin the renaming network. Let Idenote the\nset of input ports on which there is a process present. We then introduce a new set\nof “ghost\" processes G, each assigned to one of the input ports in f1;2;:::; MgnI.\nWe denote by Cthe set of “crashed\" processes, i.e. 
processes that took a step in E, but did not return an output port index.

The next step in the transformation is to assign input values to these processes. We assign input value 0 to the processes in P (and correspondingly to their input ports), and input value 1 to the processes in G.

Note that, in execution E, not all test-and-set objects in the renaming network may have been accessed by processes (e.g., the test-and-set objects corresponding to processes in G), and not all processes have reached an output port (i.e., crashed processes and ghost processes). The next step is to simulate the output of these test-and-set operations by extending the current renaming network execution.

We extend the execution by executing each process in C ∪ G until completion. We first execute each process in C, in a fixed arbitrary order, and then execute each process in G, in a fixed arbitrary order. The rules for deciding the result of test-and-set objects for these processes are the following.

• If the current test-and-set T already has a winner in the extension of E, i.e. a process that returned 0 and went "up," then the current process automatically goes "down" at this test-and-set.

• Otherwise, if the winner has not yet been decided in the extension of E, then the current process becomes the winner of T and goes "up," i.e. takes output port x'.

In this way, we obtain an execution in which M processes participate, and each test-and-set object has a winner and a loser. By Claim 5.1.2, the execution is well-formed, i.e. there are never two processes (or two values) on the same wire.
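The 0/1 assignment in this simulation argument can be checked on a small example: participants contribute value 0, ghosts contribute value 1, and the sorting property forces all zeros to the top output ports. A minimal sketch on a hypothetical 4-wire sorting network:

```python
# 4-wire sorting network; comparator (i, j) routes the minimum to wire i.
NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def sort_wires(vals):
    vals = list(vals)
    for i, j in NETWORK:
        if vals[i] > vals[j]:
            vals[i], vals[j] = vals[j], vals[i]
    return vals

# Two participants (value 0) on ports 1 and 3, ghosts (value 1) elsewhere:
occupied = {1, 3}
out = sort_wires([0 if p in occupied else 1 for p in range(4)])
# the zeros end up on the top k = 2 wires, so participants get names 1 and 2
```

Whatever ports the participants occupy, the sorted 0-1 sequence places them on the first k output wires, which is exactly the namespace-tightness argument.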
Also note that the resulting extension of the original execution E is a valid execution of a renaming network, since we are assuming an asynchronous shared-memory model, and the ghost and crashed processes can be seen simply as processes that are delayed until the processes in P \ C have returned.

The key observation is that, for every two-process test-and-set T in the network, T obeys the comparison property of comparators in a sorting network, applied to the values assigned to the participating processes. We take cases on the processes p and q participating in T.

1. If p and q are both in P, then both have associated value 0, so T respects the comparison property irrespective of the winner.

2. If p ∈ P and q ∈ G, then notice that p necessarily wins T, while q necessarily loses T. This is trivial if p ∈ P \ C; if p ∈ C, this property is ensured since we execute all processes in C before the processes in G when extending E. Therefore, the process with associated value 0 always wins the test-and-set.

3. If p and q are both in G, then both have associated value 1, so T respects the comparison property irrespective of the winner.

The final step in this transformation is to replace every test-and-set operation with a comparator between the binary values corresponding to the two processes that participate in the test-and-set. Thus, since we have started from a sorting network, we obtain a sequence of comparator operations ordered in stages, in which each stage contains only comparison operations that may be performed in parallel. The above argument shows that all comparators obey the comparison property applied to the values we assigned to the corresponding processes.
In particular, when the input values are different, the lower value (corresponding to participating processes) always goes "up," while the higher value always goes "down."

Thus, the execution resulting from the last transformation step is in fact a valid execution of the sorting network from which the renaming network has been obtained. Recall that we have associated each process that took a step with a 0 input value, and each ghost process with a 1 input value to the network. Since, by Claim 5.1.2, no two input values may be sorted to the same output port, we first obtain that the output port indices that the processes in P return are unique. For namespace tightness, recall that we have obtained an execution of a sorting network with M input values, M - k of which, i.e. those corresponding to processes in G, are 1. By the sorting property of the network, it follows that the lower M - k output ports of the sorting network are occupied by 1 values. Therefore, the M - k "ghost" processes that have not taken a step in E must be associated with the lower M - k output ports of the network in the extended execution. Conversely, processes in P must be associated with an output port between 1 and k in the extension of the original execution E. The final step is to notice that, in E, we have not modified the output port assignment for the processes in P \ C, i.e. for the processes that returned a value in the execution E. Therefore, these processes must have returned a value between 1 and k. This concludes the proof of this claim and of the Theorem. □

We now apply the renaming network construction starting from sorting networks of optimal logarithmic depth, whose existence is ensured by the AKS construction [4]. (Recall that the AKS construction [4] gives, for any integer N > 0, a network for sorting N integers, whose depth is O(log N). The construction is
The construction is\nquite complex, and therefore we do not present it here.)\n[AKS] The renaming network obtained from an AKS sorting network [4] with\nMinput ports solves the strong adaptive renaming problem with Minitial names,\nguaranteeing name uniqueness in all executions, and using O(logM) test-and-set\noperations per process in the worst case. The termination guarantee is the same as\nthat of the test-and-set objects used.\nProof. The fact that this instance of the algorithm solves strong adaptive renaming\nfollows from Theorem 1. For the complexity claims, notice that the number of\ntest-and-set objects a process enters is bounded by the depth of the sorting network\nfrom which the renaming network has been obtained. In the case of the AKS\nsorting network with Minputs, the depth is O(logM). \u0003\n5.2 A Strong Adaptive Renaming Algorithm\nWe present an algorithm for adaptive strong renaming based on an adaptive sort-\ning network construction. For any k\u00150, the algorithm guarantees that kprocesses\nobtain unique names from 1 to k. We start by presenting a sorting network con-\nstruction that adapts its size and complexity to the number of processes executing\nit. We will then use this network as a basis for an adaptive renaming algorithm\n5.2.1 An Adaptive Sorting Network\nWe present a recursive construction of a sorting network of arbitrary size. We\nwill guarantee that the resulting construction ensures the properties of a sorting\nnetwork whenever truncated to a finite number of input (and output) ports. The\nsorting network is adaptive, in the sense that any value entering on wire nand\nleaving on wire mtraverses at most O(log max( n;m)) comparators.\nLet the width of a sorting network be the number of input (or output) ports in\nthe network. The basic observation is that we can extend a small sorting network B\nto a wider range by inserting it between two much larger sorting networks Aand\nC. 
The resulting network is non-uniform: different paths through the network have different lengths, with the lowest part of the sorting network (in terms of port numbers) having the same depth as B, whereas paths starting at higher port numbers may have higher depth.

Formally, suppose we have sorting networks A, B, and C, where A and C have width m and B has width k < m. Label the inputs of A as A_1, A_2, ..., A_m and the outputs as A'_1, A'_2, ..., A'_m, where i < j means that A'_i receives a value less than or equal to A'_j. Similarly label the inputs and outputs of B and C. Fix ℓ ≤ k/2 and construct a new sorting network ABC with inputs B_1, B_2, ..., B_ℓ, A_1, ..., A_m and outputs B'_1, B'_2, ..., B'_ℓ, C'_1, C'_2, ..., C'_m. Internally, insert B between A and C by connecting outputs A'_1, ..., A'_{k-ℓ} to inputs B_{ℓ+1}, ..., B_k, and outputs B'_{ℓ+1}, ..., B'_k to inputs C_1, ..., C_{k-ℓ}. The remaining outputs of A are wired directly across to the corresponding inputs of C: outputs A'_{k-ℓ+1}, ..., A'_m are wired to inputs C_{k-ℓ+1}, ..., C_m. (See Figure 10.)

Figure 10: One stage in the construction of the adaptive sorting network. (The small labels indicate port numbers; upper is higher.)

Lemma 1. The network ABC constructed as described above is a sorting network.

Proof. The proof uses the well-known zero-one principle [34]: we show that the network correctly sorts all input sequences of zeros and ones, and deduce from this fact that it correctly sorts all input sequences.

Given a particular 0-1 input sequence, let z_B and z_A be the number of zeros in the input that are sent to inputs B_1, ..., B_ℓ and A_1, ..., A_m, respectively. Because A sorts all of its incoming zeros to its lowest outputs, B gets a total of z_B + min(k - ℓ, z_A) zeros on its inputs, and sorts those zeros to outputs B'_1, ..., B'_{z_B + min(k-ℓ, z_A)}.
An additional z_A - min(k - ℓ, z_A) zeros propagate directly from A to C.

We consider two cases, depending on the value of the min:

• Case 1: z_A ≤ k - ℓ. Then B gets z_B + z_A zeros (all of them), sorts them to its lowest outputs, and those that reach outputs B'_{ℓ+1} and above are not moved by C. Therefore, the sorting network is correct in this case.

• Case 2: z_A > k - ℓ. Then B gets z_B + k - ℓ zeros, while z_A - (k - ℓ) zeros are propagated directly from A to C. Because ℓ ≤ k/2, we have z_B + k - ℓ ≥ k/2 ≥ ℓ, and B sends ℓ zeros out its direct outputs B'_1, ..., B'_ℓ. All remaining zeros are fed into C, which sorts them to the next z_A + z_B - ℓ positions. Again, the sorting network is correct. □

When building the adaptive network, it will be useful to constrain which parts of the network particular values traverse. The key tool is given by the following lemma.

Lemma 2. If a value v is supplied to one of the inputs B_1 through B_ℓ in the network ABC, and is one of the ℓ smallest values supplied on all inputs, then v never leaves B.

Proof. Immediate from the construction and Lemma 1; v does not enter A initially, and is sorted to one of the outputs B'_1, ..., B'_ℓ, meaning that it also avoids C. □

Now let us show how to recursively construct a large sorting network with polylogarithmic depth when truncated to the first M positions. We assume that we are using a construction of a sorting network that requires at most a log^c n depth to sort n values, where a and c are constants. For the AKS sorting network [4], we have c = 1 and very large a; for constructible networks (e.g., the bitonic sorting network [45]), we have c = 2 and small a.

Start with a sorting network S_0 of width 2. In general, we will let w_j be the width of S_j; so we have w_0 = 2.
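The growth of this construction can be sketched numerically: the width squares at each level (w_{j+1} = w_j^2, as stated below), while each level adds only O(log^c w_j) depth, so the network truncated at width M has depth polylogarithmic in M. A small illustration (with assumed base values a = 1, c = 1, and depth a for S_0; these constants are illustrative):

```python
import math

def construction_sizes(levels, a=1.0, c=1):
    """(width, depth) of S_0, S_1, ..., following
    w_{j+1} = w_j^2 and d_{j+1} = 2*a*log2(w_j^2 - w_j/2)**c + d_j."""
    w, d = 2, a          # S_0: width 2, a single comparator stage
    sizes = [(w, d)]
    for _ in range(levels):
        d = d + 2 * a * math.log2(w * w - w // 2) ** c
        w = w * w
        sizes.append((w, d))
    return sizes

for width, depth in construction_sizes(4):
    # depth stays within a small constant factor of log(width) for c = 1
    assert depth <= 8 * math.log2(width)
```

For c = 1 the widths are 2, 4, 16, 256, 65536, ... while the depth grows only linearly in log of the width, matching the O(log^c M) bound derived below.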
We also write d_j for the depth of S_j (the number of comparators on the longest path through the network).

Given S_j, construct S_{j+1} by appending two sorting networks A_{j+1} and C_{j+1} of width w_j^2 - w_j/2, and attach them to the top half of S_j as in Lemma 1, setting ℓ = w_j/2.

Observe that w_{j+1} = w_j^2 and d_{j+1} = 2a log^c(w_j^2 - w_j/2) + d_j ≤ 4a log^c(w_j) + d_j. Solving these recurrences gives w_j = 2^(2^j) and d_j = Σ_{i=0}^{j} 2^(c(i+2)) a = O(2^(cj)).

If we set M = 2^(2^j), then j = lg lg M, and d_j = O(2^(c lg lg M)) = O(log^c M). This gives us polylogarithmic depth for a network with M lines, and a total number of comparators of O(M log^c M).

We can in fact state a stronger result, relating the input and output port indices of a value to the complexity of sorting that value:

Theorem 2. For any j ≥ 0, the network S_j constructed above is a sorting network, with the property that any value that enters on the n-th input and leaves on the m-th output traverses O(log^c max(n, m)) comparators.

Proof. That S_j is a sorting network follows by induction on j, using Lemma 1. For the second property, let S_{j'} be the smallest stage in the construction of S_j to which input n and output m are directly connected. Then w_{j'-1}/2 < max(n, m) ≤ w_{j'}/2, which we can rewrite as 2^(2^(j'-1)) < 2 max(n, m) ≤ 2^(2^(j')), or j' - 1 < lg lg max(n, m) ≤ j', implying j' = ⌈lg lg max(n, m)⌉. By Lemma 2, the given value stays in S_{j'}, meaning that it traverses at most d_{j'} = O(2^(c j')) = O(2^(c lg lg max(n, m))) = O(log^c max(n, m)) comparators. □

5.2.2 Transformation to a Renaming Network

We now apply the previous results to renaming networks.

Corollary. Consider the sequence of networks R_j resulting from replacing comparators with two-process test-and-set objects in the extensible sorting network construction from Section 5.2.1. For any M ≥ k > 0, assuming initial names from 1 to M, these networks solve strong renaming for k processes with O(log M) test-and-set accesses per process.

Proof.
Fix M ≥ k > 0, and let j be the first index in the sequence such that the resulting network S_j has at least M inputs and M outputs. By Theorem 2, this network sorts, and has depth O(log M) (considering the version of the construction using the AKS sorting network as a basis). By Theorem 1, the corresponding renaming network R_j solves adaptive strong renaming for any k processes with initial names between 1 and M, performing O(log M) test-and-set accesses per process. □

5.2.3 An Algorithm for Strong Adaptive Renaming

We show how to apply the adaptive sorting network construction to solve strong adaptive renaming when the size of the initial namespace, M, is unknown, and may be unbounded. This procedure can also be seen as transforming an arbitrary renaming algorithm A, guaranteeing a namespace of size M, into a strong renaming algorithm S(A), ensuring a namespace from 1 to k. In case the processes have initial names from 1 to M, A is a trivial algorithm that takes no steps. We first describe this general transformation, and then consider a particular case to obtain a strong adaptive renaming algorithm with logarithmic time complexity. Notice that, in order to work for unbounded contention k, the algorithm may use unbounded space, since the adaptive renaming network construction continues to grow as more and more processes access it.

Description. We assume a renaming algorithm A with complexity C(A), guaranteeing a namespace of size M (which may be a function of k, or of n). We assume that processes share an instance of algorithm A and an adaptive renaming network R, obtained using the procedure in Section 5.2.1.

The transformation is composed of two stages. In the first stage, each process p_i executes the algorithm A and obtains a temporary name v_i from 1 to M. In the second stage, each process uses the temporary name v_i as the index of its (unique) input port to the renaming network R.
The process then executes the renaming network R starting at the given input port, and returns the index of its output port as its name.

Wait-freedom. Notice that, technically, this algorithm may not be wait-free: if the number of processes k participating in an execution is infinite, then it is possible that a process either fails to acquire a temporary name during the first stage, or continually fails to reach an output port by always losing the test-and-set objects in which it participates. Therefore, in the following, we assume that k is finite, and present bounds on step complexity that depend on k.

Constructibility. Recall that we are using the AKS sorting network [4], of O(log M) depth for M inputs, as the basis for the adaptive renaming network construction. However, the constants hidden in the asymptotic notation for this construction are large, and make the construction impractical [45]. On the other hand, since the construction accepts any sorting network as a basis, we can use Batcher's bitonic sorting network [45], with O(log^2 M) depth, as the basis for the construction. Using bitonic networks trades a logarithmic factor in step complexity for ease of implementation.

5.2.4 Analysis of the Strong Adaptive Renaming Algorithm

We now show that the transformation is correct, transforming any renaming algorithm A with namespace M and complexity C(A) into a strong renaming algorithm, with complexity cost C(A) + O(log M).

Theorem 3 (Namespace Boosting). Given any renaming algorithm A ensuring namespace M with expected worst-case step complexity C(A), the renaming network construction yields an algorithm S(A) ensuring strong renaming. The number of test-and-set operations that a process performs in the renaming network is O(log M).
Moreover, if A is adaptive, then the algorithm S(A) is also adaptive. When using the randomized test-and-set construction of [55], the number of steps that a process takes in the renaming network is O(log M), both in expectation and with high probability in k.

Proof. Fix an algorithm A with namespace M and worst-case step complexity C(A). Therefore, we can assume that, during the current execution, each process enters a unique input port between 1 and M in the adaptive renaming network. By Corollary 5.2.2, each process reaches a unique output port between 1 and k, which ensures that the transformation solves strong renaming.

If the algorithm A is adaptive, i.e. the namespace size M and its complexity C(A) depend only on k, then the entire construction is adaptive, since the adaptive renaming network guarantees a namespace of size k, and complexity O(log M), which only depends on k. This concludes the proof of correctness.

For the upper bound on worst-case step complexity, notice that a process may take at most C(A) steps while running the first stage of the algorithm. By Corollary 5.2.2, we obtain that a process performs O(log M) test-and-set accesses in any execution. Since the randomized test-and-set construction of [55] has constant expected step complexity, the worst-case expected step complexity of the whole construction is C(A) + O(log M).

To obtain the high-probability bound on the number of read-write operations performed by a process in the renaming network, first recall that the number of test-and-set operations that a process may perform while executing the renaming network is Θ(log M). Therefore, we can see the number of read-write steps that a process takes while executing the renaming network as a sum of Θ(log M) geometrically distributed random variables, one for each two-process test-and-set.
It follows that the number of steps that a process performs while executing the renaming network is O(log M) with high probability in M. Since M ≥ k, this bound also holds with high probability in k. □

We now substitute the generic algorithm A with the RatRace loose renaming algorithm of [10], whose structure and properties are given in the Appendix. We obtain a strong renaming algorithm with logarithmic step complexity. First, the properties of the RatRace renaming algorithm are as follows.

Proposition 5.2.4 (RatRace Renaming). For constant c ≥ 3, the RatRace renaming algorithm described above yields an adaptive renaming algorithm ensuring a namespace of size O(k^c) in O(log k) steps, both with high probability in k. Every process eventually returns with probability 1.

This implies the following.

Corollary. There exists an algorithm T such that, for any finite k ≥ 1, T solves strong adaptive renaming with worst-case step complexity O(log k). The upper bound holds in expectation and with high probability in k.

Proof. We replace the algorithm A in Theorem 3 with RatRace renaming. We obtain a correct adaptive strong renaming algorithm.

For the upper bounds on complexity, by Proposition 5.2.4, the RatRace renaming algorithm ensures a namespace of size O(k^c) using O(log k) steps, with probability at least 1 − 1/k^c, for some constant c ≥ 3. The complexity of the resulting strong renaming algorithm is at most the complexity of RatRace renaming plus the complexity of executing the renaming network. By Theorem 3, with probability at least 1 − 1/k^c, this is at most

O(log k) + O(log k^c) = O(log k).

The expected step complexity upper bound follows identically. Finally, since RatRace is adaptive, the transformation also yields an adaptive renaming algorithm. □
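To make the renaming-network mechanics concrete, the following is a small sequential simulation (our illustrative sketch, not code from the survey): it builds the comparator list of Batcher's bitonic sorting network, treats each comparator as a two-process test-and-set whose winner takes the lower-numbered output wire, and replays one admissible schedule to check that k participants entering arbitrary distinct input ports among M exit on wires 1 through k. All function names here are ours.

```python
def bitonic_comparators(n):
    """Comparators of Batcher's bitonic sorting network on n wires
    (n a power of two). Each pair (lo, hi) routes the smaller value
    to wire lo; the network has O(log^2 n) depth."""
    comps = []
    k = 2
    while k <= n:
        j = k // 2
        while j >= 1:
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    # Orientation alternates so merge stages see bitonic input.
                    comps.append((i, partner) if i & k == 0 else (partner, i))
            j //= 2
        k *= 2
    return comps

def simulate_renaming_network(M, input_ports):
    """Sequential simulation of one schedule of the renaming network.
    A participant entering an input port carries priority 0 (the
    test-and-set 'winner' side); empty wires carry priority 1. After
    all comparators, the k participants occupy the top k output wires,
    i.e. they receive the names 1..k."""
    wires = [(1, None)] * M                      # (priority, participant id)
    for pid, port in enumerate(input_ports):
        wires[port] = (0, pid)
    for lo, hi in bitonic_comparators(M):
        if wires[lo][0] > wires[hi][0]:          # comparator = 2-process TAS
            wires[lo], wires[hi] = wires[hi], wires[lo]
    return {pid: w + 1 for w, (_, pid) in enumerate(wires) if pid is not None}
```

Non-participating wires carry a "loser" token, so a comparator visited by a single process behaves like a test-and-set that process wins; ties between equal priorities never swap, which corresponds to fixing one outcome of each race.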
We also obtain the following corollary, which applies to the case when test-and-set is available as a base object.

Corollary 5.2.4. Given any renaming algorithm A ensuring namespace M with worst-case step complexity C(A), and assuming test-and-set base objects with constant cost, the renaming network construction yields an algorithm S(A) ensuring strong renaming with worst-case step complexity C(A) + O(log M). Moreover, if A is adaptive, then the algorithm S(A) is also adaptive.

6 From an Optimal Randomized Algorithm to a Tight Deterministic Lower Bound

In this section, we prove a linear lower bound on the time complexity of deterministic renaming in asynchronous shared memory. The lower bound holds for algorithms using reads, writes, test-and-set, and compare-and-swap operations, and is matched within constants by existing algorithms, as discussed in Section 3. We first prove the lower bound for adaptive deterministic renaming, and then extend it to non-adaptive renaming by reduction. The lower bound holds for algorithms that either rename into a sub-exponential namespace in k (if the algorithm is adaptive) or into a polynomial namespace in n (if the algorithm is not adaptive).

The Strategy. We obtain the result by reduction from a lower bound on mutual exclusion. The argument can be split in two steps, outlined in Figure 11. The first step assumes a wait-free algorithm R, renaming adaptively into a loose namespace of sub-exponential size M(k), and obtains an algorithm T(R) for strong adaptive renaming. As shown in Section 5, the extra complexity cost of this step is an additive factor of O(log M(k)).^6

The second step uses the strong renaming algorithm T(R) to solve adaptive mutual exclusion, with the property that the RMR complexity of the resulting adaptive mutual exclusion algorithm ME(T(R)) is O(C(k) + log M(k)), where C(k) is the step complexity of the initial algorithm R.
Finally, we employ an Ω(k) lower bound on the RMR complexity of adaptive mutual exclusion by Anderson and Kim [44]. When plugging in any sub-exponential function for M(k) in the expression bounding the RMR complexity of the adaptive mutual exclusion algorithm ME(T(R)), we obtain that the algorithm R must have step complexity at least linear in k.

^6 Since we are assuming a system with atomic test-and-set and compare-and-swap operations, we can use such operations with unit cost in the construction from Section 5.

[Figure 11: The structure of the reduction in Theorem 4 — a renaming algorithm R with namespace M(k) and complexity C(k) yields (Claim 3) a strong renaming algorithm T(R) with namespace k and complexity O(C(k) + log M(k)), which in turn yields (Claim 4) an adaptive mutex algorithm ME(T(R)) with RMR complexity O(C(k) + log M(k)).]

Applications. This result also implies a linear lower bound on the time complexity of non-adaptive renaming algorithms, which guarantee names from 1 to some polynomial function in n, with n known. This generalization holds by reduction, and is proven in full in [6].

A second application follows from the observation that many common shared-memory objects such as queues, stacks, and fetch-and-increment registers can be used to solve adaptive strong renaming. In turn, this implies that the linear lower bound also applies to deterministic shared-memory implementations of these objects using read, write, compare-and-swap, or test-and-set operations.

6.1 Adaptive Lower Bound

In this section, we prove the following result.

Theorem 4 (Individual Time Lower Bound). For any k ≥ 1, given n = Ω(k·2^k), any wait-free deterministic adaptive renaming algorithm that renames into a namespace of size at most 2^{f(k)} for any function f(k) = o(k) has a worst-case execution with 2k − 1 participants in which (1) some process performs Ω(k) RMRs (and Ω(k) steps) and (2) each participating process performs a single rename operation.

Proof.
We begin by assuming for contradiction that there exists a deterministic adaptive algorithm R that renames into a namespace of size M(k) = 2^{f(k)} for f(k) ∈ o(k), with step complexity C(k) = o(k). The first step in the proof is to show that any such algorithm can be transformed into a wait-free algorithm that solves adaptive strong renaming in the same model, augmented with test-and-set base objects; the complexity cost of the resulting algorithm will be O(C(k) + log M(k)). This result follows immediately from Corollary 5.2.4.

Claim 3. Assuming test-and-set as a base object, any wait-free algorithm R that renames into a namespace of size M(k) with complexity C(k) can be transformed into a strong adaptive renaming algorithm T(R) with complexity O(C(k) + log M(k)).

Returning to the main proof, in the context of the assumed algorithm R, the claim guarantees that the resulting algorithm T(R) solves strong adaptive renaming with complexity o(k) + O(log 2^{f(k)}) = o(k) + O(f(k)) = o(k).

The second step in the proof shows that any wait-free strong adaptive renaming algorithm can be used to solve adaptive mutual exclusion with only a constant increase in terms of step complexity. We note that the mutual exclusion algorithm obtained is single-use (i.e., each process executes it exactly once).

Claim 4. Any deterministic algorithm R for adaptive strong renaming implies a correct adaptive mutual exclusion algorithm ME(R). The RMR complexity of ME(R) is upper bounded asymptotically by the RMR complexity of R, which is in turn upper bounded by its step complexity.

Proof. We begin by noting a few key distinctions between renaming and mutual exclusion. Renaming algorithms are usually wait-free, and assume a read-write shared-memory model which may be augmented with atomic compare-and-swap or test-and-set operations; complexity is measured in the number of steps that a process takes during the execution. For simplicity, in the following, we abuse notation and call this the wait-free (WF) model.
Mutual exclusion assumes a more specific cache-coherent (CC) or distributed shared memory (DSM) shared-memory model with no process failures (otherwise, a process crashing in the critical section would block the processes in the entry section forever). Thus, solutions to mutual exclusion are inherently blocking; the complexity of mutex algorithms is measured in terms of remote memory references (RMRs). We call this second model the failure-free, local-spinning model, in short LS.

The transformation from the adaptive tight renaming algorithm R in WF to the mutex algorithm ME(R) in LS uses the algorithm R to solve mutual exclusion. The key idea is to use the names obtained by processes as tickets to enter the critical section.

Processes share a copy of the algorithm R, and a right-infinite array of shared bits Done[1, 2, …], initially false. For the enter procedure of the mutex implementation, each of the k participating processes runs algorithm R, and obtains a unique name from 1 to k. Since the algorithm R is wait-free, it can be run in the LS model with no modifications.

The process that obtained name 1 enters the critical section; upon leaving, it sets the Done[1] bit to true. Any process that obtains a name id ≥ 2 from the adaptive renaming object spins on the Done[id − 1] bit associated with name id − 1, until the bit is set to true. When this occurs, the process enters the critical section. When calling the exit procedure to release the critical section, each process sets the Done[id] bit associated with its name to true and returns. This construction is designed for the CC model.

We now show that this construction is a correct mutex implementation.

• For the mutual exclusion property, let q_i be the process that obtained name i from the renaming network, for i ∈ {1, …, k}.
Notice that, by the structure of the protocol, for any i ∈ {1, …, k − 1}, process q_{i+1} may enter the critical section only after process q_i has exited the critical section, since process q_i sets the Done[i] bit to true only after executing the critical section. This creates a natural ordering between processes' accesses to the critical section, which ensures that no two processes may enter it concurrently.

• For the no deadlock and no lockout properties, first notice that, since the mutex algorithm runs in a failure-free model, and the test-and-set instances we use in the renaming network are deterministically wait-free, it follows that every process will eventually reach an output port in the renaming network. Thus, by Theorem 3, each process will eventually be assigned a name from 1 to k. Conversely, each name i from 1 to k will eventually get assigned to a unique process q_i. Therefore, each of the Done[] bits corresponding to names 1, …, k will eventually be set to true, which implies that eventually each process enters the critical section, as required.

• The unobstructed exit condition holds since each process performs a single operation in the exit section.

For the complexity claims, notice that, once a process obtains its name from algorithm R, it performs at most two extra RMRs before entering the critical section, since RMRs may be charged only when first reading the Done[id − 1] register, and when the value of this register is set to true. Therefore, the (individual or global) RMR complexity of the mutex algorithm is the same (modulo constant multiplicative factors) as the RMR complexity of the original algorithm R. Since the algorithm R is wait-free, its RMR complexity is a lower bound on its step complexity.

The last remaining claim is that the resulting mutual exclusion algorithm is adaptive, i.e., its complexity only depends on the contention k in the execution, and the algorithm works for any value of the parameter n.
This follows since the original algorithm R was adaptive, and by the structure of the transformation. In fact, the transformation does not require an upper bound on n to be known; if such an upper bound is provided, then it can be used to bound the size of the Done[] array. This concludes the proof of the claim. □

Final argument. To conclude the proof of Theorem 4, notice that the algorithm resulting from the composition of the two claims, ME(T(R)), is an adaptive mutual exclusion algorithm that requires o(k) + O(f(k)) = o(k) RMRs to enter and exit the critical section, in the cache-coherent model, where 2^{f(k)} is the size of the namespace guaranteed by the renaming algorithm.

However, the existence of this algorithm contradicts the Ω(k) lower bound on the RMR complexity of adaptive mutual exclusion by Anderson and Kim [44, Theorem 2], stated below.

Theorem 5 (Mutex Time Lower Bound [44]). For any k ≥ 1, given n = Ω(k·2^k), any deterministic mutual exclusion algorithm using reads, writes, and compare-and-swap operations that accepts at least n participating processes has a computation involving 2k − 1 participants in which some process performs k remote memory references to enter and exit the critical section [44].

The algorithm R is adaptive and therefore works for unbounded n. Therefore, the adaptive mutual exclusion algorithm ME(T(R)) also works for unbounded n. Hence the above mutual exclusion lower bound contradicts the existence of algorithm ME(T(R)). The contradiction arises from our initial assumption on the existence of algorithm R. The claim about step complexity follows since, for wait-free algorithms, the RMR complexity is always a lower bound on step complexity. The claim about the number of rename operations follows from the structure of the transformation and from that of the mutual exclusion lower bound of [44], in which each process performs the entry section once. □
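The ticket-based construction used in the reduction can be sketched with threads as follows (an illustrative sketch: a lock-protected counter stands in for the strong renaming algorithm R, and a `threading.Event` per name replaces spinning on a shared Done bit; the `TicketMutex` class and its method names are our own, not from the survey):

```python
import threading

class TicketMutex:
    """Single-use mutual exclusion from strong renaming (sketch of ME(R)).
    A lock-protected counter stands in for the renaming algorithm R; any
    strong renaming object handing out names 1..k would work the same."""
    def __init__(self, max_names):
        self._next_name = 1
        self._name_lock = threading.Lock()
        # _done[i] is set once the process named i has left the critical section.
        self._done = [threading.Event() for _ in range(max_names + 1)]
        self._done[0].set()                      # sentinel: "name 0 already exited"

    def rename(self):
        with self._name_lock:                    # stand-in for running R
            name = self._next_name
            self._next_name += 1
        return name

    def enter(self, name):
        self._done[name - 1].wait()              # wait until Done[name - 1] is true

    def exit(self, name):
        self._done[name].set()                   # Done[name] := true

# k threads each perform one critical section; entries occur in name order.
trace, mx = [], TicketMutex(5)

def worker():
    name = mx.rename()
    mx.enter(name)
    trace.append(name)                           # the critical section
    mx.exit(name)

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because a process named id blocks until Done[id − 1] is set, critical-section entries happen in strict name order, which is exactly the ordering argument used in the mutual-exclusion proof above.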
6.1.1 Technical Notes

Relation between k and n. The lower bound of Anderson and Kim [44] from which we obtain our result assumes large values of n, the maximum possible number of participating processes, in the order of k·2^k. Therefore, for a fixed n, the relative value of k for which the linear lower bound is obtained may be very small. For example, the lower bound does not preclude an algorithm with running time O(min(k, log n)) if n is known in advance.

Read-write algorithms. Notice that, although the first reduction step employs compare-and-swap (or test-and-set) operations for building the renaming network, the lower bound also holds for algorithms that only employ read or write operations, since the renaming network is independent from the original renaming algorithm R.

Single-use mutex. As noted above, the mutual exclusion algorithm we obtained is single-use. This is not a problem for the lower bound, since it holds for executions where each process invokes the entry section once; however, it limits the usefulness of the algorithm. We note that the algorithm can be extended to a variant where processes invoke the critical section several times; however, in this case the time complexity will be logarithmic in the total number of mutual exclusion calls in the execution.

Progress conditions. Known adaptive renaming algorithms, e.g., [52], [7], do not guarantee wait-freedom in executions where the number of participants is unbounded, since a process may be prevented indefinitely from acquiring a name by new incoming processes.
Note that our lower bound applies to these algorithms as well, as the original mutual exclusion lower bound of Anderson and Kim [44] applies to all mutex algorithms ensuring livelock-freedom, and our transformation does not require a strengthening of this progress condition.

6.2 Applications

6.2.1 Non-Adaptive Renaming

The above argument can be extended to apply to non-adaptive renaming algorithms as well, as long as they start with names from a namespace of unbounded size, which matches the problem definition we considered. The argument is technical, and requires the definition of an auxiliary task called renaming with fails, which allows for the possibility of failure when acquiring a name. We refer the reader to [6] for the complete argument, and simply state the claim here.

Any deterministic non-adaptive renaming algorithm, with the property that for any n ≥ 1 the algorithm ensures a namespace polynomial in n, has worst-case step complexity Ω(n).

6.2.2 Lower Bounds for Other Objects

These results imply time lower bounds for implementations of other shared objects, such as fetch-and-increment registers, queues, and stacks. Some of these results are new, while others improve on previously known results.

We first show reductions between fetch-and-increment, queues, and stacks, on the one hand, and adaptive strong renaming, on the other hand.

Lemma 3. For any k > 0, we can solve adaptive strong renaming using a fetch-and-increment register, a queue, or a stack.

Proof. Given a linearizable fetch-and-increment register, we can solve adaptive strong renaming by having each participant call the fetch-and-increment operation once, and return the value received plus 1. The renaming properties follow trivially from the sequential specification of fetch-and-increment.

Given a linearizable shared queue, we can solve renaming as follows.
If an upper bound on n is given, then we initialize the queue with the distinct integers 1, 2, …, n; otherwise, we initialize it with an unbounded sequence of integers 1, 2, 3, …. In both cases, 1 is the element at the head of the queue. Given this initialized object, we can solve adaptive strong renaming by having each participant call the dequeue operation once, and return the value received. Correctness follows trivially from the sequential specification of the queue.

Finally, given a stack, we initialize it with the same sequence of integers, where 1 is the top of the stack. To solve renaming, each process performs pop on the stack and returns the element received. □

This implies a linear time lower bound for these objects.

Corollary (Queues, Stacks, Fetch-and-Increment). Consider a wait-free linearizable implementation A of a fetch-and-increment register, queue, or stack, in shared memory with read, write, test-and-set, and compare-and-swap operations. If the algorithm A is deterministic, then, for any k ≥ 1, given n = Ω(k·2^k), there exists an execution of A with 2k − 1 participants in which (1) each participant performs a single call to the object, and (2) some process performs k RMRs (or steps).

7 Discussion and Open Questions

We have surveyed tight bounds on the complexity of the renaming problem in asynchronous shared memory, both for deterministic and randomized algorithms. In particular, we have seen that, using randomization, we can achieve a tight namespace in logarithmic expected time, and that deterministic implementations of renaming have linear time complexity as long as they ensure a polynomial-size namespace.

Several open questions remain. In shared memory, the deterministic lower bound is matched by several algorithms in the literature. For algorithms using only reads and writes, which have been studied more extensively, the algorithm of Chlebus and Kowalski [33] matches the linear time lower bound, giving a namespace of size (8k − log k − 1); an elegant algorithm by Attiya and Fouren [20] achieves a tighter namespace of size (6k − 1); however, this last algorithm only matches the time lower bound within a logarithmic factor. The fastest known algorithm to achieve an optimal namespace of size (2k − 1) using only reads and writes was given by Afek et al. [3], with time complexity O(k²). We have thus reached our first open question.

Q1. What are the trade-offs between time complexity and namespace size for deterministic asynchronous renaming?

One disadvantage of the renaming network algorithm is that it is based on an AKS sorting network [4], which has prohibitively high constants hidden inside the asymptotic notation [45]. Thus, it would be interesting to see whether one can obtain constructible randomized solutions that are time-optimal and namespace-optimal. On the other hand, the total lower bound holds only for adaptive algorithms; it is not known whether faster non-adaptive algorithms exist, which could in theory go below the logarithmic threshold. We conjecture that Ω(log n) steps is a lower bound for non-adaptive randomized algorithms as well.

Q2. What are the tight bounds on the time complexity of randomized non-adaptive renaming?

One aspect of these concurrent data structures which has been somewhat neglected by research is space complexity, i.e., the number of registers necessary for correct shared-memory implementations. Recent work [35, 40] has begun looking into this area as well. The question of tight bounds for renaming parametrized by namespace size is still open, however, and should yield interesting new insights on this problem.

Q3.
What are the space-time-namespace trade-offs for renaming?

Our lower bounds apply to implementations of more complex objects, such as queues, stacks, or fetch-and-increment counters. The total step lower bound suggests that there are complexity thresholds which cannot be avoided even with the use of randomization. In particular, the average step complexity for adaptive versions of these data structures is logarithmic, even when using randomization. However, for many such objects there do not exist algorithms that match this logarithmic lower bound. In terms of circumventing this bound, recent results [5], [26] suggest that weaker adversarial models and relaxing object semantics, e.g., allowing approximate implementations, could be used to go below this logarithmic threshold.

Q4. Give tight bounds for asynchronous queues, stacks, and counters.

An area where open questions are still abundant is that of message-passing implementations. In particular, the round complexity of renaming is open in both synchronous and asynchronous models, and known techniques do not appear to apply in this setting. Recent work [13] has given tight quadratic bounds for the message complexity of renaming in the classic asynchronous model, but the question of time (round) complexity of renaming in this model is still open.

Q5. What is the time complexity of renaming in asynchronous message-passing?

References

[1] Yehuda Afek, Hagit Attiya, Arie Fouren, Gideon Stupp, and Dan Touitou. Long-lived renaming made adaptive. In Proc. 18th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 91–103. ACM, 1999.
[2] Yehuda Afek, Eli Gafni, John Tromp, and Paul M. B. Vitányi. Wait-free test-and-set (extended abstract). In Proc. 6th International Workshop on Distributed Algorithms (WDAG), pages 85–94. Springer-Verlag, 1992.
[3] Yehuda Afek and Michael Merritt. Fast, wait-free (2k−1)-renaming. In Proc. 18th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 105–112. ACM, 1999.
[4] Miklos Ajtai, Janos Komlós, and Endre Szemerédi. An O(n log n) sorting network. In Proc. 15th Annual ACM Symposium on Theory of Computing (STOC), pages 1–9. ACM, 1983.
[5] Dan Alistarh and James Aspnes. Sub-logarithmic test-and-set against a weak adversary. In Proc. 25th International Conference on Distributed Computing (DISC), pages 97–109, 2011.
[6] Dan Alistarh, James Aspnes, Keren Censor-Hillel, Seth Gilbert, and Rachid Guerraoui. Tight bounds for asynchronous renaming. J. ACM, 61(3):18:1–18:51, June 2014.
[7] Dan Alistarh, James Aspnes, Keren Censor-Hillel, Seth Gilbert, and Morteza Zadimoghaddam. Optimal-time adaptive strong renaming, with applications to counting. In Proc. 30th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 239–248, 2011.
[8] Dan Alistarh, James Aspnes, George Giakkoupis, and Philipp Woelfel. Randomized loose renaming in O(log log n) time. In Proc. 2013 ACM Symposium on Principles of Distributed Computing (PODC), pages 200–209. ACM, 2013.
[9] Dan Alistarh, James Aspnes, Seth Gilbert, and Rachid Guerraoui. The complexity of renaming. In Proc. 52nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 718–727, 2011.
[10] Dan Alistarh, Hagit Attiya, Seth Gilbert, Andrei Giurgiu, and Rachid Guerraoui. Fast randomized test-and-set and renaming. In Proc. 24th International Conference on Distributed Computing (DISC), pages 94–108. Springer-Verlag, 2010.
[11] Dan Alistarh, Hagit Attiya, Rachid Guerraoui, and Corentin Travers. Early-deciding renaming in O(log f) rounds or less. In Proc. 19th International Colloquium on Structural Information and Communication Complexity (SIROCCO). Springer-Verlag, 2012.
[12] Dan Alistarh, Oksana Denysyuk, Luís Rodrigues, and Nir Shavit. Balls-into-leaves: Sub-logarithmic renaming in synchronous message-passing systems. In Proc. 2014 ACM Symposium on Principles of Distributed Computing (PODC), pages 232–241. ACM, 2014.
[13] Dan Alistarh, Rati Gelashvili, and Adrian Vladu. How to elect a leader faster than a tournament. In Proc. 2015 ACM Symposium on Principles of Distributed Computing (PODC), pages 365–374. ACM, 2015.
[14] James H. Anderson and Mark Moir. Using local-spin k-exclusion algorithms to improve wait-free object implementations. Distributed Computing, 11(1):1–20, 1997.
[15] James Aspnes, Hagit Attiya, and Keren Censor. Polylogarithmic concurrent data structures from monotone circuits. Journal of the ACM, 59(1):2:1–2:24, February 2012.
[16] James Aspnes, Maurice Herlihy, and Nir Shavit. Counting networks. Journal of the ACM, 41(5):1020–1048, September 1994.
[17] Hagit Attiya, Amotz Bar-Noy, Danny Dolev, David Peleg, and Ruediger Reischuk. Renaming in an asynchronous environment. Journal of the ACM, 37(3):524–548, 1990.
[18] Hagit Attiya and Vita Bortnikov. Adaptive and efficient mutual exclusion. Distributed Computing, 15(3):177–189, 2002.
[19] Hagit Attiya and Taly Djerassi-Shintel. Time bounds for decision problems in the presence of timing uncertainty and failures. J. Parallel Distrib. Comput., 61(8):1096–1109, 2001.
[20] Hagit Attiya and Arie Fouren. Adaptive and efficient algorithms for lattice agreement and renaming. SIAM J. Comput., 31(2):642–664, 2001.
[21] Hagit Attiya and Danny Hendler. Time and space lower bounds for implementations using k-CAS. IEEE Trans. Parallel and Distrib. Syst., 21(2):162–173, 2010.
[22] Hagit Attiya, Maurice Herlihy, and Ophir Rachman. Atomic snapshots using lattice agreement. Distributed Computing, 8(3):121–132, March 1995.
[23] Hagit Attiya, Fabian Kuhn, C. Greg Plaxton, Mirjam Wattenhofer, and Roger Wattenhofer. Efficient adaptive collect using randomization. Distributed Computing, 18(3):179–188, 2006.
[24] Hagit Attiya and Jennifer Welch. Distributed Computing: Fundamentals, Simulations, and Advanced Topics. McGraw-Hill, 1998.
[25] Amotz Bar-Noy and Danny Dolev. Shared-memory vs. message-passing in an asynchronous distributed environment. In Proc. 8th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 307–318. ACM, 1989.
[26] Michael A. Bender and Seth Gilbert. Mutual exclusion with O(log² log n) amortized work. In Proc. 52nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 728–737, 2011.
[27] Elizabeth Borowsky and Eli Gafni. Immediate atomic snapshots and fast renaming. In Proc. 12th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 41–51. ACM, 1993.
[28] Alex Brodsky, Faith Ellen, and Philipp Woelfel. Fully-adaptive algorithms for long-lived renaming. In Proc. 20th International Symposium on Distributed Computing (DISC), pages 413–427, 2006.
[29] James E. Burns and Gary L. Peterson. The ambiguity of choosing. In Proc. 8th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 145–157. ACM, 1989.
[30] Armando Castañeda and Sergio Rajsbaum. New combinatorial topology bounds for renaming: the lower bound. Distributed Computing, 22(5-6):287–301, 2010.
[31] Armando Castañeda and Sergio Rajsbaum. New combinatorial topology bounds for renaming: the upper bound. Journal of the ACM, 59(1):3, 2012.
[32] Soma Chaudhuri, Maurice Herlihy, and Mark R. Tuttle. Wait-free implementations in message-passing systems. Theor. Comput. Sci., 220(1):211–245, 1999.
[33] Bogdan S. Chlebus and Dariusz R. Kowalski. Asynchronous exclusive selection. In Proc. 27th Annual ACM Symposium on Principles of Distributed Computing (PODC), pages 375–384. ACM, 2008.
[34] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, 3rd edition, 2009.
[35] Carole Delporte-Gallet, Hugues Fauconnier, Eli Gafni, and Leslie Lamport. Adaptive register allocation with a linear number of registers. In Proc. 27th International Symposium on Distributed Computing (DISC 2013), volume 8205 of Lecture Notes in Computer Science, pages 269–283. Springer, 2013.
[36] Edsger W. Dijkstra. Solution of a problem in concurrent programming control. Communications of the ACM, 8(9):569, September 1965.
[37] Wayne Eberly, Lisa Higham, and Jolanta Warpechowska-Gruca. Long-lived, fast, waitfree renaming with optimal name space and high throughput. In DISC, pages 149–160, 1998.
[38] Alan David Fekete. Asymptotically optimal algorithms for approximate agreement. Distributed Computing, 4:9–29, 1990.
[39] Faith Ellen Fich, Danny Hendler, and Nir Shavit. Linear lower bounds on real-world implementations of concurrent objects. In Proc. 46th IEEE Symposium on Foundations of Computer Science (FOCS), pages 165–173, 2005.
[40] Maryam Helmi, Lisa Higham, and Philipp Woelfel. Space bounds for adaptive renaming. In Proc. 28th International Symposium on Distributed Computing (DISC 2014), pages 303–317, 2014.
[41] Maurice Herlihy. Wait-free synchronization. ACM Transactions on Programming Languages and Systems, 13(1):123–149, January 1991.
[42] Maurice Herlihy and Nir Shavit. The topological structure of asynchronous computability. Journal of the ACM, 46(2):858–923, 1999.
[43] Maurice Herlihy and Nir Shavit. The Art of Multiprocessor Programming. Morgan Kaufmann, 2008.
[44] Yong-Jik Kim and James H. Anderson. A time complexity lower bound for adaptive mutual exclusion. Distributed Computing, 24(6):271–297, 2012.
[45] Donald E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison Wesley Longman, 2nd edition, 1998.
[46] Shay Kutten, Rafail Ostrovsky, and Boaz Patt-Shamir. The Las-Vegas processor identity problem (how and when to be unique). J. Algorithms, 37(2):468–494, 2000.
[47] Leslie Lamport. A fast mutual exclusion algorithm. ACM Trans. Comput. Syst., 5(1):1–11, January 1987.
[48] Leslie Lamport, Robert Shostak, and Marshall Pease. The Byzantine generals problem. ACM Trans. Program. Lang. Syst., 4(3):382–401, July 1982.
[49] Richard J. Lipton and Arvin Park. The processor identity problem. Inf. Process. Lett., 36(2):91–94, October 1990.
[50] Nancy A. Lynch. Distributed Algorithms. Morgan Kaufmann, 1996.
[51] Mark Moir and James H. Anderson. Wait-free algorithms for fast, long-lived renaming. Sci. Comput. Program., 25(1):1–39, October 1995.
[52] Mark Moir and Juan A. Garay. Fast, long-lived renaming improved and simplified. In Proc. 10th International Workshop on Distributed Algorithms (WDAG), pages 287–303. Springer-Verlag, 1996.
[53] Michael Okun. Strong order-preserving renaming in the synchronous message passing model. Theor. Comput. Sci., 411(40-42):3787–3794, 2010.
[54] Alessandro Panconesi, Marina Papatriantafilou, Philippas Tsigas, and Paul M. B. Vitányi. Randomized naming using wait-free shared variables. Distributed Computing, 11(3):113–124, 1998.
[55] John Tromp and Paul Vitányi. Randomized two-process wait-free test-and-set. Distributed Computing, 15(3):127–135, 2002.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CK6UHUPatsU",
"year": null,
"venue": "Bull. EATCS 2014",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/292/274",
"forum_link": "https://openreview.net/forum?id=CK6UHUPatsU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Analog and Hybrid Computation: Dynamical Systems and Programming Languages",
"authors": [
"André Platzer"
],
"abstract": "Analog and Hybrid Computation: Dynamical Systems and Programming Languages",
"keywords": [],
"raw_extracted_content": "TheLogic in Computer Science Column\nby\nYuriGurevich\nMicrosoft Research\nOne Microsoft Way, Redmond WA 98052, USA\[email protected]\nAnalog and Hybrid Computation :\nDynamical Systems and\nProgramming Languages\nAndré Platzer\nComputer Science Department\nCarnegie Mellon University\nPittsburgh, USA\[email protected]\nAbstract\nThe purpose of this article is to serve as a light-weight introduction into the mys-\nteries of analog and hybrid computing models from a dynamical systems and pro-\ngramming languages perspective. Hybrid systems are the dynamical systems that\ncombine both models of computation, i.e., have interacting discrete and continuous\ndynamics. They have found widespread application as models for embedded com-\nputing in embedded systems as well as in cyber-physical systems. The primary role\nhybrid systems have played so far is to allow us to model how a (discrete) computer\ncontroller interacts with the (continuous) physical world and to analyze by means of\nformal proofs or reachability analyzes whether this interaction is safe or not. Without\nany doubt, such analyzes are of tremendous importance for our society, because they\ndetermine whether we can bet our lives on those systems.\nBut this article argues that hybrid systems also have computational consequences\nthat make them an interesting subject to study from a computability theory perspec-\ntive. Hybrid systems are described by hybrid programs or hybrid automata, both\nhybrid generalizations of corresponding discrete computational models. The phe-\nnomenon of discrete and continuous interplay, which hybrid systems provide, is fun-\ndamental and raises interesting computability questions. For example: what is com-\nputable using the analogue computation capabilities of continuous dynamical sys-\ntems? How do the discrete computation capabilities of discrete dynamical systems\nrelate to classical models of computation à laChurch–Turing? 
What happens in hybrid computation, where discrete and continuous computation interact? Are the two facets of computation, discrete and continuous, of fundamentally different character or are they two sides of the same computational coin? This article answers some of these questions using the rich theory that a logical characterization of hybrid systems in differential dynamic logic of hybrid programs provides. But the article is meant primarily as a manifesto for the significance and inherent beauty that these questions possess in the first place.\n
1 Introduction\n
Embedded computing may be the “third revolution in information technology after the birth of the computer itself and the introduction of the hyper-connected world of the Internet” [1]. This third revolution is connecting all computational power to the physical world and is raising the challenge of understanding how physics and computing interact. The interaction of physics and computation mixes analog and digital and is important not just in self-driving cars but also in aerospace applications, railway, robotics, and advanced medical devices. Hybrid systems [2–14] have been developed for the purpose of understanding such combinations of discrete and continuous dynamics. Hybrid systems play a major role in approaches for studying whether embedded computing systems and cyber-physical systems satisfy crucial safety properties [15–19]. Answering such correctness questions is, without any doubt, crucial to find out whether we can bet our lives on those systems, which is what we do every time we get on an airplane or recently-built car.\n
This article serves as a light-weight introduction into the mysteries of hybrid computation and hybrid systems from a dynamical systems and programming languages perspective. Its focus is on the impact that discrete and analog computation as well as discrete and continuous dynamical effects have on those systems. 
For example, while discrete and continuous systems first appear to be of fundamentally different character, which was the motivation for developing hybrid systems in the first place, they later turn out to be surprisingly intimately related [14]. The theory of hybrid systems builds a logical computational bridge between discrete and continuous systems (Fig. 1), bringing them into perfect proof-theoretical alignment [14]. This article is primarily meant as a manifesto for the significance and beauty of the intriguing questions related to a computational view on hybrid systems.\n
This article is based on previous work [12–14, 20] to which we refer for more details. The article serves as a gentle introduction with an explicit alignment with the theory of dynamical systems, based on prior work [20]. It also highlights the unnecessary complexities that the shortcomings of hybrid time domains cause and advocates for a simpler approach to hybrid systems that is based on programming languages and logic.\n
Figure 1: The proof theory of hybrid systems provides a complete proof-theoretical bridge aligning the theory of discrete systems and the theory of continuous systems\n
Structure of this Article. Section 2 starts with a light-weight introduction to general dynamical systems, discrete dynamical systems, continuous dynamical systems, and then illustrates important phenomena in hybrid systems. Section 3 discusses a programming language for hybrid systems, whose discrete and continuous fragments correspond to computational models for discrete dynamical systems and for continuous dynamical systems, respectively. Section 4 reviews a logical characterization of hybrid systems in differential dynamic logic [9]. Section 5 investigates the nature of hybridness by relating discrete and continuous dynamics by way of their common generalization as hybrid systems. 
Section 6 wraps up with concluding remarks and discusses interesting possibilities for future work.\n
2 Dynamical Systems\n
In this section, we survey the basic principles behind a number of important classes of dynamical systems. For a more comprehensive and more general overview and further extensions of dynamical systems, we refer to the prior work that this section is based on [20]. The theory of dynamical systems has been pioneered by Henri Poincaré [21].\n
2.1 General Dynamical Systems\n
A dynamical system [22, 23] is a mathematical model describing how a system changes its state over time. In a nutshell, a dynamical system¹ is a function φ : T × X → X of time T and state X whose value φ_t(x) ∈ X at time t ∈ T denotes the state that the system has at time t when it originally started in the initial state x ∈ X. The system starts at the initial state φ_0(x) = x at time 0 and the evolution can proceed in stages, i.e., φ_{t+s}(x) = φ_s(φ_t(x)) for all s, t ∈ T and all x ∈ X; see Fig. 2. That is, if the dynamical system starts at x and evolves for time t to reach φ_t(x) and, from that state, evolves again for time s to reach φ_s(φ_t(x)), then it reaches the same state φ_{t+s}(x) by simply evolving for time t+s starting from the initial state x right away.\n
¹ Formally, a dynamical system is an action of a monoid T on a state space X. But this more general concept is not needed in this article.\n
Figure 2: Dynamical systems can evolve in stages\n
Different choices of the time domain T and the state space X lead to different classes of dynamical systems. The time domain T is classically either discrete time (T = N or T = Z), which proceeds in separate discrete steps, or continuous time (T = R or T = [0, ∞)), which has a dense continuous notion of progress of time. 
The state space X is typically a vector space such as R^d where d ∈ N is the dimension of the system.\n
2.2 Discrete Dynamical Systems\n
Discrete dynamical systems [23] have an integer notion of time (e.g., T = N or T = Z) so that the state evolves in discrete time steps, one step at a time, as typically described by a difference equation or discrete state transition function. That is, one thing happens after the other in clearly discernible steps. Classical computer programs, for example, proceed in such discrete successive steps, with one computation step at a time.\n
Basic concept. A discrete dynamical system\n
φ_{n+1}(x) = f(φ_n(x))   (n ∈ N)   (1)\n
is fully described by its generator f : X → X or transition function, where x ∈ X is its initial state and φ_n(x) the state at time n ∈ N after having started from initial state x ∈ X. That is, the generator f specifies which next state f(x) the discrete dynamical system reaches after one step when it is currently in state x. The discrete dynamical system keeps on making steps according to the generator f. It will run as follows (each arrow ↦ applying the generator f):\n
x = φ_0(x) ↦ φ_1(x) ↦ φ_2(x) ↦ φ_3(x) ↦ …\n
In other words, when f^n denotes the n-fold composition of f (so f^{n+1}(x) = f(f^n(x)) and f^0(x) = x), then the discrete dynamical system φ will run as\n
x ↦ f(x) ↦ f²(x) ↦ f³(x) ↦ …\n
Example 2.1 (Mandelbrot set). One simple example of a discrete dynamical system comes from the context of Mandelbrot fractals, where a simple discrete operation is repeated over and over again and its long-term behavior defines whether a point lies in that set or not. The Mandelbrot set is the set of all complex numbers c ∈ C for which f^n(0) is bounded for all iterations n of the function f(z) = z² + c. Recall that i² = −1, so f(x + yi) = (x + yi)² + c = (x² − y²) + 2xyi + c for a complex number z = x + yi with real part x ∈ R and imaginary part y ∈ R. 
Hence, the generator corresponding to the complex number c = a + bi is the function\n
f(x + yi) = (x² − y²) + 2xyi + c = (x² − y² + a) + (2xy + b)i\n
When considering f as a real function of two real arguments x, y instead of one complex argument z, this yields:\n
f(x, y) = (x² − y² + a, 2xy + b)\n
The Mandelbrot set is the set of parameters (a, b) ∈ R² for which the dynamical system\n
(0, 0) ↦ f(0, 0) ↦ f²(0, 0) ↦ f³(0, 0) ↦ …\n
corresponding to the above generator f is bounded (it can be shown that the bound 2 is sufficient). The initial trajectory shown in Fig. 3(left) for the parameter a = −0.6, b = −0.2, for example, indicates that the state of the dynamical system stays bounded, which, indeed, it will remain forever in this case. The initial trajectory shown in Fig. 3(right) for the parameter a = 0.41, b = 0.3, however, will diverge, because it already leaves the Euclidean norm bound 2.\n
Note that the full behavior of a discrete dynamical system is determined entirely by its local generator f, which describes a step, plus the initial state, e.g., (0, 0) in the case of the Mandelbrot system. It is still very complex to find out the global behavior of the dynamical system in the long run, but locally in one step, it is precisely captured by f.\n
Figure 3: Trajectory of the Mandelbrot dynamical system for a = −0.6, b = −0.2 (left) and for a = 0.41, b = 0.3 (right) up to n = 13.\n
Difference equations. Another common way of describing the local generator of a discrete dynamical system is by a difference equation. 
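Before moving on, the Mandelbrot iteration of Example 2.1 can be sketched in code. This is my own illustrative sketch, not part of the article; the cutoff of 100 iterations is an arbitrary choice, since a finite run can only refute boundedness, never confirm it for all n:

```python
# Sketch (not from the article): iterating the Mandelbrot generator
# f(x, y) = (x^2 - y^2 + a, 2*x*y + b) from the initial state (0, 0)
# and reporting whether the trajectory stays within Euclidean norm 2.

def mandelbrot_bounded(a, b, steps=100):
    """Approximate membership test: iterate the generator `steps` times."""
    x, y = 0.0, 0.0
    for _ in range(steps):
        x, y = x * x - y * y + a, 2 * x * y + b
        if x * x + y * y > 4:      # left the norm bound 2, so it diverges
            return False
    return True                    # stayed bounded for all tested steps

print(mandelbrot_bounded(-0.6, -0.2))  # parameter from Fig. 3 (left)
print(mandelbrot_bounded(0.41, 0.3))   # parameter from Fig. 3 (right)
```

As the text notes for these two parameters, the first trajectory stays bounded while the second escapes the norm bound 2 after a few steps.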
When defining h(x) := f(x) − x, the discrete dynamical system (1) can be described equivalently by the difference equation\n
φ_{n+1}(x) − φ_n(x) = h(φ_n(x))   (n ∈ N)   (2)\n
whenever the state space X is a vector space so that subtraction of states is defined. Both formulations, (1) and (2), are equivalent. The latter emphasizes the local change h of the state from one step to another as a function of the current state while the former emphasizes the local state update f, instead. The vector from n to n+1 shown in Fig. 3 directly illustrates the respective value of h(φ_n(0)), for example. Since there is a direct bijection between discrete dynamical systems in explicit form (1) and difference equations (2), both are often referred to informally as difference equations even if this is technically not quite correct.\n
Computational models. Computation processes can be described by discrete dynamical systems, for example. A computer system would start in an initial state φ_0(x) = x at a time 0, perform a transition to a new state φ_1(x) = f(x) at a time 1, then another transition to a state φ_2(x) = f(f(x)) at time 2, etc. until the computation terminates at a state φ_n(x) at some time n. The scaling unit of these integer time steps is not relevant, but could be chosen, e.g., as the cycle time of a processor or discrete controller.\n
It is worth noting, however, that the dynamical systems induced by classical computer programs are both time- and space-discrete dynamical systems. That is, in addition to having a discrete time domain T = N, they also operate over a discrete state space X such as X = Z^d. In fact, when looking more closely, actual computers have finite memory so that X will even be a large but finite state space such as X = {0, 1}^d. 
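As a toy illustration (mine, not from the article) of such a time- and space-discrete dynamical system, here is a generator over the finite state space {0, 1}³, together with a check of the staging property φ_{t+s}(x) = φ_s(φ_t(x)) from Fig. 2:

```python
# Illustrative sketch (not from the article): a 3-bit counter as the
# generator f of a time- and space-discrete dynamical system over the
# finite state space {0,1}^3, represented here as the integers 0..7.

def f(state):
    """One machine step: increment a 3-bit counter with wraparound."""
    return (state + 1) % 8

def phi(n, x):
    """phi_n(x): the state reached after n steps from initial state x."""
    for _ in range(n):
        x = f(x)
    return x

print([phi(n, 0) for n in range(9)])       # the orbit of 0: 0, 1, ..., 7, 0
print(phi(5, phi(3, 0)) == phi(3 + 5, 0))  # staging property, cf. Fig. 2
```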
Program models and automata models have been used to describe discrete dynamical systems and have been used very successfully in verification [24–26].\n
In fact, the local generator f (respectively h when in difference equation form) needs to be sufficiently computational in order to have a chance of being used for any analytic purposes. Local generators often come from the transition function of a classical discrete computer program or the transition function of an automaton. But they can also be described using programs or machine models in more general models of computation such as the Blum-Shub-Smale model, often called “real Turing machines” even if it is a random access machine [27]. In that case, the state space is some finite-dimensional real vector space X = R^d, because real Turing machines compute with real-valued data, but the time domain is still discrete T = N. The computation of the generator for the Mandelbrot dynamical system can be described by such a real Turing machine [27]. It is, however, undecidable whether a point a + bi is in the Mandelbrot set, which corresponds to whether the Mandelbrot system for a, b always stays bounded, even in Blum-Shub-Smale’s strong model of real computation [27]. Like everywhere else in computer science, it is, thus, imperative to distinguish between sets and their computational representation.\n
Other successful models of real computation are type II computable functions from the framework of computable analysis [28, 29], which, in a nutshell, study functions that can be computed up to arbitrary precision. Unlike non-equality, equality of real numbers, for example, is not type II computable, because, when two real numbers are different, we will ultimately find out by comparing their digits. But if they are the same, we will have to keep on comparing their digits for we will never be sure whether the next digit exhibits a difference or not.\n
Nondeterministic discrete dynamical systems. 
Discrete dynamical systems are described by transition functions, which makes them deterministic, i.e., for any initial state x and any time n ∈ N the discrete dynamical system will be in exactly one state φ_n(x). This is at odds with understanding nondeterministic discrete systems, in which an initial state can have multiple successor states, because dynamical systems are supposed to be (deterministic) functions satisfying the staging property depicted in Fig. 2. For the staging property, φ_n(x) has to have a unique value determined only by n and x and the dynamical system at hand, otherwise φ_s(φ_t(x)) does not have to agree with φ_{t+s}(x) if φ_t(x) were allowed to take on different values nondeterministically.\n
With a slight change in perspective, however, dynamical systems are equally useful for understanding nondeterministic discrete systems by going set-valued. The behavior of systems with a discrete state transition relation R ⊆ 𝒳 × 𝒳 between previous states and successor states is nondeterministic, but can still be captured as a discrete dynamical system using the powerset 2^𝒳 as the state space instead of 𝒳:\n
φ_{n+1}(X) = f(φ_n(X)) = {y : x ∈ φ_n(X) and (x, y) ∈ R}   (n ∈ N)\n
when starting from a set X ⊆ 𝒳 of initial states. This principle is reminiscent of the powerset construction that converts nondeterministic finite automata into deterministic finite automata by considering a transition function on sets of states instead of a transition relation on individual states [30].\n
Limits of discrete dynamical systems. 
However useful discrete dynamical systems are, they cannot describe continuous processes, except as approximations at discrete points in time, e.g., with a uniform discretization grid 1/n at the discrete points in time 0/n, 1/n, 2/n, …, n/n. Discrete-time approximations give limited information about the behavior in between the i/n, which causes fundamental differences [31] but also surprising similarities [14].\n
2.3 Continuous Dynamical Systems\n
Continuous dynamical systems have a real continuous notion of time (e.g. T = R_{≥0} or T = R) so that the state evolves continuously along a function of real time, typically described by a differential equation. The state of the system φ_t(x) then is a function of continuous time t. In particular, unlike discrete dynamical systems, continuous dynamical systems have no notion of “next state” or “next time”, because the time domain is (topologically) dense with a dense ordering relation <.\n
Basic concept. The continuous dynamical system\n
dφ_t(x)/dt = f(φ_t(x))   (t ∈ R)\n
φ_0(x) = x\n
is fully described by its generator f : X → X, where x ∈ X is the initial state at time 0. Depending on the duration of the solution of the above differential equation dφ_t(x)/dt = f(φ_t(x)), the continuous system may only be defined on some open subinterval of R rather than globally on R. The time-derivative d/dt is only well-defined under additional assumptions, e.g., that X is a differentiable manifold [22, 32] or simply some d-dimensional Euclidean space R^d, which is what this article assumes. Many physical processes are continuous dynamical systems described by differential equations.\n
Example 2.2 (Motion with constant velocity along a straight line). The movement of the longitudinal position of a car of velocity v down a straight road from initial position p₀ can be described by the differential equation p′(t) = v with initial value p(0) = p₀. 
The state of the dynamical system at time t then is the solution φ_t(p₀) = p₀ + tv, which is defined at all times t ∈ R.\n
Example 2.3 (Accelerated motion along a straight line). The evolution of the state of a car accelerating with acceleration a on a straight line from initial position p₀ and initial velocity v₀ can be described by the differential equation system p′(t) = v(t), v′(t) = a with initial value p(0) = p₀, v(0) = v₀. The state of that dynamical system at time t is then the vectorial solution\n
φ_t((p₀, v₀)) = (p₀ + t v₀ + (a/2) t², v₀ + a t)   (3)\n
The notation p′ for dp(t)/dt is a common simplification, as is the implicit use of v instead of v(t). Thus, the differential equation system for the accelerated car would often be written:\n
p′ = v\nv′ = a   (4)\n
Example 2.4 (Time square oscillator). A simple example of a continuous dynamical system is described by the following differential equation\n
x′ = t² y\ny′ = −t² x\nt′ = 1   (5)\n
The initial trajectory shown in Fig. 4(left) for the initial value x = 0, y = 1, t = 0 illustrates that the dynamical system stays bounded but oscillates increasingly fast. In this case, the solution is\n
x(τ) = sin(τ³/3)\ny(τ) = cos(τ³/3)\nt(τ) = τ   (6)\n
Example 2.5 (Damped oscillator). Another example of a continuous dynamical system is described by the following differential equation\n
x′ = y\ny′ = −4x − 0.8y   (7)\n
Figure 4: Trajectory of the time square oscillator for initial state x = 0, y = 1, t = 0 (left) and of the damped oscillator for initial state x = 1, y = 0 (right) up to time 6.5\n
The initial trajectory shown in Fig. 4(right) for the initial value x = 1, y = 0 illustrates that the dynamical system decays over time.
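The decay of the damped oscillator (7) can also be checked numerically with a simple forward-Euler discretization. This is an illustrative sketch of mine, not from the article; the step size 0.001 is an arbitrary choice, and Euler's method is only one crude way to approximate the flow:

```python
# Sketch: forward-Euler approximation of the damped oscillator (7),
# x' = y, y' = -4x - 0.8y, started at x = 1, y = 0 as in Fig. 4 (right).
# Each Euler step is the discrete map state <- state + dt * f(state).

def euler(f, state, dt, steps):
    for _ in range(steps):
        state = [s + dt * ds for s, ds in zip(state, f(state))]
    return state

def damped(s):
    x, y = s
    return (y, -4 * x - 0.8 * y)

x, y = euler(damped, [1.0, 0.0], dt=0.001, steps=6500)  # up to time 6.5
print(x, y)  # both small: the oscillation has decayed, as in Fig. 4
```

Note that this replaces the continuous system by a discrete dynamical system whose generator is one Euler step, which is exactly the kind of discretization the previous subsection warned gives only limited information between grid points.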
In this case, the explicit global solution representing the dynamical system is more difficult.\n
More details and many more examples of continuous dynamical systems can be found in the literature [22, 32].\n
Computational models. Continuous processes can be described by the differential equations generating continuous dynamical systems. Just like discrete dynamical systems, which need to have suitable computational descriptions (e.g. by programs) in order to have a chance of being used for analytic purposes, continuous dynamical systems also need sufficiently computational descriptions.\n
One way of describing a continuous dynamical system in a computational model is to give a computational description of the system φ_t(x) as a function of initial state x and time t. The motion with constant velocity from Example 2.2, for instance, can be described by a linear solution φ_t(p₀) = p₀ + tv. The accelerated motion from Example 2.3 can be described by the polynomial solution (3). Both symbolic expressions (linear and polynomial terms) are easily represented as arithmetic terms on a computer and their values can be computed easily, e.g., for every rational² p₀, t ∈ Q.\n
That principle does not extend to Example 2.4, because its solution (6) is not polynomial. Even at rational t ∈ Q, the value of the solution can only be approximated, because of the infinite power series sin x = Σ_{n=0}^∞ ((−1)^n / (2n+1)!) x^{2n+1} and likewise for cos. In computational models for the reals that tolerate approximate answers, sin and cos are still computable [29], just only approximately so in the sense of type II computable analysis. For “most” dynamical systems, the situation is even more dire, because there is not even a closed-form symbolic solution of their differential equation at all. At least in those cases, the differential equation itself is a better computational representation of the continuous dynamical system. In fact, we argue that the differential equation is always a better computational representation, because the beautiful local perspective of differential equations is ruined when working with its complicated global solutions.\n
² Computations on bigger fields are possible, for example, for real algebraic p₀, t ∈ Q̄ using real algebraic number computations. Approximate computations are still possible for computable real numbers in the extended sense of increasingly fine approximations of type II computability in computable analysis [28, 29]. Polynomial computations for reals p₀, t ∈ R are allowed in Blum-Shub-Smale’s computational model [27].\n
Under certain assumptions, there are ways of computing approximate solutions of initial value problems of differential equations by numerical integration [14, 33, 34]. This depends crucially on additional assumptions on the system [31], such as known Lipschitz bounds or indirectly via known moduli of continuity in the case of type II computable functions [28, 29]. Otherwise, all relevant problems are undecidable even in strong models of computation even when tolerating arbitrarily large error bounds in the decision [31].\n
Type II computable functions in the sense of computable analysis [28, 29] have been identified [35] with a generalized understanding of Shannon’s General Purpose Analog Computer (GPAC) [36] and with initial value problems of polynomial differential equations [35]. GPACs were originally meant as the mathematical model for the differential analyzer computer [37]. 
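The “arbitrary precision” idea behind type II computability can be made concrete for the power series of sin above: truncating the alternating series leaves a remainder bounded by the first omitted term, which yields a computable error bound. The following is a sketch of mine, not the GPAC model itself, restricted to rational |x| ≤ 1 so that the terms decrease:

```python
from fractions import Fraction

# Sketch: approximate sin(x) at a rational point to any requested precision
# eps, using sin x = sum_{n>=0} (-1)^n x^(2n+1)/(2n+1)!. For |x| <= 1 the
# terms decrease in magnitude, so the first omitted term bounds the error.

def sin_approx(x, eps):
    x, eps = Fraction(x), Fraction(eps)
    total, term, n = Fraction(0), x, 0
    while abs(term) >= eps:
        total += term
        n += 1
        term = -term * x * x / ((2 * n) * (2 * n + 1))
    return total  # exact rational with |total - sin(x)| < eps

q = sin_approx(1, Fraction(1, 10**6))
print(float(q))  # agrees with sin(1) = 0.8414709848... to within 1e-6
```

Equality of the limit, by contrast, could never be decided this way, which is exactly the non-computability of real equality noted earlier.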
See Graça and Costa [38] for relations of GPACs to Moore’s real recursive functions [39]. See Bournez et al. [35] for relations identifying GPACs, polynomial differential equations, and computable analysis when, instead, considering a notion of computability for the GPACs that is based on convergence to the output in the limit with computable error bounds, as considered in modern computability over the reals. Adding infinite convergent computations to the Blum-Shub-Smale model [27] has been considered in analytic machines [40]. Generalizations of finite automata from discrete time to continuous time have been considered as well [41] based on work by Trakhtenbrot [2001].\n
Limits of continuous dynamical systems. Continuous dynamical systems are continuous, so they have a hard time representing sudden discrete transitions. Discrete transitions lead to discontinuities, which lead to interesting but very complicated generalized notions of weak solutions, including Carathéodory solutions [34], Filippov solutions, Krasovskij solutions, and Hermes solutions; see Hájek for an overview [43].\n
Nondeterministic continuous dynamical systems. Nondeterminism is not a phenomenon that can only happen in discrete dynamical systems, but also in continuous dynamical systems; see [44] for an interesting perspective relating nondeterminism in continuous systems to the physical Church-Turing thesis. The most frequent source of nondeterminism when working with continuous dynamical systems comes from nondeterminism in the initial state while the rest of the continuous dynamical system stays deterministic. 
How long a continuous system is being followed is another important source of nondeterminism in a context where differential equations are embedded within hybrid systems.\n
Another source of nondeterminism directly in the continuous dynamical system itself comes from differential inequalities [34] or more general differential-algebraic constraints that also support nondeterministic disturbances [10]. In both cases, p′ ≤ v would, for example, be a differential inequality describing that position p evolves with at most velocity v, possibly less. Likewise, the differential inequality 1 ≤ p′ ≤ v describes a continuous dynamical system whose position changes with at most velocity v but at least velocity 1. It can have different velocities at different times, but is still restricted to be continuous, often even continuously differentiable (unlike in Carathéodory solutions [34] and Filippov solutions [45]). As in discrete dynamical systems, the fact that there is no unique velocity still results in a set-valued dynamical system φ to represent the nondeterminism as a function.\n
2.4 Hybrid Systems\n
Both discrete and continuous dynamical systems are useful and have their respective advantages depending on the situation that they model. Of course, there is no reason to believe that a given scenario only involves features that discrete dynamical systems are good at, or only features where continuous dynamical systems shine. More often than not, both features interact, and neither discrete nor continuous systems alone are a good fit for an application. In that case, hybrid dynamical systems are helpful, because they allow both discrete and continuous dynamics at once. Control decisions in systems are often of a more discrete nature, because they can be triggered suddenly, possibly by computerized controllers in response to certain events in the environment, while physical motion is a continuous phenomenon. 
But there are many other sources of hybridness as well, including fast physical processes that can suitably be abstracted by discrete dynamical systems.\n
Hybrid dynamical systems alias hybrid systems [2–14] are dynamical systems that combine discrete dynamical systems and continuous dynamical systems. Discrete and continuous dynamical systems are not just combined side by side to form hybrid systems, but they can interact in interesting ways. Part of the system can be described by discrete dynamics (e.g., decisions of a discrete-time controller), other parts are described by continuous dynamics (e.g., continuous movement of a physical process), and both kinds of dynamics interact freely in a hybrid system (e.g., when the discrete controller changes control variables of the continuous side by appropriate actuators such as when changing the acceleration input for the continuous dynamics, or when the continuous dynamics determines the values of sensor readings such as position or velocity for the discrete decisions). Embedded systems and cyber-physical systems are often modeled as hybrid systems, because they involve both discrete control and physical effects.\n
Figure 5: Example trajectory of a car control system where the follower collides with the leader car (panels over time: braking/acceleration, velocity, and position of leader and follower)\n
A typical example of a hybrid system is a car that drives on a road according to a differential equation for the physical motion. This car is subject to discrete control decisions, where discrete controllers change the acceleration and braking of the wheels, e.g., when the adaptive cruise control or the electronic stability program takes effect. 
Figure 5 shows an example [46] of how the acceleration of a car changes instantaneously by discrete control decisions (top), and how the velocity and position evolve continuously over time (middle and bottom) in response to the control input of acceleration. The situation in Fig. 5 illustrates bad control choices, where the follower car brakes too late (at time t₂) and then crashes into the leader car at time t₃. In particular, the follower car made a bad decision to keep on accelerating at some point before time t₂, when it should have activated the brakes instead, because, at time t₂, no control choice (within the physical acceleration limits −b to A of the car) could still prevent the crash. This is one illustration of the phenomenon that bad control choices in the past cause unsafety in the future and that we need to verify our control choices now by considering their possible dynamical effects in the future.\n
When using hybrid systems instead of discrete dynamical systems, neither is there a need to use unnatural discretizations for continuous phenomena, because full continuous dynamics is allowed in hybrid systems. Nor is there a need to represent the system dynamics with the interesting but complicated discontinuous Carathéodory [34], Filippov, Krasovskij, or Hermes solutions [43] to understand jumps in continuous processes coming from sudden changes such as by decisions to activate the brakes. Discrete jumps are allowed directly as separate elements in hybrid systems. So, separately, both effects are easy to understand. The position changes continuously with the velocity, which changes continuously with the acceleration. And the acceleration is being decided by a computer controller. Each partial behavior alone is easy to understand and they just interact with one another to form a hybrid system. The overall system behavior can still be as complex as the original application demands. 
But the individual parts of the hybrid system have a simpler behavior that can be understood and analyzed by easier means.\n
Multi-dynamical systems. This phenomenon illustrates the keystone observation behind our philosophy of multi-dynamical systems [13, 20], i.e., the principle to understand complex systems as a combination of multiple elementary dynamical systems. The whole point of multi-dynamical systems is that the pieces are easier than the full system. That explains why multi-dynamical systems help tame the complexity of cyber-physical systems, because they understand systems in terms of their elementary parts, which are, by definition, easier than the full system. This compositional understanding of multi-dynamical systems carries over to their compositional analysis techniques [13, 20]. These techniques are based on proof steps that successively reduce a system to its parts and conclude correctness of the full system from correctness of its parts by compositional proof rules.\n
Basic concept. When formulating hybrid dynamical systems as a general dynamical system, we run into an immediate difficulty. What is the time domain T supposed to be for a hybrid system? It cannot be discrete N, because hybrid systems can evolve continuously while their differential equations take effect. It cannot be continuous R, either, though, because that does not fit to the discrete model of computation, one step at a time, that its discrete parts perform. In particular, a hybrid system might very well make a couple of discrete computation steps before proceeding with its continuous evolution again. Hence, the time domain is some combination of discrete time N and continuous time R. 
There are different possibilities for the time domain, but they follow the same essential idea [12, 47]. Hybrid time domains [47] are some subset T ⊆ R × N, where the real component t ∈ R of a hybrid time point (t, j) ∈ T measures the progress in real time and the natural number component j ∈ N measures the progress in time steps. Hybrid time domains are such that for each j ∈ N the set of all t ∈ R for which (t, j) ∈ T is some interval in the reals. While there are a number of minor variations, such as whether the real intervals start at 0 or are consecutive intervals, the only important feature of hybrid time domains is that a hybrid time domain identifies a sequence of intervals. The complication is that the time domain T depends on the particular execution of the hybrid system and that executions of hybrid systems are highly nondeterministic. Since useful intuitions of more general interest arise from the study of the impact of time in hybrid systems, we illustrate the basic concept of a hybrid system by an instructive example.

Example 2.6 (Bouncing ball). Let us consider a bouncing ball; see Fig. 6. The bouncing ball is flying through the air toward the ground, bounces back up when it hits the ground, and will again fly up. Then, as gravity wins over, it will fly down again for a second bounce, and so forth, leading to a lot of interesting physics, including questions of how the kinetic energy transforms into potential energy as the ball deforms by an elastic collision on the ground and then reverses the deformation to gain kinetic energy [48].

Figure 6: Sample trajectory of a bouncing ball (plotted as position over real time)

Alternatively, we can put our multi-dynamical systems glasses on and realize that the bouncing ball dynamics consists of two phases that, individually, are easy to describe and interact to form a hybrid system.
There is the flying part, where the ball does not do anything but move according to gravity.³ And then there is the bouncing part, where the ball bounces back from the ground. While there is more physics involved in the bouncing, a simple description is that the bounce on the ground will make the ball invert its velocity vector (from down to up) and slow down a little (since the friction loses energy). Both aspects separately, the flying and the bouncing, are easy to understand. They interact as a hybrid system, where the ball flies continuously through the air until it hits the ground, where it bounces back up by a discrete jump of its velocity from negative to positive.

The continuous flying part of a bouncing ball is easy to describe by a differential equation, since the ball at height h with vertical velocity v is falling subject to gravity g > 0:

    h′ = v, v′ = −g    (8)

The discrete bouncing part instantaneously negates the velocity of the ball with a certain damping coefficient 0 ≤ c < 1:

    v := −cv

This discrete change that updates the value of v to that of −cv only happens when the ball just fell on the ground, which we posit is at height 0:

    if (h = 0) v := −cv    (9)

We postpone the question how to best represent how exactly the continuous flying dynamics (8) and the discrete bouncing dynamics (9) interact to form a hybrid system until we discuss the modeling language for hybrid systems in Section 3.

³ Taking the usual models of air resistance into account is not difficult either, but we refrain from doing so here for simplicity.

Figure 7: Hybrid time domain for the sample trajectory of a bouncing ball with discrete time step j and continuous time t

What we already observe about the bouncing ball is that its trajectory follows an alternating succession of a continuous trajectory following (8) for a certain nonzero duration and an instantaneous discrete jump following (9) at a
discrete instant of time. This succession of continuous and discrete transitions in Fig. 6 gives rise to the hybrid time domain T shown in Fig. 7. Here, the intervals are either compact intervals [ti, ti+1] of positive duration ti+1 − ti > 0 during which the ball is flying through the air continuously according to (8), or they are point intervals [ti, ti] and a discrete transition happens at that single point in time that changes the sign and magnitude of the ball's velocity by a bounce described in (9). For example, [t1, t2] is the time interval during which the ball is flying after its first bounce. And the point interval [t2, t2] represents the point in time during which the discrete transition of bouncing happened. Fig. 8 shows the particular sample trajectory of the bouncing ball from Fig. 6 plotted on its corresponding hybrid time domain T from Fig. 7. That illustration separates out the various discrete and continuous pieces of the trajectory of the bouncing ball into separate fragments of the two-dimensional hybrid time.

Figure 8: Sample trajectory of a bouncing ball plotted as position h over its hybrid time domain with discrete time step j and continuous time t

This particular illustration nicely highlights the hybrid nature of the bouncing ball dynamics. The downside, however, is that the hybrid time domain T shown in Fig. 7 is specific to the particular bouncing ball trajectory from Fig. 6 and Fig. 8 and does not fit any other bouncing ball trajectories. This is in sharp contrast to the principles of general dynamical systems φ: T × X → X, in which the time domain T and state space X for φ are supposed to be a single set for all trajectories, and not depend on the particular sample trajectory considered so far. This is one of the reasons why we do not adopt the approach of working with hybrid times [47], but instead leave time implicit in our models.
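The alternation of flying phases and bounce events can also be computed. The following Python sketch (parameter values are my own illustration) constructs the hybrid time domain of a ball dropped from rest: each flying phase is solved in closed form from the ballistic dynamics (8), and each bounce is the discrete reset v := −cv from (9), contributing a point interval:

```python
import math

def bouncing_ball(h0, g=9.81, c=0.8, bounces=4):
    """Return the hybrid time domain as a list of (t_start, t_end, kind),
    where kind is 'fly' (positive-duration interval, dynamics (8)) or
    'bounce' (point interval [t, t], discrete reset (9))."""
    domain = []
    v_impact = math.sqrt(2 * g * h0)   # speed when first reaching h = 0
    t = v_impact / g                    # duration of the initial fall
    domain.append((0.0, t, 'fly'))
    for _ in range(bounces):
        domain.append((t, t, 'bounce'))  # discrete jump: v := -c*v
        v_impact *= c                     # damping at the bounce
        dt = 2 * v_impact / g             # closed-form up-and-down flight time
        domain.append((t, t + dt, 'fly'))
        t += dt
    return domain

domain = bouncing_ball(h0=5.0)
```

The resulting list of intervals is exactly a hybrid time domain in the sense above: flying phases have positive duration, bounces are point intervals, and the flight durations shrink geometrically with the damping coefficient c.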
If time is ever needed in a system, it can simply be added as a dedicated clock variable c with differential equation c′ = 1 to the model.

Hybrid systems are just highly nondeterministic, even in their notion of time, which is a scenario to which programming language and formal language models are better adapted than the general dynamical systems model. Even the interaction of discrete and continuous dynamics is often characterized by nondeterminism, since there is not always just one point in time where control can pass from discrete to continuous or back. Nondeterminism, of course, breaks the staging property illustrated in Fig. 2 and requires a set-valued treatment to recover, if only a fixed time domain T could be found. Having said that, it is perfectly possible to fit hybrid dynamical systems into the model of general dynamical systems. All it takes is a more sophisticated notion of time that remembers all previous actions (similar to the actions in the operational semantics of hybrid games [49]) and allows permanent forking of the subsequent execution to different futures. But these technical complications are unnecessary when working in a clean programming language (Section 3).

Before we give up on hybrid time domains, however, we illustrate two more phenomena that are worth noticing: subdivision and super-dense computations. While Fig. 7 shows one hybrid time domain for the sample trajectory in Fig. 6, there are infinitely many other hybrid time domains that fit the original sample trajectory shown in Fig. 6 and just subdivide one of the intervals of a flying phase into two subintervals during which the ball just keeps on flying according to (8) the way it did before. The first flying phase, for example, could just as well be subdivided into the continuous phase where the ball is flying up according to (8), followed by a continuous phase where the ball is flying down, still according to (8).
That would yield a different hybrid time domain with multiple intervals of positive duration in immediate succession, but still essentially the same behavior of the hybrid system in the end. So subdivision of time domains does not yield characteristically different behavior. Likewise, there can be hybrid systems that have multiple discrete steps (corresponding to point intervals in the hybrid time domain) in immediate succession before a continuous transition happens again. For example, a car could, successively, switch gears and disable the adaptive cruise control system and engage a warning light to alert the driver before it cedes control again to the continuous driving behavior. Hence, while strict alternation of discrete and continuous transitions may be the canonical example to have in mind, it is most definitely not the only relevant scenario.

Computational models. Like in the case of all other dynamical systems, hybrid systems need to be represented in suitable computational models to have a chance to be amenable to any form of computational analysis. There is a range of models for hybrid systems [50], including hybrid automata [51, 52] and its variations [53], process-algebraic models [54, 55], Petri nets [56], and programs [9–12]. All hybrid system models provide some form of discrete transitions and (various classes of) differential equations, but differ in terms of how those pieces are put together to form the hybrid systems. The representational differences may have important impact on the ease of analysis but are not fundamental, because translations between the models are possible at least in some cases [12, 55, 56].

Numerical approximation problem. What is important to realize for hybrid systems is the permanent presence of the numerical approximation problem, which, in terms of its ubiquity, is a numerical analogue of the halting problem. Verification of hybrid systems is a very challenging problem.
The verification problem is the problem to decide whether a given hybrid system satisfies a given correctness property. Unfortunately, this problem is undecidable even for very simple hybrid systems [5, 57]. Even for absurdly limited models of hybrid systems, the verification problem is neither semidecidable nor co-semidecidable numerically, even for a bounded number of transitions and when tolerating arbitrarily large error bounds in the decision [31]. Minimal black box models of hybrid systems that only support numerical evaluation of the system and its derivatives at points are insufficient, because they lead to numerical undecidability even when tolerating arbitrarily large error bounds. That is why some form of additional input or symbolic representations are required in order to guarantee that analysis results can be correct.

Figure 9: Safe and unsafe behavior indistinguishable by φ^(j)(x_i) (for j ≤ 2)

The basic intuition behind the numerical undecidability result is shown in Fig. 9. Suppose an algorithm could decide safety of a system numerically by evaluating the value of the system flow φ at points. If the algorithm is a decision algorithm, it would have to terminate in finite time, hence, after evaluating a finite number of points, say x1, x2, x3 in Fig. 9. But from the information that the algorithm has gathered at a finite number of points, it cannot distinguish the good behavior φ (solid flow safely outside B) from the bad behavior g (dashed flow reaching bad region B). The same undecidability result still holds even when restricting the flow φ to very special classes of functions, and when assuming that its derivatives φ^(j)(x_i) could be evaluated, and even when tolerating arbitrarily large error bounds in the decision. There is a series of extra assumptions and bounds that make the problem (approximately) decidable again by imposing extra constraints on the system.
Yet, by the general undecidability result, these extra bounds (and several other bounds that have been proposed in related work) cannot be computed numerically. Because of this strong numerical undecidability result, it is surprisingly difficult, but not impossible, to get hybrid systems verification techniques sound using symbolic representations and/or assuming knowledge of the behavior of the system on intervals [12, 16, 58].

Limits of hybrid dynamical systems. Not all systems are hybrid systems. Some have more general effects that pure hybrid systems cannot represent properly. Yet, there are many interesting extensions of hybrid systems.

Distributed hybrid systems [59–66] are dynamical systems that combine distributed systems [26, 67, 68] with hybrid systems, and can, thus, model systems-of-systems aspects in hybrid systems (with their discrete and continuous dynamics). Distributed systems are systems consisting of multiple computers that interact through a communication network. They feature both (discrete) local computation and remote communication. Distributed hybrid systems, instead, consist of multiple hybrid systems that interact through a communication network, but may also interact through physical interactions. Distributed hybrid systems include multi-agent hybrid systems and hybrid systems where the number of agents involved in the system evolves over time. Typical examples of distributed hybrid systems are fleets of unmanned aerial vehicles or a platoon of cars on a highway.

Stochastic hybrid systems [62, 69–76] are dynamical systems that combine the dynamics of stochastic processes [77–79] with hybrid systems. They either feature stochastic effects only during the discrete dynamics [69] or during the continuous dynamics [70] or both [72, 73, 76].
Stochastic hybrid systems play a role when systems have a large degree of random noise and good probabilistic models are available for their distributions.

Hybrid games [49, 80–87] extend hybrid systems with adversarial effects coming from multiple players with different goals in the hybrid system. Hybrid games are relevant when it is important to understand how different agents with different goals might interact.

3 Models of Computation: Hybrid Programs

Hybrid programs (HP) [9, 12, 14, 88] are a programming language for hybrid systems. HPs combine differential equations with conventional program constructs and discrete assignments. In order to highlight the design features of HPs, we first take a detour with a hybrid version of the programming language C.

Hybrid C. One way to think of HPs is to understand them as regular imperative programs that can additionally use differential equations as program statements. That intuition goes a long way, except that it misses out on the other important feature of hybrid systems: their ubiquitous nondeterminism. We will, nevertheless, start this exposition first with this more narrow perspective of adding differential equations into conventional discrete programs and see where that gets us. To make things concrete, we consider a programming language with a notation akin to C, although any other imperative programming language would work as well.
Let us call this programming language Hybrid C, since it is essentially C with differential equations. A first attempt at representing the bouncing ball of Example 2.6 in Hybrid C could be:

    while (*) {
        if (h == 0) {
            v := -c*v;
        }
        h' = v, v' = -g;
    }

This Hybrid C program consists of a loop that will repeatedly check with an if statement whether the height h is zero and then reverse the velocity v by a discrete assignment. For emphasis we use the notation := for assignments to make sure they are not confused with differential equations. The most obvious problem with this Hybrid C program is that it is not clear when the while loop should stop, because it is unclear how long the ball will be bouncing. And even if a system component stops moving, we might still want to consider that a valid behavior for some while, e.g., until all other system components stopped as well. The right way of understanding hybrid systems is usually that they repeat nondeterministically any number of times, which we indicate by while (*) in Hybrid C.

Now the next problem with the above Hybrid C program is that it is unclear how long the system will follow the differential equation statement h' = v, v' = -g. Indeed, how long exactly a system follows a continuous dynamics before a discrete step happens again is usually highly nondeterministic. Even for time-triggered architecture implementations that are trying to operate at certain fixed frequencies, such as 10Hz, practice still holds phenomena like jitter in store, which cause variations in the time of operation. Indeed, for the bouncing ball, 10Hz or any other fixed sampling period would be unsuitable, because the system execution will never hit the interesting condition if (h == 0) that way.⁴ Consequently, the natural mode for a differential equation is that it evolves for a nondeterministic amount of time, just like while (*).

⁴ This problem is intimately related to the zero-crossing problem in numerical algorithms. Indeed, floating-point algorithms approximating the executions of the Hybrid C program, e.g., by an Euler integration for the differential equation, will almost never satisfy the test if (h == 0). Real executions of the bouncing ball, though, have no trouble finding when the height is zero and reacting appropriately. This problem is looming in some form or another in almost all simulation tools.

Yet, hold on, the above Hybrid C program would also get in trouble if the differential equation evolved for too long. In that case, the ball would fall through the ground to a negative height (h < 0) and will then keep on falling forever, because the condition if (h == 0) will never be able to fire and rescue the ball by changing the sign of its velocity again. That would be a sad loss of a perfectly reasonable bouncing ball. Consequently, differential equations need to be constrained to remain within certain regions called evolution domains. The relevant evolution domain for the bouncing ball is h ≥ 0, because physics constrains the ball to remain above the ground. The notation we will adopt to indicate that a continuous system follows a differential equation such as h′ = v, v′ = −g only within such an evolution domain uses a conjunction (&) as follows:

    h′ = v, v′ = −g & h ≥ 0

Basic concept. Hybrid systems frequently exhibit nondeterminism in its various forms, including in the discrete control structure and continuous dynamics. Nondeterminism should, thus, be a first-class citizen in hybrid systems programming languages. That is why the programming language of hybrid programs [9, 12, 14, 88] embraces nondeterminism. In fact, hybrid programs make nondeterminism the norm and allow deterministic constructs as abbreviations for certain patterns of nondeterministic program operators. All classical programming constructs are definable in terms of the operators that hybrid programs provide.

HPs form a Kleene algebra with tests [89], that is, they are formed like regular expressions [90], just with more difficult atomic programs instead of letters of a finite alphabet. Atomic HPs are instantaneous discrete jump assignments x := θ, tests ?H of a first-order formula⁵ H of real arithmetic, and differential equation (systems) x′ = θ & H for a continuous evolution restricted to the domain of evolution H, where x′ denotes the time-derivative of x. Compound HPs are generated from atomic HPs by nondeterministic choice (∪), sequential composition (;), and Kleene's nondeterministic repetition (*). As terms, we use polynomials with rational coefficients here, but divisions can be allowed as well when guarding against singularities of divisions by zero; see [9, 12] for details.

⁵ The test ?H means "if H then skip else abort".

Definition 3.1 (Hybrid program). HPs are defined by the following grammar (α, β are HPs, x a variable, θ a term possibly containing x, and H a formula of first-order logic of real arithmetic):

    α, β ::= x := θ | ?H | x′ = θ & H | α ∪ β | α; β | α*

The first three cases are called atomic HPs, the last three compound. The test action ?H is used to define conditions. Its effect is that of a no-op if the formula H is true in the current state; otherwise, like abort, it allows no transitions, so that the system cannot execute. That is, if the test succeeds because formula H holds in the current state, then the state does not change, but the system execution continues normally.
If the test fails because formula H does not hold in the current state, then the system cannot execute, and such runs with failed tests are discarded and not considered any further.

Nondeterministic choice α ∪ β, sequential composition α; β, and nondeterministic repetition α* of programs are as in regular expressions but generalized to a semantics in hybrid systems. Nondeterministic choice α ∪ β expresses behavioral alternatives between the runs of α and β. That is, the HP α ∪ β can choose nondeterministically to follow the runs of HP α, or, instead, to follow the runs of HP β. The sequential composition α; β models that the HP β starts running after HP α has finished (β never starts if α does not terminate). In α; β, the runs of α take effect first, until α terminates (if it does), and then β continues. Observe that, like repetitions, continuous evolutions within α can take more or less time, which causes uncountable nondeterminism. This nondeterminism occurs in hybrid systems, because they can operate in so many different ways, which is as such reflected in HPs. Nondeterministic repetition α* is used to express that the HP α repeats any number of times, including zero times. When following α*, the runs of HP α can be repeated over and over again, any nondeterministic number of times (≥ 0).

Example 3.2 (Single car). As an example, consider a simple car control scenario. We denote the position of a car by x, its velocity by v, and its acceleration by a. From Newton's laws of mechanics, we obtain a simple kinematic model for the longitudinal motion of the car on a straight road, which can be described by the differential equation x′ = v, v′ = a. That is, the time-derivative of position is velocity (x′ = v) and, simultaneously, the derivative of velocity is acceleration (v′ = a).
We restrict the car to never drive backwards by specifying the evolution domain constraint v ≥ 0 and obtain the continuous dynamical system x′ = v, v′ = a & v ≥ 0. In addition, suppose the car controller can decide to accelerate (represented by a := A) or brake (a := −b), where A ≥ 0 is a symbolic parameter for the maximum acceleration and b > 0 a symbolic parameter describing the brakes. The HP a := −b ∪ a := A describes a controller that can choose nondeterministically to brake or accelerate. Accelerating will only sometimes be a safe control decision, so the discrete controller in the following HP requires a test ?H to be passed in the acceleration choice:

    car_s ≡ ((a := −b ∪ (?H; a := A)); x′ = v, v′ = a & v ≥ 0)*    (10)

This HP, which we abbreviate by car_s, first allows a nondeterministic choice of braking or acceleration (if the test H succeeds), and then follows the differential equation for an arbitrary period of time (that does not cause v to enter v < 0). The HP repeats nondeterministically, as indicated by the * repetition operator. Note that the nondeterministic choice (∪) in (10) can nondeterministically select to proceed with a := −b or with ?H; a := A. Yet the second choice can only continue if, indeed, formula H is true about the current state (then both choices are possible). Otherwise only the braking choice will run successfully, because the other choice will fail test ?H, so that that run will be discarded. With this principle, HPs elegantly separate the fundamental principles of (nondeterministic) choice from conditional execution (tests).

Which formula is suitable for H depends on the control objective or property we care about. A simple guess for H like v < 8 has the effect that the controller can only choose to accelerate at lower speeds. This condition alone is insufficient for most control purposes and will leave the car possibly unsafe.

Semantics. HPs have a compositional semantics.
We define their semantics by a reachability relation and refer to previous work for their trace semantics [12, 91]. The transition semantics of HP α is a relation ρ(α) defining which final states are reachable from which initial states by running α to completion. That is, (ν, ω) ∈ ρ(α) specifies that final state ω is reachable from the initial state ν by executing HP α. A state ν is a mapping from variables to R. The set of states is denoted S. We denote the value of term θ in ν by [[θ]]ν. The state ν_x^d agrees with ν except for the interpretation of variable x, which is changed to d ∈ R. We write ν ⊨ χ iff the first-order formula χ is true in state ν (as defined formally in Section 4).

Definition 3.3 (Transition semantics of HPs). Each HP α is interpreted semantically as a binary reachability relation ρ(α) ⊆ S × S over states, defined inductively by

• ρ(x := θ) = {(ν, ω) : ω = ν except that [[x]]ω = [[θ]]ν}
  That is, final state ω differs from initial state ν only in its interpretation of the variable x, which ω changes to the value that the right-hand side θ has in the initial state ν.

• ρ(?H) = {(ν, ν) : ν ⊨ H}
  That is, the final state ν is the same as the initial state ν (no change), but there only is such a self-loop transition if the test formula H holds in ν; otherwise no transition is possible at all and the system is stuck because of a failed test.

• ρ(x′ = θ & H) = {(φ(0), φ(r)) : φ(t) ⊨ x′ = θ and φ(t) ⊨ H for all 0 ≤ t ≤ r for a solution φ: [0, r] → S of any duration r}
  That is, the final state φ(r) is connected to the initial state φ(0) by a continuous function of some duration r ≥ 0 that solves the differential equation and satisfies H at all times, when interpreting φ(t)(x′) := dφ(ζ)(x)/dζ (t) as the derivative of the value of x over time [9].

• ρ(α ∪ β) = ρ(α) ∪ ρ(β)
  That is, α ∪ β can do any of the transitions that α can do as well as any of the transitions that β is capable of.

• ρ(α; β) = ρ(β) ∘ ρ(α) = {(ν, ω) : (ν, μ) ∈ ρ(α), (μ, ω) ∈ ρ(β)}
  That is, α; β can do any transitions that go through any intermediate state μ to which α can make a transition from the initial state ν and from which β can make a transition to the final state ω.

• ρ(α*) = ⋃_{n∈N} ρ(αⁿ) with αⁿ⁺¹ ≡ αⁿ; α and α⁰ ≡ ?true
  That is, α* can repeat α any number of times, i.e., for any n ∈ N, α* can act like the n-fold sequential composition αⁿ would.

We refer to a book [12] for a comprehensive background and for an elaboration how the case r = 0 (in which the only condition is φ(0) ⊨ H) is captured by the above definition for differential equations.
Time itself does not play a special role. Whenever a clock variable t is needed in an HP, it can be axiomatized by t′ = 1. Finally, observe how easily the relational semantics of HPs deals with the ubiquitous nondeterminism of hybrid systems. The same simplicity can be obtained also for a trace semantics of hybrid programs that retains the intermediate states during hybrid trajectories [12, 91].

Example 3.4. Continuing Example 3.2, Fig. 10a illustrates the structure of the transition system of (10) for the (unsafe) choice of H ≡ (v < 8). Fig. 10b illustrates how one particular transition from initial state ν to one final state ω follows the marked transitions through two iterations of the loop, which justifies (ν, ω) ∈ ρ(car_s).

Figure 10: Transition structure and transition example in simple car

Definable operators. HPs only provide the logically fundamental operators of hybrid systems. All classical WHILE programming constructs and all hybrid systems can be defined from those fundamental operators [12], including the ones we alluded to in the development of the Hybrid C language. We, e.g., write x′ = θ for the unrestricted differential equation x′ = θ & true. We allow differential equation systems and use vectorial notation. Vectorial assignments are definable from scalar assignments and ; using auxiliary variables.⁶ Other program constructs can be defined easily [12]. For example, nondeterministic assignments of any real value to x, if-then-else statements, and while loops can be defined by the following abbreviations, respectively:

    x := *  ≡  x′ = 1 ∪ x′ = −1
    if (H) then α else β fi  ≡  (?H; α) ∪ (?¬H; β)
    if (H) then α  ≡  (?H; α) ∪ ?¬H
    while (H) α  ≡  (?H; α)*; ?¬H    (11)

The reason why if (H) then α else β fi is the same as (?H; α) ∪ (?¬H; β), for example, is that, after the nondeterministic choice, exactly one of the two tests ?H and ?¬H will succeed and the other one will fail. Hence, even though the right-hand side of (11) starts out with a nondeterministic choice, only one choice will ever work out from any current state. That is, the only possible nondeterministic choices that are not aborted and discarded because of failing a subsequent test are those in which H holds and α executes, or in which ¬H holds and β executes. Since the if-then-else makes this determinism apparent that is implicit in the mutual exclusiveness of the test conditions, if-then-else is directly supported in the implementation of dL in the theorem prover KeYmaera [17], even if it is not needed in theory.

Nondeterministic assignment x := * assigns any real number to the variable x and is frequently used in hybrid system models to represent that arbitrary control choices are possible. Often, those arbitrary control choices are subsequently restricted to a possible range using a test.

⁶ A vectorial assignment x1 := θ1, ..., xn := θn is definable by x̀1 := x1; ...; x̀n := xn; x1 := θ̀1; ...; xn := θ̀n, where θ̀i is θi with xj replaced by x̀j for all j. Memorizing the old values of the xj in the x̀j before assigning to the xi is necessary for a simultaneous vectorial assignment if θi mentions another xj, which would already be overwritten if j < i.

Example 3.5 (Bouncing ball). Continuing Example 2.6, consider a hybrid program model of the bouncing ball:

    ( if (h = 0) then
        c := *; ?(0 ≤ c < 1);
        v := −cv
      fi;
      h′ = v, v′ = −g & h ≥ 0 )*

The if-then statement can be expanded using the definitions in (11), which leads to the hybrid program

    ( (?(h = 0);
        c := *; ?(0 ≤ c < 1);
        v := −cv
      ) ∪ (?h ≠ 0);
      h′ = v, v′ = −g & h ≥ 0 )*

Observe in both hybrid programs how the damping coefficient c is set to an arbitrary real number by way of c := * and then subsequently restricted by the test ?(0 ≤ c < 1) to lie within the interval [0, 1). The overall effect of c := *; ?(0 ≤ c < 1) is to assign an arbitrary real number from [0, 1) to c. This is a frequent modeling pattern: a nondeterministic assignment followed by a test with the requisite range restrictions.

Hierarchies. Hybrid programs are designed as a minimal extension of conventional discrete programs. They characterize hybrid systems succinctly by adding continuous evolution along differential equations as the only additional primitive operation to a regular basis of conventional discrete programs. Their operations are interpreted over the domain of real numbers, as required for hybrid systems. This gives rise to an elegant syntactic hierarchy [12] of discrete, continuous, and hybrid systems, for which the respective fragments of hybrid programs are a computational model, summarized in Table 1. The fragment consisting of just differential equations with evolution domain constraints corresponds to purely continuous dynamical systems [92]. The fragment of hybrid programs without differential equations corresponds to conventional discrete programs generalized over the reals or to discrete-time dynamical systems [93].
The fragment without discrete assignments corresponds to switched continuous systems [6, 93]. Only the composition of mixed discrete assignments and continuous evolutions gives rise to truly hybrid behavior.

Table 1: Classification of hybrid programs and correspondence to dynamical systems

  Hybrid program class                  Dynamical systems class
  differential equations                continuous dynamical systems
  no assignments                        switched continuous dynamical systems
  no differential equations             discrete dynamical systems
  no differential equations, over N     discrete while programs
  general hybrid programs               hybrid dynamical systems

4 Logical Characterizations of Hybrid Systems

Basic concept. Within a single specification and verification language, differential dynamic logic dL [9, 12, 14, 88] combines operational system models with means to talk about the states that are reachable by system transitions. Differential dynamic logic dL is a dynamic logic [94, 95] for hybrid systems. It combines first-order real arithmetic [96] with first-order modal logic [97, 98] and dynamic logic [94, 95] generalized to hybrid systems. (Nonlinear) real arithmetic is necessary for describing concepts like safe regions of the state space, and real-valued quantifiers are needed for quantifying over the possible values of system parameters or states.

The logic dL provides parametrized modal operators [α] and ⟨α⟩ that refer to the states reachable by hybrid program α and can be placed in front of any formula. The modal operators [α] and ⟨α⟩ refer to all (modal operator [α]) or some (modal operator ⟨α⟩) states reachable by following HP α. The formula [α]φ expresses that all states reachable by hybrid program α satisfy formula φ. Likewise, ⟨α⟩φ expresses that there is at least one state reachable by α for which φ holds.
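The syntactic classification of Table 1 above can be phrased as a small program over a hybrid program's syntax tree. The nested-tuple encoding below is our own hypothetical representation, chosen only for illustration:

```python
# Hypothetical nested-tuple syntax for hybrid programs (our own encoding):
# ("assign", x, theta), ("test", H), ("ode", equations, H),
# ("seq", a, b), ("choice", a, b), ("loop", a)

def statements(prog):
    """Yield the tag of every atomic statement occurring in the program."""
    tag = prog[0]
    if tag in ("assign", "test", "ode"):
        yield tag
    elif tag in ("seq", "choice"):
        yield from statements(prog[1])
        yield from statements(prog[2])
    elif tag == "loop":
        yield from statements(prog[1])
    else:
        raise ValueError(tag)

def classify(prog):
    """Place a hybrid program into the classes of Table 1 (syntactically)."""
    tags = set(statements(prog))
    if "ode" not in tags:
        return "discrete dynamical system"
    if "assign" not in tags:
        return "switched continuous dynamical system"
    return "hybrid dynamical system"

# A simple-car-style loop: (a := A or a := -b); x' = v, v' = a & v >= 0, repeated
car = ("loop", ("seq",
                ("choice", ("assign", "a", "A"), ("assign", "a", "-b")),
                ("ode", [("x", "v"), ("v", "a")], "v >= 0")))
assert classify(car) == "hybrid dynamical system"
assert classify(("loop", ("ode", [("x", "v")], "true"))) == \
    "switched continuous dynamical system"
```

Only the mixed case, with both assignments and differential equations, is classified as truly hybrid, matching the last row of Table 1.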
These modalities can be used to express necessary or possible properties of the transition behavior of α in a natural way. They can be nested or combined propositionally. The logic dL supports quantifiers like ∃p [α]⟨β⟩φ, which expresses that there is a choice of parameter p (expressed by ∃p) such that for all possible behaviors of hybrid program α (expressed by [α]) there is a reaction of hybrid program β (i.e., ⟨β⟩) that ensures φ. The logic dL is entirely flexible, so the parameter p that is quantified in these formulas may appear in the hybrid programs α, β as a system parameter as well as in the formula φ, where it would then be a parameter in the postcondition.

Definition 4.1 (dL formula). The formulas of differential dynamic logic (dL) are defined by the grammar (where φ, ψ are dL formulas, θ₁, θ₂ terms, x a variable, α a HP):

  φ, ψ ::= θ₁ = θ₂ | θ₁ ≥ θ₂ | ¬φ | φ ∧ ψ | ∀x φ | ∃x φ | [α]φ | ⟨α⟩φ

Operators >, ≤, <, ∨, →, ↔ can be defined as usual in classical logic, e.g., (φ → ψ) ≡ (¬φ ∨ ψ). We use the notational convention that quantifiers and modal operators bind strongly, i.e., their scope only extends to the formula immediately after. Thus, [α]φ ∧ ψ ≡ ([α]φ) ∧ ψ and ∀x φ ∧ ψ ≡ (∀x φ) ∧ ψ. In our notation, we also let ¬ bind stronger than ∧, which binds stronger than ∨, which binds stronger than → and ↔. Thus, ¬A ∧ B ∨ C → D ∨ E ∧ F ≡ (((¬A) ∧ B) ∨ C) → (D ∨ (E ∧ F)).

A dL formula is valid if it is true in all states (as will be defined in Def. 4.3 below). One common use case is the dL formula A → [α]B, which corresponds to a Hoare triple [99, 100], but for hybrid systems. It is valid if, for all states: if the dL formula A holds (in the initial state), then the dL formula B holds for all states reachable by following the HP α.
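Such a formula A → [α]B can at least be tested (though never proved) by sampling: draw initial states, keep those satisfying A, run finitely many sampled executions of α, and check B in every reached state. The toy program and predicates below are our own illustration, not from the paper:

```python
import random
random.seed(0)

def check_A_box_alpha_B(sample_init, run_alpha, A, B, trials=1000):
    """Sampling-based refutation check for A -> [alpha]B:
    returns a counterexample initial state, or None if none was found."""
    for _ in range(trials):
        nu = sample_init()
        if not A(nu):
            continue                     # only initial states satisfying A matter
        for omega in run_alpha(nu):      # finitely many sampled runs of alpha
            if not B(omega):
                return nu                # A held, but a reachable state violates B
    return None

# Toy alpha: set a := A_max, then follow v' = a for a sampled duration t
# (v(t) = v + a*t is the exact solution of this simple differential equation).
def run_alpha(nu):
    for _ in range(10):
        t = random.uniform(0.0, 10.0)
        yield {"v": nu["v"] + nu["A_max"] * t, "A_max": nu["A_max"]}

sample_init = lambda: {"v": random.uniform(-5, 5),
                       "A_max": random.uniform(-2, 2)}
A = lambda s: s["v"] >= 0 and s["A_max"] >= 0
B = lambda s: s["v"] >= 0

# No counterexample: from v >= 0 with A_max >= 0, v + A_max*t stays >= 0.
assert check_A_box_alpha_B(sample_init, run_alpha, A, B) is None
```

Sampling can only refute such a property; establishing its validity requires a proof in the dL calculus described below in this section.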
That is, A → [α]B is valid if B holds in all states reachable by HP α from initial states satisfying A.

Example 4.2 (Single car). First, consider a very simple dL formula:

  v ≥ 0 ∧ A ≥ 0 → [a := A; x′ = v, v′ = a] v ≥ 0

This dL formula expresses that, when, initially, the velocity v and maximal acceleration A are nonnegative, then all states reachable by the HP in the [·] modality have a nonnegative velocity (v ≥ 0). The HP first performs a discrete assignment a := A setting the acceleration a to maximal acceleration A, and then, after the sequential composition (;), follows the differential equation x′ = v, v′ = a where the derivative of the position x is the velocity (x′ = v) and the derivative of the velocity is the chosen acceleration a (v′ = a). This dL formula is valid, because the velocity will never become negative when accelerating. It could, however, become negative when choosing a negative acceleration a < 0, which is what this simple dL formula does not allow.

Next, consider the following dL formula, where car_s denotes the HP from (10) in Example 3.2 that always allows braking but acceleration only when χ ≡ v ≤ 20 holds:

  v ≥ 0 ∧ A ≥ 0 ∧ b > 0 → [car_s] v ≥ 0

This dL formula is trivially valid, simply because the postcondition v ≥ 0 is implied by both the precondition and by the evolution domain constraint of (10). Because the invariant is (trivially) implied by the precondition, v ≥ 0 also holds initially. It is also implied by the evolution domain constraint, and the system has no runs that leave the evolution domain constraint. Note that this dL formula would not be valid, however, if we removed the evolution domain constraint, because the controller would then be allowed nondeterministically to choose a negative acceleration (a := −b) and stay in the continuous evolution arbitrarily long.

Semantics.
The meaning of differential dynamic logic is a suitable combination of the semantics of first-order real arithmetic [96], first-order modal logic [97, 98], and dynamic logic [94, 95]. The semantics defines which formula φ is true in which state ν. We write ν ⊨ φ if φ is true in state ν.

Definition 4.3 (dL semantics). The satisfaction relation ν ⊨ φ for dL formula φ in state ν is defined inductively and as usual in first-order modal logic (of real arithmetic):

• ν ⊨ (θ₁ = θ₂) iff ⟦θ₁⟧ν = ⟦θ₂⟧ν
  That is, an equation is true in a state ν iff the terms on both sides evaluate to the same number.

• ν ⊨ (θ₁ ≥ θ₂) iff ⟦θ₁⟧ν ≥ ⟦θ₂⟧ν
  That is, a greater-or-equals inequality is true in a state ν iff the term on the left evaluates to a number that is greater than or equal to the value of the right term.

• ν ⊨ ¬φ iff it is not the case that ν ⊨ φ
  That is, a negated formula ¬φ is true in state ν iff the formula φ itself is not true in ν.

• ν ⊨ φ ∧ ψ iff ν ⊨ φ and ν ⊨ ψ
  That is, a conjunction is true in a state iff both conjuncts are true in said state.

• ν ⊨ ∀x φ iff ν_x^d ⊨ φ for all d ∈ ℝ
  That is, a universally quantified formula ∀x φ is true in a state iff its kernel φ is true in all variations of the state, no matter what real number d the quantified variable x evaluates to in the variation ν_x^d.

• ν ⊨ ∃x φ iff ν_x^d ⊨ φ for some d ∈ ℝ
  That is, an existentially quantified formula ∃x φ is true in a state iff its kernel φ is true in some variation of the state, for a suitable real number d that the quantified variable x evaluates to in the variation ν_x^d.

• ν ⊨ [α]φ iff ω ⊨ φ for all ω with (ν, ω) ∈ ρ(α)
  That is, a box modal formula [α]φ is true in state ν iff postcondition φ is true in all states ω that are reachable by running α from ν.

• ν ⊨ ⟨α⟩φ iff ω ⊨ φ for some ω with (ν, ω) ∈ ρ(α)
  That is, a diamond modal formula ⟨α⟩φ is true in state ν iff postcondition φ is true in at least one state ω that is reachable by running α from ν.

If ν ⊨ φ, then we say that φ is true at ν. A dL formula φ is valid, written ⊨ φ, iff ν ⊨ φ for all states ν.

Axiomatization. Differential dynamic logic dL is not just a specification language but also a verification language for hybrid systems. The logic dL comes with an axiomatization in proof calculi, including a Gentzen-type sequent calculus suitable for automation [9] as well as a Hilbert-type calculus characterizing the logical essentials [14]. Using this axiomatization, interesting properties of hybrid systems can be verified by a proof from the axioms. The Hilbert-type axiomatization of differential dynamic logic [13, 14] is shown in Fig. 11. Here, we highlight a few rules and refer to prior work [13, 14] for a detailed explanation of the axiomatization.

  [:=]  [x := θ]φ(x) ↔ φ(θ)
  [?]   [?H]φ ↔ (H → φ)
  [′]   [x′ = θ]φ ↔ ∀t ≥ 0 [x := y(t)]φ                         (y′(t) = θ)
  [&]   [x′ = θ & H]φ ↔ ∀t₀ = x₀ [x′ = θ]([x′ = −θ](x₀ ≥ t₀ → H) → φ)
  [∪]   [α ∪ β]φ ↔ [α]φ ∧ [β]φ
  [;]   [α; β]φ ↔ [α][β]φ
  [∗]   [α∗]φ ↔ φ ∧ [α][α∗]φ
  K     [α](φ → ψ) → ([α]φ → [α]ψ)
  I     [α∗](φ → [α]φ) → (φ → [α∗]φ)
  C     [α∗]∀v>0 (ϕ(v) → ⟨α⟩ϕ(v − 1)) → ∀v (ϕ(v) → ⟨α∗⟩∃v≤0 ϕ(v))   (v ∉ α)
  B     ∀x [α]φ → [α]∀x φ                                        (x ∉ α)
  V     φ → [α]φ                                                  (FV(φ) ∩ BV(α) = ∅)
  G     from φ infer [α]φ

Figure 11: Differential dynamic logic axiomatization

We write ⊢ φ iff dL formula φ can be proved with dL rules from dL axioms (including first-order rules and axioms); see Fig. 11. That is, a dL formula is inductively defined to be provable in the dL calculus if it is an instance of a dL axiom or if it is the conclusion (below the rule bar) of an instance of one of the dL proof rules (Gödel generalization G, modus ponens, ∀-generalization) whose premises (above the rule bar) are all provable. The dL axiomatization is sound and relatively complete [9, 14].

In axiom [′], y(·) is the (unique [34, Theorem 10.VI]) solution of the symbolic initial-value problem y′(t) = θ, y(0) = x. Given such a solution y(·), continuous evolution along that differential equation can be replaced by a discrete assignment x := y(t) with an additional quantifier for the evolution time t. It goes without saying that variables like t are fresh in Fig. 11. Notice that conventional initial-value problems are numerical, with concrete numbers x ∈ ℝ^d as initial values, not symbols x [34]. This would not be enough for our purpose, because we need to consider all states in which the system could start, which may be uncountably many. That is why axiom [′] solves one symbolic initial-value problem, because we could hardly solve uncountably many numerical initial-value problems. The side condition that y(·) is, indeed, a solution of the symbolic initial-value problem is decidable for simple solutions (such as polynomials).
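The compositional axioms can be checked directly against the semantics of Def. 4.3 for the discrete, loop-free fragment, where ρ(α) from a given state is a finite set of states. The evaluator below is a minimal sketch of our own (states as Python dicts, terms and formulas as functions); it confirms the equivalences [∪] and [;] on sampled states:

```python
import itertools

# rho(alpha) computed as a list of reachable states for the discrete,
# loop-free fragment (assignments, tests, sequencing, choice) of Def. 4.3.

def rho(prog, state):
    tag = prog[0]
    if tag == "assign":                      # x := theta
        _, x, theta = prog
        successor = dict(state)
        successor[x] = theta(state)
        return [successor]
    if tag == "test":                        # ?H: continue iff H holds, else no run
        return [state] if prog[1](state) else []
    if tag == "seq":                         # alpha; beta
        return [w for m in rho(prog[1], state) for w in rho(prog[2], m)]
    if tag == "choice":                      # alpha or beta, nondeterministically
        return rho(prog[1], state) + rho(prog[2], state)
    raise ValueError(tag)

def box(prog, phi, state):                   # nu |= [alpha]phi
    return all(phi(w) for w in rho(prog, state))

alpha = ("assign", "a", lambda s: s["A"])            # a := A
beta = ("seq", ("test", lambda s: s["v"] <= 20),     # ?(v <= 20); a := -b
              ("assign", "a", lambda s: -s["b"]))
phi = lambda s: s["a"] > 0

# Axioms [choice] and [;] hold in every sampled state:
for v, A, b in itertools.product([0, 10, 30], [0, 2], [1, 3]):
    nu = {"v": v, "A": A, "b": b, "a": 0}
    assert box(("choice", alpha, beta), phi, nu) == \
        (box(alpha, phi, nu) and box(beta, phi, nu))
    assert box(("seq", alpha, beta), phi, nu) == \
        box(alpha, lambda m: box(beta, phi, m), nu)
```

Note how the nested box in the last assertion mirrors the right-hand side [α][β]φ of axiom [;]: the postcondition passed to the outer box is itself a box formula.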
For more complicated differential equations, differential invariants and related techniques [10, 101, 102] are used to prove properties of differential equations by induction.

Sequential compositions are proven using nested modalities in axiom [;]. From right to left: If, after all α-runs, all β-runs lead to states satisfying φ (i.e., [α][β]φ holds), then also all runs of the sequential composition α; β lead to states satisfying φ (i.e., [α; β]φ holds). The converse implication uses the fact that if after all α-runs all β-runs lead to φ (i.e., [α][β]φ), then all runs of α; β lead to φ (that is, [α; β]φ), because the runs of α; β are exactly those that first do any α-run, followed by any β-run. Again, it is crucial that dL is a full logic that considers reachability statements as modal operators, which can be nested, for then both sides in [;] are dL formulas again (unlike in Hoare logic [100], where intermediate assertions need to be guessed or computed as weakest preconditions for β and φ). Note that dL can directly express weakest preconditions, because the dL formula [β]φ, or any formula equivalent to it, already is the weakest precondition for β and φ. Strongest postconditions are expressible in dL as well.

Axiom I is an induction schema for repetitions. Axiom I says that, if, after any number of repetitions of α, invariant φ remains true after one (more) iteration of α (i.e., [α∗](φ → [α]φ)), then φ holds after any number of repetitions of α (i.e., [α∗]φ) if φ holds initially. That is, if φ is true after running α whenever φ has been true before, then, if φ holds in the beginning, φ will continue to hold, no matter how often we repeat α in [α∗]φ.

The dL axiomatization in Fig.
11 uses a modular axiom [&] that reduces differential equations with evolution domain constraints to differential equations without them by checking the evolution domain constraint backwards along the reverse flow. It checks H backwards from the end of the evolution up to the initial time t₀, using that x′ = −θ follows the same flow as x′ = θ, but backwards. See prior work for an elaboration and more details [13].

5 Hybrid Relations between Discrete and Continuous Dynamical Systems

Discrete dynamical systems and continuous dynamical systems start out on quite different premises, emphasizing step-wise discrete successions of change (Section 2.2) versus smooth or continuous forms of change (Section 2.3), respectively. That makes discrete and continuous dynamical systems and, thus, discrete and continuous computation, appear to be fundamentally and characteristically different. In fact, this difference was one important original motivation for inventing hybrid systems in the first place (Section 2.4) as a way of describing how two independent and different sources of dynamical behavior combine [2, 51, 103]; we refer to the literature for a review of the history of hybrid systems [104]. Of course, this also makes analysis questions of hybrid systems highly undecidable (not even semidecidable) and hybrid systems logics necessarily incomplete, because they combine two independent sources of incompleteness [9], the discrete and the continuous. Each of those sources of incompleteness follows by a simple corollary [9] to Gödel's incompleteness theorem [105].

Surprisingly, however, it turns out that discrete and continuous dynamics are not even quite so unrelated [9, 14]. For example, it has been shown that three-dimensional differential equations [106, 107] can simulate universal Turing machines on the relevant grid points.
In an extended sense, with approximation and robustness, so can polynomial differential equations [108]. The basic observation making these results happen is that Turing machines only take on values on a grid in time and space. That is, as discrete dynamical systems, they produce state change at a certain rate, say, 1 computation step per second, since T = ℤ or T = ℕ. They also only take on state values from a discrete set, say X = ℤ^d. In a nutshell, continuous dynamical systems can be made to agree with the intended computations of a classical discrete Turing machine on a discrete grid, say ℤ^d, that is chosen to correspond to the discrete states of the discrete dynamical system of a classical Turing machine. At the values off the grid, the continuous dynamical system can take on any value to continuously move from the previous state at time n ∈ ℕ to the next state at time n + 1. Conversely, computability results for solutions of differential equations hold on open sets under existence and uniqueness assumptions and when rational interval approximations are given [109], which are necessary assumptions [31]. This result is based on enumerating all tubes around solutions and checking whether a tube covers the solution with the required accuracy.

What about general discrete dynamical systems, which, like Turing machines, have a discrete time domain T = ℕ, but, unlike classical Turing machines, can compute on a dense continuous state space X = ℝ^d rather than on a discrete X = ℤ^d or even finite state space X = {0, 1}^d like Turing machines do? In that case, the relevant states are the dense set ℝ^d, not just the grid ℤ^d with arbitrary values off the grid, i.e., on ℝ^d \ ℤ^d. Can discrete dynamical systems be simulated in some sense by continuous dynamical systems?

And what about hybrid systems? Hybrid systems mix discrete and continuous dynamics. Can their mixed discrete and continuous behavior be captured in some way using continuous dynamics alone?
What if its behavior consists of some fixed finite number of discrete and continuous transitions? What if the hybrid system performs an arbitrary unknown number of repetitions of interactions of discrete and continuous dynamics, like they usually do?

What about the other way around? Since at least some discrete dynamical systems like Turing machines can be emulated in continuous systems, can continuous systems also somehow be characterized in discrete systems?

Naïve ways of relating discrete and continuous dynamical systems are bound to fail. It is, for example, not generally the case that a property F transfers from a continuous system to its Euler discretization, nor vice versa. That is, neither the following equivalence nor the left-to-right implication nor the right-to-left implication generally holds:

  [x′ = θ]F  ↔  [(x := x + hθ)∗]F                    (12)

This formula would relate a property F of a continuous dynamical system x′ = θ to property F of its Euler discretization (x := x + hθ)∗ with discretization step size h > 0, if only it were true. Unfortunately, as such, the formula is not generally valid. Fig. 12 illustrates a counterexample to formula (12) from prior work [14], to which we refer for further details. The error of the Euler discretization grows quickly compared to the true solution in Fig. 12. For example, F ≡ (x² + y² = 1) is an invariant of the true solution but not of its approximation. On the bright side, the error can be smaller for some (not all) smaller discretization steps h, and the error is quite reasonable for a certain period of time.

These aspects are one cornerstone for a complete logical alignment of discrete and continuous dynamics using constructive proof-theoretical techniques [14]. The key to understanding how discrete and continuous dynamics relate is via their joint generalization as hybrid systems in their logical characterizations as fragments of differential dynamic logic [14].
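The failure of (12) can be reproduced numerically for the rotation dynamics behind Fig. 12, x′ = y, y′ = −x: the exact flow keeps x² + y² = 1 invariant, while one Euler step multiplies the squared radius by exactly 1 + h², so the invariant fails for the discretization. A short sketch (the iteration count is our own choice):

```python
def euler_orbit(x, y, h, steps):
    """Euler discretization (x, y) := (x + h*y, y - h*x) of x' = y, y' = -x."""
    for _ in range(steps):
        x, y = x + h * y, y - h * x
    return x, y

h, steps = 0.5, 24                      # the step size h = 1/2 of Fig. 12
x, y = euler_orbit(1.0, 0.0, h, steps)
r2_euler = x * x + y * y

# The true solution t -> (cos t, -sin t) stays exactly on the unit circle,
# so F, i.e. x^2 + y^2 = 1, holds along it; each Euler step, however, scales
# the squared radius by exactly (1 + h^2), so F fails for the discretization:
# (x + h*y)^2 + (y - h*x)^2 = (1 + h^2) * (x^2 + y^2).
assert abs(r2_euler - (1 + h * h) ** steps) < 1e-6
assert r2_euler > 100                   # far off the invariant circle already
```

So the right-to-left direction of (12) fails here: F holds for the continuous system but is violated by its Euler discretization after only a handful of steps.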
Hybrid systems have been aligned with both continuous dynamical systems [9] and with discrete dynamical systems [14] by constructive completeness arguments showing that all valid properties of hybrid systems are provable in the dL axiomatization from elementary properties of continuous systems, to which they reduce constructively, and likewise for discrete systems [9, 14]. Since every discrete system is a hybrid system and every continuous system also is a hybrid system, these two reductions mutually align discrete and continuous systems with one another [14, 110]. That is, discrete and continuous systems can be related to one another indirectly after embedding both into the joint generalization of hybrid systems and then analyzing how hybrid systems relate to their fragments; cf. Fig. 1.

Figure 12: (left) Dark circle shows true solution, light line segments show Euler approximation for discretization step h = 1/2. (right) Dark true bounded trigonometric solution and Euler approximation in lighter colors with increasing errors over time t.

From Hybrid to Continuous. Using the proof calculus of dL, the problem of proving properties of hybrid systems reduces completely to proving properties of elementary continuous systems [9].

Theorem 5.1 (Continuous relative completeness of dL [9, 14]). The dL calculus is a sound and complete axiomatization of hybrid systems relative to differential equations, i.e., every valid dL formula can be derived from elementary properties of differential equations.

In particular, if we want to prove properties of hybrid systems, all we need to do is to prove properties of continuous systems, because the dL calculus completely handles all other steps in the proofs that deal with discrete or hybrid systems.
Of course, one has to be able to handle continuous systems in order to understand hybrid systems, because continuous systems are a special case of hybrid systems. But it turns out that this is actually all that one needs in order to verify hybrid systems, because the dL proof calculus completely axiomatizes all the rest of hybrid systems.

Since the proof of Theorem 5.1 is constructive, there is even a complete constructive reduction of properties of hybrid systems to corresponding properties of continuous systems. The dL calculus can prove hybrid systems properties exactly as well as properties of the corresponding continuous systems can be verified. One important step in the proof of Theorem 5.1 shows that all required invariants and variants for repetitions can be expressed in the logic dL. Furthermore, the dL calculus defines a decision procedure for dL sentences (i.e., closed formulas) relative to an oracle for differential equations [14].

This result implies that the continuous dynamics dominates the discrete dynamics since, once the continuous dynamics is handled, all discrete and hybrid dynamics can be handled as well. Therefore, verification of hybrid systems is not more complex than the verification of continuous systems. In particular, discrete systems verification is not more complex than the verification of continuous systems. This is reassuring, because we get the challenges of discrete dynamics solved for free (by the dL proof calculus) once we address continuous dynamics. In addition to its theoretical alignment of the landscape of complexity and reductions, this result emphasizes the importance of studying verification techniques for continuous systems, because the dL calculus makes those techniques hybrid.

From Hybrid to Discrete. In a certain sense, it may appear to be more complicated to handle continuous dynamics than discrete dynamics.
If the continuous dynamics were not just subsuming discrete dynamics but were "inherently more", then one might wonder whether hybrid systems verification could be understood with a discrete dynamical system like a classical computer at all. Of course, such a naïve consideration would be quite insufficient, because, e.g., properties of objects in uncountable continuous spaces can very well follow from properties of finitary discrete objects. Finite dL proof objects, for example, already entail properties about uncountable continuous state spaces of systems. Fortunately, all such worries about the insufficiency of discrete ways of understanding continuous phenomena can be settled once and for all by studying the proof-theoretical relationship between discrete and continuous dynamics. We have shown not only that the axiomatization of dL is complete relative to differential equations, but that it is also complete relative to discrete systems [14].

Theorem 5.2 (Discrete relative completeness of dL [14]). The dL calculus is a sound and complete axiomatization of hybrid systems relative to discrete systems, i.e., every valid dL formula can be derived from elementary properties of discrete systems.

Thus, the dL calculus can also prove properties of hybrid systems exactly as well as properties of discrete systems can be proved. Again, the proof of Theorem 5.2 is constructive, entailing that there is a constructive way of reducing properties of hybrid systems to properties of discrete systems using the dL calculus.
Furthermore, the dL calculus defines a decision procedure for dL sentences relative to an oracle for discrete systems [14]. Theorems 5.1 and 5.2 lead to a surprising result aligning discrete and continuous systems properties.

Theorem 5.3 (dL equi-expressibility [14]). The logic dL is expressible in both its discrete and in its continuous fragment: for each dL formula φ there is a continuous formula φ♭ that is equivalent, i.e., ⊨ φ ↔ φ♭, and a discrete formula φ♯ that is equivalent, i.e., ⊨ φ ↔ φ♯. The converse holds trivially. Furthermore, the construction of φ♭ and φ♯ is effective (and the equivalences are provable in the dL calculus).

The proof of the surprising result Theorem 5.3 is constructive but rather nontrivial (some 20 pages). It uses a combination of Euler discretizations leading to "proof-uniform" approximations based on the existence of (not the values of) on-the-fly local Lipschitz bounds, together with topological arguments on semialgebraic base sets relating sets to quantified open neighborhoods, and logical liftings using the Barcan axiom as well as real pairings by differential equations and relations between modalities and quantifiers. The usual challenges of evolution domain constraints are handled based on the "there and back again" axiom [&]. While several more efficient shortcuts exist, the overall proof is optimized for simplicity of the proof, not for efficiency of the result, so it adds unnecessary complexity.
But the proof also identifies cases in which significantly more efficient reductions are possible, such as in the case of proving closed properties of open invariants. Whatever the added complexity may be, Theorem 5.3 does have interesting fundamental consequences.

Consequently, all hybrid questions (and, thus, also all discrete questions) can be formulated constructively equivalently as purely continuous questions, and all hybrid questions (also all continuous questions) can be formulated constructively equivalently as purely discrete questions. There is a constructive and provable reduction from either side to the other.

As a corollary to Theorems 5.1 and 5.2, we can proof-theoretically and constructively equate

  hybrid = continuous = discrete

by a complete logical alignment, in the sense that proving properties of either of those classes of dynamical systems is the same as proving properties of any other of those classes, because all properties of one system can be provably reduced in a complete, constructive, and equivalent way to any of the other system classes. Even though each kind of dynamics comes from fundamentally different principles, they all meet in terms of their proof problems being interreducible, even constructively; recall Fig. 1. The proof problem of hybrid systems, the proof problem of continuous systems, and the proof problem of discrete systems are, thus, equivalent. Any proof technique for one of these classes of systems completely lifts to proof techniques for the other classes of systems.

Since the proof problems interreduce constructively, every technique that is successful for one kind of dynamics lifts to the other kind of dynamics through the dL calculus in a provably perfect way. Induction, for example, is the primary technique for proving properties of discrete systems. Hence, by Theorem 5.2, there is a corresponding induction technique for continuous systems and for hybrid systems. And, indeed, differential invariants [10, 101] are such an induction technique for differential equations that has been used very successfully for verifying hybrid systems with more advanced differential equations [12, 111–115]. In fact, differential invariants had already been introduced in 2008 [10], before Theorem 5.2 was proved [14], but Theorem 5.2 implies that a differential invariant induction technique has to exist. These results also show that there are sound ways of using discretization for differential equations [14] and that numerical integration schemes like, e.g., Euler's method or more elaborate methods can be used for hybrid systems verification, which is not at all clear a priori due to inherent numerical approximation errors, which may blur decisions either way [31].

Some ways of doing practical proof search and generation of invariants have been addressed in previous work [111, 112]. But many other proof search procedures could be useful to generate invariants more efficiently in practice. Such advances include, for example, techniques using the differential radical invariants extension of differential invariants [116] as well as combinations of differential invariants with Lie invariants [117] using differential cuts [10, 101]. Differential radical invariants provide a decision procedure for algebraic invariants of algebraic differential equations and a corresponding automatic invariant generation technique based on symbolic linear algebra [116].
Differential cuts, instead, generalize Gentzen's cut to differential equations but are fundamental, because they do not admit differential cut elimination [10, 101].

6 Conclusions and Future Work

This article gave a light-weight overview of analog and hybrid computing models from a dynamical systems perspective, with a tour of discrete dynamical systems, continuous dynamical systems, and their common generalization as hybrid (dynamical) systems, culminating in a logic and programming languages view of dynamical systems. The focus in this article was on an exposition of the basic principles and ideas. Deeper levels of sophistication are reserved for more in-depth expositions [12–14, 20]. The primary perspective here was on identifying and relating some surprising commonalities of discrete and continuous dynamics using the characterization of hybrid systems in differential dynamic logic [9, 12–14]. More consequences of the complete proof-theoretical alignment are discussed in previous work [14]. We also remark that the approach shown in this paper generalizes to distributed hybrid systems [65], stochastic hybrid systems [76], and hybrid games [118].

The study of the relations of discrete and continuous systems is not only very exciting but also results in surprising relations [9, 13, 14, 20, 44, 106–108], bringing up many interesting questions for future work. We highlight that the complete alignments readily identify important cases for which the complexity is lower than what the constructive reductions use [14]. The reason is that the constructive proofs are optimized for simplicity, not efficiency. This raises the question of the inherent complexity of the reductions.
What is the lowest complexity achievable in which case?

Acknowledgments

We thank Gilles Dowek, Nachum Dershowitz, Olivier Bournez, Daniel Graça, and Yuri Gurevich for helpful feedback on this article.

This material is based upon work supported by the National Science Foundation under NSF CAREER Award CNS-1054246, NSF EXPEDITION CNS-0926181, and under Grant No. CNS-0931985, by DARPA under agreement number FA8750-12-2-0291, by the Army Research Office under Award No. W911NF-09-1-0273, by University Transportation Center program grant funds from the U.S. Department of Transportation, and by the German Research Council (DFG) as part of the Transregional Collaborative Research Center "Automatic Verification and Analysis of Complex Systems" (SFB/TR 14 AVACS).

References

[1] Laurent Doyen, Goran Frehse, George J. Pappas, and André Platzer. Verification of hybrid systems. In Edmund M. Clarke, Thomas A. Henzinger, and Helmut Veith, editors, Handbook of Model Checking, chapter 28. Springer, 2015.

[2] Anil Nerode and Wolf Kohn. Models for hybrid systems: Automata, topologies, controllability, observability. In Grossman et al. [119], pages 317–356. ISBN 3-540-57318-6.

[3] Rajeev Alur, Costas Courcoubetis, Nicolas Halbwachs, Thomas A. Henzinger, Pei-Hsin Ho, Xavier Nicollin, Alfredo Olivero, Joseph Sifakis, and Sergio Yovine. The algorithmic analysis of hybrid systems. Theor. Comput. Sci., 138(1):3–34, 1995.

[4] Michael S. Branicky. General hybrid dynamical systems: Modeling, analysis, and control. In Rajeev Alur, Thomas A. Henzinger, and Eduardo D. Sontag, editors, Hybrid Systems, volume 1066 of LNCS, pages 186–200. Springer, 1995. ISBN 3-540-61155-X.

[5] Thomas A. Henzinger. The theory of hybrid automata. In LICS, pages 278–292, Los Alamitos, 1996. IEEE Computer Society. doi:10.1109/LICS.1996.561342.

[6] Michael S. Branicky, Vivek S. Borkar, and Sanjoy K. Mitter.
A unified framework\nfor hybrid control: Model and optimal control theory. IEEE T. Automat. Contr. , 43\n(1):31–45, 1998.\n[7] Jennifer M. Davoren and Anil Nerode. Logics for hybrid systems. IEEE , 88(7):\n985–1010, July 2000.\n[8] Rajeev Alur, Thomas Henzinger, Gerardo La \u000berriere, and George J. Pappas. Dis-\ncrete abstractions of hybrid systems. Proc. IEEE , 88(7):971–984, 2000.\n[9] André Platzer. Di \u000berential dynamic logic for hybrid systems. J. Autom. Reas. , 41\n(2):143–189, 2008. ISSN 0168-7433. doi:10.1007 /s10817-008-9103-8.\n[10] André Platzer. Di \u000berential-algebraic dynamic logic for di \u000berential-algebraic\nprograms. J. Log. Comput. , 20(1):309–352, 2010. ISSN 0955-792X.\ndoi:10.1093 /logcom /exn070.\n[11] André Platzer. Di\u000berential Dynamic Logics: Automated Theorem Proving for Hy-\nbrid Systems . PhD thesis, Department of Computing Science, University of Olden-\nburg, Dec 2008. Appeared with Springer.\n[12] André Platzer. Logical Analysis of Hybrid Systems: Proving Theorems for\nComplex Dynamics . Springer, Heidelberg, 2010. ISBN 978-3-642-14508-7.\ndoi:10.1007 /978-3-642-14509-4.\n[13] André Platzer. Logics of dynamical systems. In LICS [120], pages 13–24. ISBN\n978-1-4673-2263-8. doi:10.1109 /LICS.2012.13.\n[14] André Platzer. The complete proof theory of hybrid systems. In LICS [120], pages\n541–550. ISBN 978-1-4673-2263-8. doi:10.1109 /LICS.2012.64.\n[15] Thomas A. Henzinger, Pei-Hsin Ho, and Howard Wong-Toi. HyTech: The next\ngeneration. In IEEE Real-Time Systems Symposium , pages 56–65, 1995.\n[16] Stefan Ratschan and Zhikun She. Safety verification of hybrid systems by constraint\npropagation-based abstraction refinement. Trans. on Embedded Computing Sys. , 6\n(1):8, 2007. ISSN 1539-9087.\n[17] André Platzer and Jan-David Quesel. KeYmaera: A hybrid theorem prover for\nhybrid systems. In Alessandro Armando, Peter Baumgartner, and Gilles Dowek,\neditors, IJCAR , volume 5195 of LNCS , pages 171–178. Springer, 2008. 
ISBN 978-\n3-540-71069-1. doi:10.1007 /978-3-540-71070-7_15.\n[18] Goran Frehse, Colas Le Guernic, Alexandre Donzé, Scott Cotton, Rajarshi Ray,\nOlivier Lebeltel, Rodolfo Ripado, Antoine Girard, Thao Dang, and Oded Maler.\nSpaceEx: Scalable verification of hybrid systems. In Ganesh Gopalakrishnan and\nShaz Qadeer, editors, CAV, volume 6806 of LNCS , pages 379–395. Springer, 2011.\nISBN 978-3-642-22109-5.\n[19] Xin Chen, Erika Ábrahám, and Sriram Sankaranarayanan. Flow*: An analyzer for\nnon-linear hybrid systems. In Natasha Sharygina and Helmut Veith, editors, CAV,\nvolume 8044 of LNCS , pages 258–263. Springer, 2013. ISBN 978-3-642-39798-1.\ndoi:10.1007 /978-3-642-39799-8_18.\n[20] André Platzer. Dynamic logics of dynamical systems. CoRR , abs/1205.4788, 2012.\n[21] Henri Poincaré. Sur les courbes définies par une équation di \u000bérentielle. Oeuvres ,\n1, 1892. Paris.\n[22] Morris W. Hirsch, Stephen Smale, and Robert L. Devaney. Di\u000berential Equations,\nDynamical Systems, and an Introduction to Chaos . Academic Press, 2 edition,\n2003.\n[23] Oded Galor. Discrete Dynamical Systems . Springer, 2010.\n[24] Edmund M. Clarke, Orna Grumberg, and Doron A. Peled. Model Checking . MIT\nPress, Cambridge, MA, USA, 1999. ISBN 0-262-03270-8.\n[25] Christel Baier, Joost-Pieter Katoen, and Kim Guldstrand Larsen. Principles of\nModel Checking . MIT Press, 2008. ISBN 978-0262026499.\n[26] Krzysztof R. Apt, Frank S. de Boer, and Ernst-Rüdiger Olderog. Verification of\nSequential and Concurrent Programs . Springer, 3rd edition, 2010.\n[27] Lenore Blum, Felipe Cucker, Michael Shub, and Steve Smale. Complexity and Real\nComputation . Springer, 1998. ISBN 0-387-98281-7.\n[28] Marian Boykan Pour-El and Ian Richards. Computability in Analysis and Physics .\nSpringer, 1989.\n[29] Klaus Weihrauch. Computable Analysis . Springer, 2005. ISBN 978-3-540-26179-\n7.\n[30] Michael O. Rabin and Dana Scott. 
Finite automata and their decision problems.\nIBM Journal of Research and Development , 3(2):114–125, 1959.\n[31] André Platzer and Edmund M. Clarke. The image computation problem in hybrid\nsystems model checking. In Alberto Bemporad, Antonio Bicchi, and Giorgio But-\ntazzo, editors, HSCC , volume 4416 of LNCS , pages 473–486. Springer, 2007. ISBN\n978-3-540-71492-7. doi:10.1007 /978-3-540-71493-4_37.\n[32] Lawrence Perko. Di\u000berential equations and dynamical systems . Springer, New\nYork, 3 edition, 2006. ISBN 978-0387951164.\n[33] Akitoshi Kawamura and Stephen A. Cook. Complexity theory for operators in\nanalysis. In Leonard J. Schulman, editor, STOC , pages 495–502. ACM, 2010. ISBN\n978-1-4503-0050-6. doi:10.1145 /1806689.1806758.\n[34] Wolfgang Walter. Ordinary Di \u000berential Equations . Springer, 1998. ISBN 978-\n0387984599.\n[35] Olivier Bournez, Manuel Lameiras Campagnolo, Daniel S. Graça, and Emmanuel\nHainry. Polynomial di \u000berential equations compute all real computable functions on\ncomputable compact intervals. Journal of Complexity , 23:317–335, 2007.\n[36] Claude Elwood Shannon. Mathematical theory of the di \u000berential analyzer. J. Math.\nPhys. , 20:337–354, 1941.\n[37] V . Bush. The di \u000berential analyzer. a new machine for solving di \u000berential equations.\nJournal of the Franklin Institute , 212(4):447 – 488, 1931. ISSN 0016-0032.\n[38] Daniel Silva Graça and José Félix Costa. Analog computers and recursive functions\nover the reals. J. Complexity , 19(5):644–664, 2003.\n[39] Cristopher Moore. Recursion theory on the reals and continuous-time computation.\nTheor. Comput. Sci. , 162(1):23–44, 1996. doi:10.1016 /0304-3975(95)00248-0.\n[40] Thomas Chadzelek and Günter Hotz. Analytic machines. Theor. Comput. Sci. , 219\n(1-2):151–167, 1999.\n[41] Alexander Moshe Rabinovich. Automata over continuous time. Theor. Comput.\nSci., 300(1-3):331–363, 2003. doi:10.1016 /S0304-3975(02)00331-6.\n[42] Boris A. Trakhtenbrot. 
Automata, circuits, and hybrids: Facets of continuous time.\nIn Fernando Orejas, Paul G. Spirakis, and Jan van Leeuwen, editors, ICALP , volume\n2076 of LNCS , pages 4–23. Springer, 2001. ISBN 3-540-42287-0. doi:10.1007 /3-\n540-48224-5_2.\n[43] Otomar Hájek. Discontinuous di \u000berential equations, I. Journal of Di \u000beren-\ntial Equations , 32(2):149 – 170, 1979. ISSN 0022-0396. doi:10.1016 /0022-\n0396(79)90056-1.\n[44] Gilles Dowek. The physical Church–Turing thesis and non-deterministic compu-\ntation over the real numbers. Philosophical Transactions of the Royal Society A:\nMathematical, Physical and Engineering Sciences , 370(1971):3349–3358, 2012.\ndoi:10.1098 /rsta.2011.0322.\n[45] Jean-Pierre Aubin and Arrigo Cellina. Di\u000berential Inclusions: Set-Valued Maps\nand Viability Theory . Springer, 1984.\n[46] Sarah M. Loos, André Platzer, and Ligia Nistor. Adaptive cruise control: Hybrid,\ndistributed, and now formally verified. In Michael Butler and Wolfram Schulte,\neditors, FM, volume 6664 of LNCS , pages 42–56. Springer, 2011. ISBN 978-3-\n642-21436-3. doi:10.1007 /978-3-642-21437-0_6.\n[47] Rafal Goebel, Ricardo G. Sanfelice, and Andrew R. Teel. Hybrid dynamical sys-\ntems. IEEE Control Systems Magazine , 29(2):28–93, 2009.\n[48] Rod Cross. The coe \u000ecient of restitution for collisions of happy balls, unhappy balls,\nand tennis balls. Am. J. Phys. , 68(11):1025–1031, 2000. doi:10.1119 /1.1285945.\n[49] André Platzer. A complete axiomatization of di \u000berential game logic for hy-\nbrid games. Technical Report CMU-CS-13-100R, School of Computer Science,\nCarnegie Mellon University, Pittsburgh, PA, January, Revised and extended in July\n2013.\n[50] Anil Nerode and Wolf Kohn. Models for hybrid systems: Automata, topologies,\ncontrollability, observability. In Hybrid Systems , pages 317–356, London, UK, UK,\n1993. Springer-Verlag. ISBN 3-540-57318-6.\n[51] Rajeev Alur, Costas Courcoubetis, Thomas A. Henzinger, and Pei-Hsin Ho. 
Hybrid\nautomata: An algorithmic approach to the specification and verification of hybrid\nsystems. In Grossman et al. [119], pages 209–229. ISBN 3-540-57318-6.\n[52] Xavier Nicollin, Alfredo Olivero, Joseph Sifakis, and Sergio Yovine. An approach\nto the description and analysis of hybrid systems. In Grossman et al. [119], pages\n149–178. ISBN 3-540-57318-6. doi:10.1007 /3-540-57318-6_28.\n[53] Lucio Tavernini. Di \u000berential automata and their discrete simulators. Non-Linear\nAnal. , 11(6):665–683, 1987. ISSN 0362-546X.\n[54] Jan A. Bergstra and C. A. Middelburg. Process algebra for hybrid systems. Theor.\nComput. Sci. , 335(2-3):215–280, 2005.\n[55] D. A. van Beek, Ka L. Man, Michel A. Reniers, J. E. Rooda, and Ramon R. H.\nSchi\u000belers. Syntax and consistent equation semantics of hybrid Chi. J. Log. Algebr.\nProgram. , 68(1-2):129–210, 2006.\n[56] René David and Hassane Alla. On hybrid petri nets. Discrete Event Dynamic\nSystems , 11(1-2):9–40, 2001. doi:10.1023 /A:1008330914786.\n[57] Franck Cassez and Kim Guldstrand Larsen. The impressive power of stopwatches.\nInCONCUR , pages 138–152, 2000.\n[58] Goran Frehse. PHA Ver: algorithmic verification of hybrid systems past HyTech.\nSTTT , 10(3):263–279, 2008.\n[59] Akash Deshpande, Aleks Göllü, and Pravin Varaiya. SHIFT: A formalism and a\nprogramming language for dynamic networks of hybrid automata. In Antsaklis\net al. [121], pages 113–133. ISBN 3-540-63358-8.\n[60] William C. Rounds. A spatial logic for the hybrid \u0019-calculus. In Rajeev Alur and\nGeorge J. Pappas, editors, HSCC , volume 2993 of LNCS , pages 508–522. Springer,\n2004. ISBN 3-540-21259-0. doi:10.1007 /978-3-540-24743-2_34.\n[61] Fabian Kratz, Oleg Sokolsky, George J. Pappas, and Insup Lee. R-Charon, a mod-\neling language for reconfigurable hybrid systems. In Hespanha and Tiwari [122],\npages 392–406. ISBN 3-540-33170-0.\n[62] José Meseguer and Raman Sharykin. 
Specification and analysis of distributed\nobject-based stochastic hybrid systems. In Hespanha and Tiwari [122], pages 460–\n475. ISBN 3-540-33170-0.\n[63] Seth Gilbert, Nancy Lynch, Sayan Mitra, and Tina Nolte. Self-stabilizing robot\nformations over unreliable networks. ACM Trans. Auton. Adapt. Syst. , 4(3):1–29,\n2009. ISSN 1556-4665.\n[64] André Platzer. Quantified di \u000berential dynamic logic for distributed hybrid systems.\nIn Anuj Dawar and Helmut Veith, editors, CSL, volume 6247 of LNCS , pages 469–\n483. Springer, 2010. ISBN 978-3-642-15204-7. doi:10.1007 /978-3-642-15205-\n4_36.\n[65] André Platzer. A complete axiomatization of quantified di \u000berential dynamic logic\nfor distributed hybrid systems. Logical Methods in Computer Science , 8(4):1–44,\n2012. doi:10.2168 /LMCS-8(4:17)2012. Special issue for selected papers from\nCSL’10.\n[66] Taylor T. Johnson and Sayan Mitra. A small model theorem for rectangular hybrid\nautomata networks. In Holger Giese and Grigore Rosu, editors, FORTE /FMOODS ,\nLNCS. Springer, 2012. To appear.\n[67] Nancy Lynch. Distributed Algorithms . Morgan Kaufmann, 1996.\n[68] Paul C. Attie and Nancy A. Lynch. Dynamic input /output automata: A formal\nmodel for dynamic systems. In Kim Guldstrand Larsen and Mogens Nielsen, ed-\nitors, CONCUR , volume 2154 of LNCS , pages 137–151. Springer, 2001. ISBN\n3-540-42497-0.\n[69] Mark H. A. Davis. Piecewise-deterministic Markov processes: A general class of\nnon-di \u000busion stochastic models. Journal of the Royal Statistical Society. Series B ,\n46(3):358–388, 1984.\n[70] Mrinal K. Ghosh, Aristotle Arapostathis, and Steven I. Marcus. Ergodic control\nof switching di \u000busions. SIAM J. Control Optim. , 35(6):1952–1988, 1997. ISSN\n0363-0129.\n[71] Jianghai Hu, John Lygeros, and Shankar Sastry. Towards a theory of stochastic\nhybrid systems. In Nancy A. Lynch and Bruce H. Krogh, editors, HSCC , volume\n1790 of LNCS , pages 160–173. Springer, 2000. 
ISBN 3-540-67259-1.\n[72] Manuela L. Bujorianu and John Lygeros. Towards a general theory of stochastic\nhybrid systems. In Henk A. P. Blom and John Lygeros, editors, Stochastic Hybrid\nSystems: Theory and Safety Critical Applications , volume 337 of Lecture Notes\nContr. Inf. , pages 3–30. Springer, 2006.\n[73] Christos G. Cassandras and John Lygeros, editors. Stochastic Hybrid Systems . CRC,\n2006. ISBN 978-0849390838.\n[74] Xenofon D. Koutsoukos and Derek Riley. Computational methods for verification\nof stochastic hybrid systems. IEEE T. Syst. Man, Cy. A , 38(2):385–396, 2008.\n[75] Martin Fränzle, Tino Teige, and Andreas Eggers. Engineering constraint solvers for\nautomatic analysis of probabilistic hybrid automata. J. Log. Algebr. Program. , 79\n(7):436–466, 2010.\n[76] André Platzer. Stochastic di \u000berential dynamic logic for stochastic hybrid pro-\ngrams. In Nikolaj Bjørner and Viorica Sofronie-Stokkermans, editors, CADE , vol-\nume 6803 of LNCS , pages 431–445. Springer, 2011. ISBN 978-3-642-22437-9.\ndoi:10.1007 /978-3-642-22438-6_34.\n[77] Ioannis Karatzas and Steven Shreve. Brownian Motion and Stochastic Calculus .\nGraduate Texts in Mathematics. Springer, 1991. ISBN 978-0387976556.\n[78] Bernt Øksendal. Stochastic Di \u000berential Equations: An Introduction with Applica-\ntions . Springer, 2007. ISBN 978-3540047582.\n[79] Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Di \u000berential\nEquations . Springer, New York, 2010. ISBN 978-3642081071.\n[80] Anil Nerode, Je \u000brey B. Remmel, and Alexander Yakhnis. Hybrid system games:\nExtraction of control automata with small topologies. In Antsaklis et al. [121],\npages 248–293. ISBN 3-540-63358-8. doi:10.1007 /BFb0031565.\n[81] Claire Tomlin, George J. Pappas, and Shankar Sastry. Conflict resolution for air traf-\nfic management: a study in multi-agent hybrid systems. IEEE T. Automat. Contr. ,\n43(4):509–521, 1998.\n[82] Thomas A. 
Henzinger, Benjamin Horowitz, and Rupak Majumdar. Rectangular hy-\nbrid games. In Jos C. M. Baeten and Sjouke Mauw, editors, CONCUR , volume 1664\nofLNCS , pages 320–335. Springer, 1999. ISBN 3-540-66425-4. doi:10.1007 /3-\n540-48320-9_23.\n[83] Claire J. Tomlin, John Lygeros, and Shankar Sastry. A game theoretic approach to\ncontroller design for hybrid systems. Proc. IEEE , 88(7):949–970, 2000.\n[84] S. Dharmatti and M. Ramaswamy. Zero-sum di \u000berential games involving hybrid\ncontrols. Journal of Optimization Theory and Applications , 128(1):75–102, 2006.\nISSN 0022-3239. doi:10.1007 /s10957-005-7558-x.\n[85] Patricia Bouyer, Thomas Brihaye, and Fabrice Chevalier. O-minimal hybrid reach-\nability games. Logical Methods in Computer Science , 6(1), 2010.\n[86] Vladimeros Vladimerou, Pavithra Prabhakar, Mahesh Viswanathan, and Geir E.\nDullerud. Specifications for decidable hybrid games. Theor. Comput. Sci. , 412(48):\n6770–6785, 2011. doi:10.1016 /j.tcs.2011.08.036.\n[87] Jan-David Quesel and André Platzer. Playing hybrid games with KeYmaera.\nIn Bernhard Gramlich, Dale Miller, and Ulrike Sattler, editors, IJCAR , vol-\nume 7364 of LNCS , pages 439–453. Springer, 2012. ISBN 978-3-642-31364-6.\ndoi:10.1007 /978-3-642-31365-3_34.\n[88] André Platzer. Di \u000berential dynamic logic for verifying parametric hybrid systems.\nIn Nicola Olivetti, editor, TABLEAUX , volume 4548 of LNCS , pages 216–232.\nSpringer, 2007. ISBN 978-3-540-73098-9. doi:10.1007 /978-3-540-73099-6_17.\n[89] Dexter Kozen. Kleene algebra with tests. ACM Trans. Program. Lang. Syst. , 19(3):\n427–443, 1997.\n[90] Ken Thompson. Regular expression search algorithm. Commun. ACM , 11(6):419–\n422, 1968. doi:10.1145 /363347.363387.\n[91] André Platzer. A temporal dynamic logic for verifying hybrid system invariants.\nIn Sergei N. Artëmov and Anil Nerode, editors, LFCS , volume 4514 of LNCS ,\npages 457–471. Springer, 2007. ISBN 978-3-540-72732-3. 
doi:10.1007 /978-3-\n540-72734-7_32.\n[92] Konstantin Sergeevich Sibirsky. Introduction to Topological Dynamics . Noordho \u000b,\nLeyden, 1975.\n[93] Michael S. Branicky. Studies in Hybrid Systems: Modeling, Analysis, and Con-\ntrol. PhD thesis, Dept. Elec. Eng. and Computer Sci., Massachusetts Inst. Technol.,\nCambridge, MA, 1995.\n[94] Vaughan R. Pratt. Semantical considerations on Floyd-Hoare logic. In FOCS , pages\n109–121. IEEE, 1976.\n[95] David Harel, Dexter Kozen, and Jerzy Tiuryn. Dynamic logic . MIT Press, 2000.\n[96] Alfred Tarski. A Decision Method for Elementary Algebra and Geometry . Univer-\nsity of California Press, Berkeley, 2nd edition, 1951.\n[97] Rudolf Carnap. Modalities and quantification. J. Symb. Log. , 11(2):33–64, 1946.\n[98] G. E. Hughes and M. J. Cresswell. A New Introduction to Modal Logic . Routledge,\n1996. ISBN 978-0415125994.\n[99] Robert W. Floyd. Assigning meanings to programs. In J. T. Schwartz, editor,\nMathematical Aspects of Computer Science, Proceedings of Symposia in Applied\nMathematics , volume 19, pages 19–32, Providence, 1967. AMS.\n[100] Charles Antony Richard Hoare. An axiomatic basis for computer programming.\nCommun. ACM , 12(10):576–580, 1969.\n[101] André Platzer. The structure of di \u000berential invariants and di \u000berential cut elimina-\ntion. Logical Methods in Computer Science , 8(4):1–38, 2012. ISSN 1860-5974.\ndoi:10.2168 /LMCS-8(4:16)2012.\n[102] André Platzer. A di \u000berential operator approach to equational di \u000berential invariants.\nIn Lennart Beringer and Amy Felty, editors, ITP, volume 7406 of LNCS , pages 28–\n48. Springer, 2012. ISBN 978-3-642-32346-1. doi:10.1007 /978-3-642-32347-8_3.\n[103] Oded Maler, Zohar Manna, and Amir Pnueli. From timed to hybrid systems. In J. W.\nde Bakker, Cornelis Huizing, Willem P. de Roever, and Grzegorz Rozenberg, edi-\ntors, REX Workshop , volume 600 of LNCS , pages 447–484. Springer, 1991. ISBN\n3-540-55564-1. doi:10.1007 /BFb0032003.\n[104] Anil Nerode. 
Logic and control. In S. Barry Cooper, Benedikt Löwe, and Andrea\nSorbi, editors, CiE, volume 4497 of LNCS , pages 585–597. Springer, 2007. ISBN\n978-3-540-73000-2.\n[105] Kurt Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und\nverwandter Systeme I. Mon. hefte Math. Phys. , 38:173–198, 1931.\n[106] Cristopher Moore. Unpredictability and undecidability in dynamical systems. Phys.\nRev. Lett. , 64:2354–2357, May 1990. doi:10.1103 /PhysRevLett.64.2354.\n[107] Michael S. Branicky. Universal computation and other capabilities of hybrid and\ncontinuous dynamical systems. Theor. Comput. Sci. , 138(1):67–100, 1995.\n[108] Daniel Silva Graça, Manuel L. Campagnolo, and Jorge Buescu. Computability with\npolynomial di \u000berential equations. Advances in Applied Mathematics , 2007.\n[109] Pieter Collins and Daniel S. Graça. E \u000bective computability of solutions of di \u000ber-\nential inclusions the ten thousand monkeys approach. J. UCS , 15(6):1162–1185,\n2009. doi:10.3217 /jucs-015-06-1162.\n[110] André Platzer. Logical analysis of hybrid systems: A complete answer to a com-\nplexity challenge. Journal of Automata, Languages and Combinatorics , 17(2-4):\n265–275, 2012.\n[111] André Platzer and Edmund M. Clarke. Computing di \u000berential invariants of hy-\nbrid systems as fixedpoints. In Aarti Gupta and Sharad Malik, editors, CAV, vol-\nume 5123 of LNCS , pages 176–189. Springer, 2008. ISBN 978-3-540-70543-7.\ndoi:10.1007 /978-3-540-70545-1_17.\n[112] André Platzer and Edmund M. Clarke. Computing di \u000berential invariants of hy-\nbrid systems as fixedpoints. Form. Methods Syst. Des. , 35(1):98–120, 2009. ISSN\n0925-9856. doi:10.1007 /s10703-009-0079-8. Special issue for selected papers from\nCA V’08.\n[113] André Platzer and Edmund M. Clarke. Formal verification of curved flight collision\navoidance maneuvers: A case study. In Ana Cavalcanti and Dennis Dams, editors,\nFM, volume 5850 of LNCS , pages 547–562. Springer, 2009. 
ISBN 978-3-642-\n05088-6. doi:10.1007 /978-3-642-05089-3_35.\n[114] André Platzer and Jan-David Quesel. European Train Control System: A case study\nin formal verification. In Karin Breitman and Ana Cavalcanti, editors, ICFEM ,\nvolume 5885 of LNCS , pages 246–265. Springer, 2009. ISBN 978-3-642-10372-8.\ndoi:10.1007 /978-3-642-10373-5_13.\n[115] Stefan Mitsch, Khalil Ghorbal, and André Platzer. On provably safe obstacle avoid-\nance for autonomous robotic ground vehicles. In Paul Newman, Dieter Fox, and\nDavid Hsu, editors, Robotics: Science and Systems , 2013. ISBN 978-981-07-3937-\n9.\n[116] Khalil Ghorbal and André Platzer. Characterizing algebraic invariants by di \u000ber-\nential radical invariants. In Erika Ábrahám and Klaus Havelund, editors, TACAS ,\nvolume 8413 of LNCS , pages 279–294. Springer, 2014. ISBN 978-3-642-54861-1.\ndoi:10.1007 /978-3-642-54862-8_19.\n[117] Khalil Ghorbal, Andrew Sogokon, and André Platzer. Invariance of conjunctions of\npolynomial equalities for algebraic di \u000berential equations. In Markus Müller-Olm\nand Helmut Seidl, editors, SAS, volume 8723 of LNCS , pages 151–167. Springer,\n2014. ISBN 978-3-319-10935-0. doi:10.1007 /978-3-319-10936-7_10.\n[118] André Platzer. Di \u000berential game logic. CoRR , abs/1408.1980, 2014.\n[119] Robert L. Grossman, Anil Nerode, Anders P. Ravn, and Hans Rischel, editors. Hy-\nbrid Systems , volume 736 of LNCS , 1993. Springer. ISBN 3-540-57318-6.\n[120] LICS. Proceedings of the 27th Annual ACM /IEEE Symposium on Logic in Com-\nputer Science, LICS 2012, Dubrovnik, Croatia, June 25–28, 2012 , 2012. IEEE.\nISBN 978-1-4673-2263-8.\n[121] Panos J. Antsaklis, Wolf Kohn, Anil Nerode, and Shankar Sastry, editors. Hybrid\nSystems IV , volume 1273 of LNCS , 1997. Springer. ISBN 3-540-63358-8.\n[122] João P. Hespanha and Ashish Tiwari, editors. 
Hybrid Systems: Computation and\nControl, 9th International Workshop, HSCC 2006, Santa Barbara, CA, USA, March\n29-31, 2006, Proceedings , volume 3927 of LNCS , 2006. Springer. ISBN 3-540-\n33170-0.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "7C1G0xOeW_AA",
"year": null,
"venue": "Bull. EATCS 2014",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/247/236",
"forum_link": "https://openreview.net/forum?id=7C1G0xOeW_AA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Thoughts on Paper Publishing in the Digital Age",
"authors": [
"Sanjeev Arora"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Thoughts on paper publishing\nin the digital age\u0003\nSanjeev Arora\nPrinceton University, USA\nhttp://www.cs.princeton.edu/~arora\nWhat role should journals and conferences play in the age of arxiv, twitter and\nother yet-to-be-invented digital wonders? Detecting among many colleagues a\ngeneral impatience with the status quo, I wrote this article to generate more public\ndiscussion. It is addressed solely to colleagues in theoretical computer science,\nnot other fields. I also don’t address legal issues such as copyright, costs, and free\naccess because these have been extensively discussed elsewhere.\nDespite its small size, theoretical CS has been remarkably successful. An in-\ncredible edifice of ideas was created together with an open culture that values the\nneed to address papers and talks to nonspecialists. This allows ideas and tech-\nniques to jump rapidly across subspecialities. We should think hard about how to\nbest continue that culture in the digital era. (Impact on promotion /tenure systems\nalso must be carefully weighed.)\nBelow I will survey the major proposed approaches and their pros and cons,\nand my own thoughts on them. Anybody who has investigated this topic quickly\ndiscovers that it su \u000bers from the boolean algebra obstacle : Given nbinary op-\ntions, you will find supporters —and good arguments—for all possible 2ncom-\nbinations. There is likely no universally accepted solution. I therefore propose\nstarting a specific public process for continuing this discussion.\n1 The three approaches\nLet me list the three broad approaches I have encountered.\nArxiv-only. Conceivably, both journals and conferences could today be replaced\nby arxiv and other repositories. Some proponents of this view point to instant “im-\npact metrics” (page views, number of tweets, etc.) this solution comes with, seem-\ningly tailormade for hiring and promotion cases. 
Machine learning researcher\n\u0003This article first appeared as a blog post on windowsontheory.org in October 2013.\nYann LeCun has taken this viewpoint to its logical end1, arguing for a “free-\nmarket” system whereby papers appear only in digital repositories and are subject\nto a distributed market model for refereeing /commenting. Independent consortia\nof reviewers (basically a new name for journals and conferences of the future??)\nwould decide to publish reviews of arxiv-ed papers, with or without the permis-\nsion of the papers’ authors. Recent articles in Nature also explored such ideas (see\nthe bibliography).\nJournals-only. At the other end of the spectrum, Lance Fortnow is troubled by\nthe proliferation of conferences and steady decline of journals. He thinks this\nhas reduced research quality. In fact, since digital repositories allow very rapid\ndissemination, he thinks conferences are useful only to meet others and catch up\non the latest research. He seeks to restore the importance of journals, lower the\nprestige of conferences by greatly raising acceptance rates, and have one flagship\nconference that everybody feels compelled to attend.\nFortnow’s views resonate with some colleagues I have talked to. Another point\nthey raise is the costs (money and time) associated with conference presentations,\nwhich perhaps penalize less established researchers. This may be true, but we\nneed more data on this point.\nHowever, I cannot help noticing though that the arguments in favor of journals\nsometimes overlook (perhaps unconsciously) the possibility that the journal-only\nvision can be pursued with many radically di \u000berent approaches: each of Math,\nPhysics, Biology, Economics etc. has a di \u000berent one. 
It is unclear a priori if all\nof these approaches are better than our conference-based approaches (anecdotally,\ncolleagues from some these other fields do like aspects of our top conferences).\nThe current hybrid approach, adapted as needed to the digital era. This is the\napproach I have ended up favoring. Its advantage is that it can borrow good ideas\nfrom all other proposals while causing the least upheaval.\n2 Is Arxiv enough?\nPhysics is one of several fields that have taken enthusiastically to e-publishing. A\nphysics paper on arxiv may have follow-up papers within weeks or even days. Not\nall papers may appear in journals, and many that do, appear in truncated form.\nWhile such arxiv-based models have plus points, one also sees dangers:(a)\nincentive to write shallow and incremental papers; (b) more priority disputes and\nthe temptation to publish sketchy ideas in order to later claim full or at least partial\ncredit; (c) lack of incentive for good reviewers to volunteer time reviewing enough\npapers (despite the attempted analogy to a free market in LeCun’s proposal).\n1http://yann.lecun.com/ex/pamphlets/publishing-models.html\nLet me elaborate on (b). The following already seems to be an axiom among\nmy younger colleagues: “If a result appears on arxiv, you have a few days to put\nup your independent manuscript. After that you can’t claim independent discov-\nery.” Some other disciplines have already moved on to a more cut-throat model:\n“Whoever gets to arxiv first wins.”\nSurely, this must incentivize hasty writing and incomprehensible papers. Or\nmaybe papers that even have errors, to be fixed in subsequent revisions. In the\npast we relied on conference committees and journals to adjudicate such disputes.\nHow would that happen in a distributed market? 
One can imagine systems with\nfeedback buttons, reliability ratings etc., but it seems a dubious method of doing\nscience.\nSeveral arguments have been cited in favor of the “free market reviewing” ap-\nproach: unknown researchers publishing on equal footing with established ones;\nthe superiority of “wisdom of the crowds”over the PC’s “groupthink.”But I sus-\npect in reality things may turn out less fair than the current system. Prominent\nresearchers with bigger megaphones (e.g., with more blog readers or twitter fol-\nlowers or friends willing to review their papers) will tend to benefit. Power centers\nwill inevitably form.\nBy contrast, conferences in theoretical CS—perhaps because each PC is a\nfresh set of 20-25 individuals—have a good track record of showcasing great work\nby grad students and postdocs. Papers by unknown authors may get awards while\nthose by Turing award winners may get rejected. (Aside: Remember, my com-\nments pertain merely to theoretical CS. Possibly the dynamic was di \u000berent in\nother fields, necessitating a switch to double-blind refereeing systems2)\nTo sum up, while very useful, digital repositories do not seem to me adequate\nreplacements for conferences and journals.\n3 Conference or Journals?\nHistorically, conferences came to dominate computer science because they al-\nlowed fast dissemination. Today, this goal can increasingly be met by other means,\nso how can we still justify conferences? Fortnow’s question is extremely pertinent.\nBelow, I list several reasons why conferences still make a lot of sense to me.\nMy focus here is on promoting better, more creative science—I worry less about\npromotion /tenure policies since they will quickly adjust to accommodate any new\ndissemination method we choose (including arxiv and twitter). 
Also, I apologize\nin advance for occasional forays into pop psychology.\n2For a quick introduction to arguments for and against double-blind reviewing systems see\nJohn Langford’s blog post at http: //hunch.net /?p=2656.\nA recommendation about which papers to read, plus 20-min introductory\ntalks\nThis is arguably the biggest plus of our top conferences: providing us a map to\nhelp us navigate through the sea of papers that get written (especially outside our\nsubspecialities). As a bonus, they force authors to write a 8-10-page description\nof this work, and record a 20-min introductory talk on it.\nThe 20-min talk has been good for our field both by propagating ideas across\nsubspecialities, and training young researchers to interact with nonspecialists. In\nfact, the talks are the main reason I attend STOC /FOCS (though the social net-\nworking is a nice bonus). Of course, I could stay at home and watch videotaped\ntalks but, really, who does that?\nA caveat: Inevitably, any recommendation system —conference or journals—\nwill end up shaping the research directions, and will be used for tenure and pro-\nmotions. This vests some power in the recommendation system. But at least in\nthe conference system the power rotates to a di \u000berent groups of 25 people each\ntime, instead of staying centralised with a (relatively fixed) journal board.\nEqually obviously, the quest for such recommendations engenders competi-\ntion. 
Others see this as problematic (notably, Oded Goldreich3), but I am more\nsanguine —humans (even toddlers) are naturals at competition, and also at coop-\neration.\nIncentive system for researchers to produce a substantial piece of work, and\nthen write it up, sort of comprehensibly, in 10 pages\nThe incentives in the arxiv model are quite the opposite—more frequent, insub-\nstantial, and hastily written works.\nIncentivizing substantial works is also the frequent justification for the move\nto journal-only model (as in Fortnow’s article) but we should realize that while\nthis may be true for some fields —maybe Math– it is not uniformly true across\nother journal-only fields.\nThe 10-page limit —archaic relic of the papyrus era— and the PC review\nmodel has led to our tradition of writing papers that are sort-of comprehensible\nto nonspecialists. Journals possibly reward a writing style geared to specialist\nreviewers.\n3Seehttp://www.wisdom.weizmann.ac.il/~oded/PDF/struggle.pdf\nIncentive system for good researchers to (sort-of) review lots of papers\nEvery field has to find its own balance between refereeing capacity and number\nof papers that get written4. \"Conferences vs Journals\" is too simplistic a way of\nthinking about this issue, since journals in di \u000berent fields di \u000ber a lot.\nJournals in the life sciences (and many fields of physics) have more stringent\npage limits and faster refereeing time than our conferences. V olume of papers is\nhigh, and quality of reviews is variable. Economics journals have less stringent\npage limits (but still more stringent than our journals), require many rounds of\nrewrites, and consequently, have huge backlogs. This can make it di \u000ecult for\nyoung researchers to get published—many have no publications when they finish\ntheir PhD. Also, accept rates of <10% in top journals force editors and reviewers\nto become risk-averse (according to my friends). 
Math journals seem to work well and have reasonable turnaround times, but then the system vests a lot more power and prestige with the editorial boards of top journals. Also, mathematicians seem to write fewer papers and are willing to spend a lot of time on refereeing.\nThe PC review system in theoretical CS is not perfect, but this semi-refereed model represents our own best effort to balance refereeing capacity with the number of papers. It is generally considered bad form to turn down requests from PC members to review papers. Anecdotal evidence suggests to me that the quality of reviews at our top conferences is not worse than in other fields with a similar volume of papers.\nAlso, many researchers seem happier to serve on a STOC/FOCS PC once every 3–4 years rather than on a journal board for 3–4 years. Maybe humans prefer shorter but more intense pain to a longer and less intense one. Or perhaps journal boards are less interesting because you end up handling papers in your own subspeciality, including those you already saw 2+ years ago.\nClearing point for deciding upon priority, novelty, correctness etc. of claimed results\nConferences can do it faster and better than journals in most cases (but the systems are different: a jury of 20–25 PC members + specialist subreviewers versus a jury of one editor and 3 specialist reviewers). The informal refereeing system at conferences at first glance seems to invite abuse but I can think of very few accepted papers at STOC/FOCS in the last 30 years that turned out to be seriously flawed (and often those were recognized as controversial when accepted).5\n4 Fortnow and others have suggested that conferences cause the field to produce too many papers. This thesis deserves wider discussion; I suspect tenure/promotion/grant policies play a bigger role.\n5 Again, my conclusions are not universally applicable. 
Colleagues in some other fields think that flawed papers slip too often into their top conferences. Even in theoretical CS, fields like cryptography seem to need more careful refereeing.\nA synchronization mechanism for our field\nIs it just my imagination, or do conference deadlines actually enhance collaborations and improve creativity? Half-imagined results get fleshed out as people get together in the months or weeks before the deadline (and I am not referring to caffeine-fueled late-night finishes, which I avoid). We need this synchronization to structure our busy lives, and neither arxiv nor journals provide it. If you don’t care for the human weaknesses this argument stands on (or believe in a platonic ideal of scientists engaged in heroic quests unfazed by mundane complications) I should mention Boaz Barak’s alternative explanation: sometimes correlated equilibria6 are superior to Nash equilibria.\n4 Needed: Process to rethink the conference system\nI believe that our conference system led to the current innovative culture in theoretical CS that allows radically new research directions to pop up every few years. It would be good to update these conferences for the digital era, while maintaining their best qualities.\nCurrently, the process for improving the conference system has been delegated to PC chairs and has resulted in some welcome experimentation: a day of workshops, poster session, recorded talks, no paper proceedings, better feedback for authors, etc.\nBut incremental experimentation has the drawback of sowing confusion and cynicism. For example: Is the 10-page conference version important or not? Should the number of accepted papers be increased or decreased? It is hard to know by experimenting for a year or two. Also, beware of the boolean algebra obstacle mentioned earlier. There is likely no universally accepted solution, and so any structure arrived at via experimentation will appear ad hoc and garner little respect.\nThe only feasible alternative seems to be a centrally computed solution. ACM SIGACT (maybe in partnership with EATCS) should create a group of, say, 7–9 people —suitably diverse in terms of seniority level, areas of expertise, gender, countries of residence etc.—that will suggest a future blueprint for theory conferences. This group should produce such a document after conducting an open discussion, while accepting public input —including fleshed-out proposals—via a moderated forum. Such a public process could have a rejuvenating effect on our field and its conferences.\n6 http://en.wikipedia.org/wiki/Correlated_equilibrium\nWhile no single solution will be perfect or universally loved, at least if it is found via the above open process, we will all come to appreciate the competing factors being balanced.\n4.1 My own thoughts for further improvement\nHere are some of my thoughts, but I remain open to other opinions.\n• Keep the conference format (say 12 pages, 11pt) —independent of whether or not the conference has a published proceeding. Why: (a) Promotes tighter, focused writing; (b) Improves our ability to keep abreast of research outside our specialities.\nBut to reduce work and give authors an incentive to produce a readable version, make the submission format identical to the published format (this is the norm at ML conferences).\nHaving done my share of grumbling about the conference format, on balance I think it is important for our field that 50-page arxived papers should be accompanied by shorter, more readable, versions. 
If you think 12 pages is too few, try vying for the privilege of publishing your result in Science and Nature, in 2–4 pages!\nUnfortunately, we seem to be sliding into a hodge-podge system which I attribute to poor communication and feedback by conference PCs (e.g., unwillingness to penalize bad writing).\n• Respond to the arxiv challenge.\nFor example, why delay the conference until the proceedings are ready? Have the submission deadline 3 months before the conference — this gives plenty of time for review and revision. The proceeding is less and less important, and could even be turned into a fully reviewed \"journal.\"\nA related idea (tested in fields such as databases and ML) combines some of the speed of arxiv with reliable timestamping. The conference has two or more submission deadlines a year—papers appear on the website as soon as they are accepted in the first cycle, and papers rejected in the first cycle cannot be resubmitted for another year. This spreads out a PC’s work over a longer period, whose pluses and minuses have to be studied.\n• Increase the number of acceptances moderately to accommodate the increase in the size of the field. (Beware though of Parkinson’s law7: submissions increase to fill all available refereeing capacity. So don’t agonize if acceptance rates stay below 30% despite this increase.) Do not treat all accepted papers equally: have some presented in plenary sessions, and the rest in 3 parallel sessions8.\n7 Parkinson’s law: “Work expands so as to fill the time available for its completion”.\nStretching FOCS beyond 3 days seems problematic since it is in the middle of the Fall term, but possibly STOC could be moved into June and lengthened to 4 or even 5 days? (Unfortunately this would bring it into even more direct conflict with ICALP than it currently is.)\n• Avoid the temptation to delegate most reviewing to students/postdocs and conference committee work to junior people. 
In the past this was done as a welcome way to empower junior people, but possibly this correction may have now gone too far.\n• Prepare a clear statement on how priority/correctness should be judged in the age of arxiv. Avoid a slide to the system “First to get on arxiv wins.”\n• It would be nice—perhaps independent of conferences—to have a forum for posting reviews/comments on theory papers. (Hints of LeCun’s ideas here. Tim Roughgarden’s phrasing is that he’d like to see “more radio channels.”) To be useful this forum must avoid the vicious smallness of blog comments. Requiring users to use verifiable identities should preclude the worst abuses (the system only needs to scale to a couple thousand users).\n• Poll senior researchers for changes that would attract them back to conferences. I know several who have stopped participating in STOC/FOCS (people 40 and older seem a distinct minority even on PCs in recent years, and certainly on the conference floor).\n• Last but most important: keep the various points made in this article (or any other set of principles discussed and agreed upon collectively) in mind when proposing new changes.\n5 Needed: less cynicism\nI must admit that when I started this thought process and discussions with colleagues, I started out somewhat skeptical of conferences. I ended up strongly in favor. I decided to be more proactive in combating the cynicism or pessimism I often see in such discussions; hence this essay.\n8 Interestingly, Fortnow and Goldreich have made similar suggestions. Thus I agree with some of their “remedies” even though I don’t agree with the “diagnoses.”\nPeople’s views tend to be colored by their last conference rejection. Typically, senior people complain about inexperienced PCs valuing technical sophistication over conceptual contributions. 
Young researchers wish to publish more to establish themselves and feel anxious about being judged by a power structure that they don’t fully understand or feel part of. Such anxieties have existed since prehistoric times—there is no way to do research and not have it be misjudged at times. Cynicism is not a good response. Stay involved!\n6 Bibliography/Further reading\n• Lance Fortnow. Time for computer science to grow up. Communications of the ACM 52(8):33–35, 2009.\n• Yann LeCun. A New Publishing Model in Computer Science. Pamphlet, available from the author’s web site9, that proposes a new publishing model based on an open repository and open (but anonymous) reviews.\n• The Future of Publishing. Nature 495(7442), 28 March 2013. This special issue of Nature is devoted to the topic of this article10.\n• W.S. Brown, J.R. Pierce and J.F. Traub. The future of scientific journals. A computer-based system will enable a subscriber to receive a personalized stream of papers. Science 158(3805):1153–1159, 1 December 1967.\nAcknowledgements I have had useful initial discussions with colleagues, such as Moses, Bernard, Avi, Mark, Zeev, Boaz and Ankur. I have also benefited from online feedback from blog comments and personal communications from many other colleagues.\n9 http://yann.lecun.com/ex/pamphlets/publishing-models.html\n10 For instance, see http://www.nature.com/nature/journal/v495/n7442/full/495437a.html.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "jUTw2zjtmu0",
"year": null,
"venue": "Bull. EATCS 2012",
"pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/46/42",
"forum_link": "https://openreview.net/forum?id=jUTw2zjtmu0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The Gödel Prize 2013. Call for Nominations",
"authors": [
"Sanjeev Arora"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "The Gödel Prize 2013\nCall for Nominations\nDeadline: January 11, 2013\nThe Gödel Prize for outstanding papers in the area of theoretical computer science is sponsored jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery, Special Interest Group on Algorithms and Computation Theory (ACM-SIGACT). The award is presented annually, with the presentation taking place alternately at the International Colloquium on Automata, Languages, and Programming (ICALP) and the ACM Symposium on Theory of Computing (STOC). The 21st prize will be awarded at the 45th ACM Symposium on Theory of Computing in Palo Alto in June 2013.\nThe Prize is named in honor of Kurt Gödel in recognition of his major contributions to mathematical logic and of his interest, discovered in a letter he wrote to John von Neumann shortly before von Neumann's death, in what has become the famous P versus NP question. The Prize includes an award of USD 5000.\nAWARD COMMITTEE: The winner of the Prize is selected by a committee of six members. The EATCS President and the SIGACT Chair each appoint three members to the committee, to serve staggered three-year terms. The committee is chaired alternately by representatives of EATCS and SIGACT. The 2013 Award Committee consists of Krzysztof R. Apt (CWI Amsterdam and University of Amsterdam), Sanjeev Arora, Chair (Princeton University), Josep Díaz (Universitat Politècnica de Catalunya), Giuseppe Italiano (Università di Roma Tor Vergata), Daniel Spielman (Yale University), and Éva Tardos (Cornell University).\nELIGIBILITY: The rule for the 2013 Prize is given below and supersedes any different interpretation of the parametric rule to be found on the websites of both SIGACT and EATCS.\nBEATCS no 108 EATCS MATTERS\n
Any research paper or series of papers by a single author or by a team of authors is deemed eligible if\n(i) the paper was published in a recognized refereed journal no later than December 31, 2012;\n(ii) the main results were not published (in either preliminary or final form) in a journal or conference proceedings before January 1st, 2000.\nThe research work nominated for the award should be in the area of theoretical computer science. The term theoretical computer science is meant to encompass, but is not restricted to, those areas covered by ICALP and STOC. Nominations are encouraged from the broadest spectrum of the theoretical computer science community so as to ensure that potential award-winning papers are not overlooked. The Award Committee shall have the ultimate authority to decide whether a particular paper is eligible for the Prize.\nNOMINATIONS: Nominations for the award should be submitted by email to the Award Committee Chair:\nSanjeev Arora [email protected]\nTo be considered, nominations for the 2013 Prize must be received by January 11, 2013.\nEvery member of the scientific community can make nominations. The Award Committee may actively solicit nominations. A nomination should contain a brief summary of the technical content of the paper(s) and a brief explanation of its significance. A printable copy of the research paper or papers should accompany the nomination. The nomination must state the date and venue of the first conference or workshop publication or state that no such publication has occurred. The work may be in any language. However, if it is not in English, a more extended summary written in English should be enclosed. To be considered for the award, the paper or series of papers must be recommended by at least two individuals, either in the form of two distinct nominations or one nomination including recommendations from two different people. 
Additional recommendations may also be enclosed and are generally useful. The entire package should be sent in a single email whenever possible. Those intending to submit a nomination are encouraged to contact the Award Committee Chair by email well in advance. The Chair will answer any questions about eligibility, encourage coordination among different nominators for the same paper(s), and also accept informal proposals of potential nominees or tentative offers to prepare formal nominations. The Award Committee maintains a folder of past nominations for eligible papers, but fresh nominations for the same papers (especially if they highlight new evidence of impact) are always welcome.\nThe Bulletin of the EATCS\nSELECTION PROCESS: The Award Committee is free to use any other sources of information in addition to the ones mentioned above. It may split the award among multiple papers or declare no winner at all. All matters relating to the selection process left unspecified in this document are left to the discretion of the Award Committee.\nPAST WINNERS:\n2012: Elias Koutsoupias and Christos Papadimitriou. Worst-case equilibria. Computer Science Review 3 (2): 65–69, 2009.\nTim Roughgarden, Éva Tardos. How bad is selfish routing? Journal of the ACM 49 (2): 236–259, 2002.\nNoam Nisan and Amir Ronen. Algorithmic Mechanism Design. Games and Economic Behavior 35 (1–2): 166–196, 2001.\n2011: Johan Håstad. Some optimal inapproximability results. Journal of the ACM 48 (2001), 798–859.\n2010: S. Arora. Polynomial-time approximation schemes for Euclidean TSP and other geometric problems. Journal of the ACM 45(5):753–782, 1998.\nJ.S.B. Mitchell. Guillotine subdivisions approximate polygonal subdivisions: A simple polynomial-time approximation scheme for geometric TSP, k-MST, and related problems. SIAM J. 
Computing 28(4):1298–1309, 1999.\n2009: Omer Reingold, Salil Vadhan, and Avi Wigderson, Entropy waves, the zig-zag graph product, and new constant-degree expanders, Annals of Mathematics, 155:157–187, 2002.\nOmer Reingold, Undirected connectivity in log-space, Journal of the ACM 55:1–24, 2008.\n2008: Daniel A. Spielman and Shang-Hua Teng, Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time, Journal of the ACM, 51:385–463, 2004.\n2007: Alexander A. Razborov and Steven Rudich, Natural Proofs, Journal of Computer and System Sciences, 55:24–35, 1997.\n2006: Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, PRIMES is in P, Annals of Mathematics, 160:1–13, 2004.\n2005: Noga Alon, Yossi Matias and Mario Szegedy, The space complexity of approximating the frequency moments, Journal of Computer and System Sciences, 58:137–147, 1999.\n2004: Maurice Herlihy and Nir Shavit, The Topological Structure of Asynchronous Computation, Journal of the ACM, 46:858–923, 1999.\nMichael Saks and Fotios Zaharoglou, Wait-Free k-Set Agreement Is Impossible: The Topology of Public Knowledge, SIAM Journal of Computing, 29:1449–1483, 2000.\n2003: Yoav Freund and Robert Schapire, A Decision Theoretic Generalization of On-Line Learning and an Application to Boosting, Journal of Computer and System Sciences 55:119–139, 1997.\n2002: Géraud Sénizergues, L(A)=L(B)? 
Decidability results from complete formal systems, Theoretical Computer Science 251:1–166, 2001.\n2001: Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, and Mario Szegedy, Interactive proofs and the hardness of approximating cliques, Journal of the ACM 43:268–292, 1996.\nSanjeev Arora and Shmuel Safra, Probabilistic checking of proofs: a new characterization of NP, Journal of the ACM 45:70–122, 1998.\nSanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy, Proof verification and the hardness of approximation problems, Journal of the ACM 45:501–555, 1998.\n2000: Moshe Y. Vardi and Pierre Wolper, Reasoning about infinite computations, Information and Computation 115:1–37, 1994.\n1999: Peter W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM Journal on Computing 26:1484–1509, 1997.\n1998: Seinosuke Toda, PP is as hard as the polynomial-time hierarchy, SIAM Journal on Computing 20:865–877, 1991.\n1997: Joseph Halpern and Yoram Moses, Knowledge and common knowledge in a distributed environment, Journal of the ACM 37:549–587, 1990.\n1996: Alistair Sinclair and Mark Jerrum, Approximate counting, uniform generation and rapidly mixing Markov chains, Information and Computation 82:93–133, 1989.\nMark Jerrum and Alistair Sinclair, Approximating the permanent, SIAM Journal on Computing 18:1149–1178, 1989.\n1995: Neil Immerman, Nondeterministic space is closed under complementation, SIAM Journal on Computing 17:935–938, 1988.\nRóbert Szelepcsényi, The method of forced enumeration for nondeterministic automata, Acta Informatica 26:279–284, 1988.\n1994: Johan Håstad, Almost optimal lower bounds for small depth circuits, Advances in Computing Research 5:143–170, 1989.\n1993: László Babai and Shlomo Moran, Arthur-Merlin games: a randomized proof system and a hierarchy of complexity classes, Journal of Computer and System 
Sciences 36:254–276, 1988.\nShafi Goldwasser, Silvio Micali and Charles Rackoff, The knowledge complexity of interactive proof systems, SIAM Journal on Computing 18:186–208, 1989.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "9mOKFdtapCw",
"year": null,
"venue": "Bull. EATCS 2006",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=9mOKFdtapCw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Sublinear-Time Algorithms",
"authors": [
"Artur Czumaj",
"Christian Sohler"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2Qk7tlYVWcI",
"year": null,
"venue": "Bull. EATCS 2005",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=2Qk7tlYVWcI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Recursion vs Replication in Process Calculi: Expressiveness",
"authors": [
"Catuscia Palamidessi",
"Frank D. Valencia"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "r1e20BO7YX5",
"year": null,
"venue": "Bull. EATCS 2006",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=r1e20BO7YX5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Languages for Concurrency",
"authors": [
"Catuscia Palamidessi",
"Frank D. Valencia"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "X3d5z4x_D5",
"year": null,
"venue": "Bull. EATCS 2008",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=X3d5z4x_D5",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Precedence Constraint Scheduling and Connections to Dimension Theory of Partial Orders",
"authors": [
"Christoph Ambühl",
"Monaldo Mastrolilli",
"Nikolaus Mutsanas",
"Ola Svensson"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "4Z2Jx3Exw53",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=4Z2Jx3Exw53",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer e1R7",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "U9gxnJUARd_",
"year": null,
"venue": null,
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=U9gxnJUARd_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Response to Reviewer E1Li",
"authors": [],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ktsKW_6yb6",
"year": null,
"venue": "ECAI 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=ktsKW_6yb6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "L0 Regularization based Fine-grained Neural Network Pruning Method",
"authors": [
"Qixin Xie",
"Chao Li",
"Boyu Diao",
"Zhulin An",
"Yongjun Xu"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "aw_BvWiaE6G",
"year": null,
"venue": "ECAI 2023",
"pdf_link": "https://ieeexplore.ieee.org/iel7/10193876/10193857/10193903.pdf",
"forum_link": "https://openreview.net/forum?id=aw_BvWiaE6G",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Compressed ECG Sensing Based Fast/Slow HR and Regular/Irregular Rhythm Recognition for Resource-Constrained Health Monitoring Devices",
"authors": [
"Jomole Varghese V",
"M. Sabarimalai Manikandan",
"Linga Reddy Cenkeramaddi"
],
"abstract": "By considering the resource-constrained affordable wearable or portable health monitoring devices, in this paper, we present a lightweight digital compressed ECG sensing with recognition of fast/slow and regular/irregular heartbeat patterns by using beat-to-beat intervals (BBIs) directly computed from compressed sensing (CS) measurements without ECG reconstruction process. For extracting BBIs, we presented a fast straightforward R-peak detection method in the CS domain without using sets of search-back detection rules unlike other methods. On the standard MIT-BIH arrhythmia database, the CS-ECG based R-peak detection method had an average accuracy of 99.47% with false positives of 381 beats and false negatives of 195 beats for a total of 109021 beats. The CS-ECG based HR classification method, with three classes of bradycardia, tachycardia and normal, had an accuracy of 96.75%, 91.18% and 99.02% based on the feature of number of BBIs and an accuracy of 95.64%, 82.35% and 98.81% based on the feature of average BBIs. The CS-ECG based regular/irregular rhythm (RIR) recognition method had a sensitivity (SE) of 93.17%, specificity (SP) of 96.87% and accuracy (ACC) of 94.83% based on the variation between the BBIs that are determined by comparing the difference between the successive BBIs with a predefined BBI threshold. The proposed method can reduce computational resources and energy reduction of 75% with the CS-based data reduction factor of 4 that has a great potential in energy-efficiency of battery operated wearable devices in long-term continuous health monitoring environments.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "B-vyIcqIVN",
"year": null,
"venue": "ECAI 2023",
"pdf_link": "https://ieeexplore.ieee.org/iel7/10193876/10193857/10194196.pdf",
"forum_link": "https://openreview.net/forum?id=B-vyIcqIVN",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Fast Quality-Aware AMDF Based Pulse Rate Estimation from Compressed PPG Measurements for Wearable Vital Signs Monitor",
"authors": [
"P. N. Sivaranjini",
"M. Sabarimalai Manikandan",
"Linga Reddy Cenkeramaddi"
],
"abstract": "Modern wearable or portable health monitoring devices are capable of photoplethysmogram (PPG) sensing, processing, analyzing, storing and transferring signal and parameters wirelessly but are generally energy constrained and have more false alarms under noisy PPG recordings. In this paper, we present computationally-efficient reliable pulse rate (PR) estimation in compressed sensing (CS) domain. The proposed CS-PPG based PR estimation method consists of measurement generation, high-pass filtering, average magnitude difference function (AMDF) features based signal quality assessment (SQA), and AMDF based quality-aware PR estimation. The proposed unified framework is evaluated using a wide variety of normal and pathological PPG signals taken from five standard databases. The proposed framework had an average sensitivity (SE) of 98.75% and specificity (SP) of 63.35%. Results show that the CS-PPG based quality-aware PR estimation method had a mean absolute error (MAE) of 3.1 ± 4.6%, Bland-Altman ratio (BAR) of 9.2% and root-mean-square error (RMSE) of 5.6 which are not only comparable with the results of the PR estimation method with the original PPG signals but also the proposed framework reduces 80% of the overall computational load. The proposed unified framework has great benefits in reducing processing time and energy consumption and thus can maximize battery lifetime of battery-operated health monitors.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kJ6U-j4ZGE",
"year": null,
"venue": "ECAI 2021",
"pdf_link": "https://ieeexplore.ieee.org/iel7/9514927/9515011/09515193.pdf",
"forum_link": "https://openreview.net/forum?id=kJ6U-j4ZGE",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Profiling consumers in a water distribution network using K-Means clustering and multiple pre-processing methods",
"authors": [
"Diana Arsene",
"Alexandru Predescu",
"Ciprian-Octavian Truica",
"Elena Simona Apostol",
"Mariana Mocanu",
"Costin Chiru"
],
"abstract": "Profiling consumers in a water distribution system is essential for achieving sustainability in terms of resource management and urban development. Unsupervised learning can provide data-driven decision support for evaluating the water demand patterns in a large network, while various pre-processing methods can be added to expand the level of detail in terms of consumer behavior. The K-Means clustering method is used on a dataset based on publicly available data collected from multiple households with an emphasis on the data processing pipeline and its influence on the resulting clusters. Seasonal decomposition is used to evaluate the weekly trends in the dataset, while data normalization provides an in-depth analysis of the patterns and relative variation in terms of consumer demand. The results show different perspectives on the consumer demand patterns which can provide additional details in terms of consumption (volume, pattern, variation).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "SqcZx1dG6p",
"year": null,
"venue": "ECAI 2022",
"pdf_link": "https://ieeexplore.ieee.org/iel7/9847306/9847307/09847435.pdf",
"forum_link": "https://openreview.net/forum?id=SqcZx1dG6p",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Consumer profiling using clustering methods for georeferenced decision support in a water distribution system",
"authors": [
"Diana Arsene",
"Alexandru Predescu",
"Ciprian-Octavian Truica",
"Elena Simona Apostol",
"Mariana Mocanu",
"Costin Chiru"
],
"abstract": "Discovering the habits of consumers is essential for effective decision support in smart water networks. While smart water meters can provide detailed consumption data for individual households, additional information can be extracted based on the geographical coordinates, to highlight the distribution of consumer behaviors within a given area. In this paper, multiple processing stages are used to evaluate the available data collected from a previous study. The OPTICS clustering method is used to cluster the data based on coordinates, while K-Means clustering is used to extract the consumer patterns for each identified zone. The standard deviation of the seasonal component is used to classify the resulting consumer behaviors from the least desirable to the most desirable, towards achieving more sustainable behaviors and operations from the perspective of water resource management and urban water infrastructure. The results are promising towards the development of georeferenced decision support systems for water resource management.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "h_yFDU1c2cu",
"year": null,
"venue": "ECAI 2019",
"pdf_link": "https://ieeexplore.ieee.org/iel7/9033451/9041922/09042003.pdf",
"forum_link": "https://openreview.net/forum?id=h_yFDU1c2cu",
"arxiv_id": null,
"doi": null
}
|
{
"title": "DREAM Principles and FAIR Metrics from the PORTAL-DOORS Project for the Semantic Web",
"authors": [
"Adam Craig",
"Adarsh Ambati",
"Shiladitya Dutta",
"Pooja Kowshik",
"Sathvik Nori",
"S. Koby Taswell",
"Qiyuan Wu",
"Carl Taswell"
],
"abstract": "Articles published in Scientific Data by Wilkinson et al. argued for the adoption of the Findable, Accessible, Interoperable, and Reusable (FAIR) principles of data management without citing any of the prior work published by Taswell. However, these principles were first proposed and described by Taswell in 2006 as the foundation for work on the PORTAL-DOORS Project (PDP) and the Nexus-PORTAL-DOORS-Scribe (NPDS) cyberinfrastructure, and have been published in numerous conference presentations, journal articles, and patents. This work on PDP and NPDS has been continuously available since 2007 from a publicly accessible web site at www.portaldoors.org, and discussed in person at conferences with several key authors of the Wilkinson et al. papers. Paraphrasing without citing the PDP and NPDS principles while renaming them as the FAIR principles raises questions about both the `FAIRness' and the fairness of the authors of the Wilkinson et al. papers. Promoting these principles with the use of the term `metrics', which are not metrics by definition of the term metric as used in most fields of science, also raises questions about their commitment to maintaining consistency of usage for basic terminology across different fields of science as should be expected for terms in ontology mapping with knowledge engineering for the semantic web. Therefore, in the present report, we clarify the origin of their FAIR principles by identifying our PDP and NPDS principles that constitute the historical precedent for their FAIR principles. Moreover, as the comprehensively summarizing phrase for all of our PDP and NPDS principles, we rename them the DREAM principles with the acronym DREAM for Discoverable Data with Reproducible Results for Equivalent Entities with Accessible Attributes and Manageable Metadata. 
Finally, we define numerically valid quantitative FAIR metrics to monitor and measure the DREAM principles from the perspective of the most important principle, ie, the Fair Acknowledgment of Information Records and Fair Attribution to Indexed Reports, for maintaining fair standards of citation in scholarly research and publishing.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "U5s6wuLjBzJ",
"year": null,
"venue": "ECAI 2018",
"pdf_link": "https://ieeexplore.ieee.org/iel7/8672396/8678929/08679008.pdf",
"forum_link": "https://openreview.net/forum?id=U5s6wuLjBzJ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Implementation of Resonant Current Controllers Using 16-bit, Fixed-point Digital Signal Controllers",
"authors": [
"Sergiu Oprea",
"Stefan George Rosu",
"Constantin Radoi",
"Adriana Florescu",
"Mihail Stefan Teodorescu"
],
"abstract": "The resonant controllers offer significant advantages for control of power converters that operate with time-vary reference signals. However, this kind of control is affected by the errors introduced by discretization and quantization and may require floating-point Digital Signal Processors (DSP) for proper implementation. This paper addresses the issues related to the implementation of resonant current control using low-cost, 16-bit, fixed-point DSPs for cost-sensitive applications like the low-power residential microgrids. The performance of the proposed resonant controller is evaluated using a single-phase inverter that can be part of a grid-connected Uninterruptible Power Supply (UPS).",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "1-pMLhKenxt",
"year": null,
"venue": "ECAI 2015",
"pdf_link": "https://ieeexplore.ieee.org/iel7/7297040/7301133/07301212.pdf",
"forum_link": "https://openreview.net/forum?id=1-pMLhKenxt",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the performance of an optimized NLMS algorithm",
"authors": [
"Constantin Paleologu",
"Silviu Ciochina",
"Razvan Caramalau",
"Jacob Benesty"
],
"abstract": "The recently proposed joint-optimized normalized least-mean-square (JO-NLMS) algorithm was developed in the context of a state variable model. Moreover, following the minimization of the system misalignment and using an iterative procedure for adjusting the system model parameter, this algorithm is able to achieve a proper compromise between the performance criteria (i.e., fast convergence/tracking and low misadjustment). In this paper, we present a performance analysis of the JO-NLMS algorithm, outlining some relations between its main parameters and indicating its good behavior.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "hlRNBLV9kM",
"year": null,
"venue": "ECAI 2019",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=hlRNBLV9kM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "L0 Regularization based Fine-grained Neural Network Pruning Method",
"authors": [
"Qixin Xie",
"Chao Li",
"Boyu Diao",
"Zhulin An",
"Yongjun Xu"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PmsIBW5XpCF",
"year": null,
"venue": "ECIR (2) 2023",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=PmsIBW5XpCF",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Domain Adaptation for Anomaly Detection on Heterogeneous Graphs in E-Commerce",
"authors": [
"Li Zheng",
"Zhao Li",
"Jun Gao",
"Zhenpeng Li",
"Jia Wu",
"Chuan Zhou"
],
"abstract": "Anomaly detection models have been the indispensable infrastructure of e-commerce platforms. However, existing anomaly detection models on e-commerce platforms face the challenges of “cold-start” and heterogeneous graphs which contain multiple types of nodes and edges. The scarcity of labeled anomalous training samples on heterogeneous graphs hinders the training of reliable models for anomaly detection. Although recent work has made great efforts on using domain adaptation to share knowledge between similar domains, none of them considers the problem of domain adaptation between heterogeneous graphs. To this end, we propose a Domain Adaptation method for heterogeneous GRaph Anomaly Detection in E-commerce (DAGrade). Specifically, DAGrade is designed as a domain adaptation approach to transfer our knowledge of anomalous patterns from label-rich source domains to target domains without labels. We apply a heterogeneous graph attention neural network to model complex heterogeneous graphs collected from e-commerce platforms and use an adversarial training strategy to ensure that the generated node vectors of each domain lay in the common vector space. Experiments on real-life datasets show that our method is capable of transferring knowledge across different domains and achieves satisfactory results for online deployment.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "iNu3rE2u-Bl",
"year": null,
"venue": "CoRR 2013",
"pdf_link": "http://arxiv.org/pdf/1311.5904v3",
"forum_link": "https://openreview.net/forum?id=iNu3rE2u-Bl",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The IceProd Framework: Distributed Data Processing for the IceCube Neutrino Observatory",
"authors": [
"Mark G. Aartsen",
"Rasha U. Abbasi",
"Markus Ackermann",
"Jenni Adams",
"Juan Antonio Aguilar Sánchez",
"Markus Ahlers",
"David Altmann",
"Carlos A. Argüelles Delgado",
"Jan Auffenberg",
"Xinhua Bai",
"Michael F. Baker",
"Steven W. Barwick",
"Volker Baum",
"Ryan Bay",
"James J. Beatty",
"Julia K. Becker Tjus",
"Karl-Heinz Becker",
"Segev BenZvi",
"Patrick Berghaus",
"David Berley",
"Elisa Bernardini",
"Anna Bernhard",
"David Z. Besson",
"G. Binder",
"Daniel Bindig",
"Martin Bissok",
"Erik Blaufuss",
"Jan Blumenthal",
"David J. Boersma",
"Christian Bohm",
"Debanjan Bose",
"Sebastian Böser",
"Olga Botner",
"Lionel Brayeur",
"Hans-Peter Bretz",
"Anthony M. Brown",
"Ronald Bruijn",
"James Casey",
"Martin Casier",
"Dmitry Chirkin",
"Asen Christov",
"Brian John Christy",
"Ken Clark",
"Lew Classen",
"Fabian Clevermann",
"Stefan Coenders",
"Shirit Cohen",
"Doug F. Cowen",
"Angel H. Cruz Silva",
"Matthias Danninger",
"Jacob Daughhetee",
"James C. Davis",
"Melanie Day",
"Catherine De Clercq",
"Sam De Ridder",
"Paolo Desiati",
"Krijn D. de Vries",
"Meike de With",
"Tyce DeYoung",
"Juan Carlos Díaz-Vélez",
"Matthew Dunkman",
"Ryan Eagan",
"Benjamin Eberhardt",
"Björn Eichmann",
"Jonathan Eisch",
"Sebastian Euler",
"Paul A. Evenson",
"Oladipo O. Fadiran",
"Ali R. Fazely",
"Anatoli Fedynitch",
"Jacob Feintzeig",
"Tom Feusels",
"Kirill Filimonov",
"Chad Finley",
"Tobias Fischer-Wasels",
"Samuel Flis",
"Anna Franckowiak",
"Katharina Frantzen",
"Tomasz Fuchs",
"Thomas K. Gaisser",
"Joseph S. Gallagher",
"Lisa Marie Gerhardt",
"Laura E. Gladstone",
"Thorsten Glüsenkamp",
"Azriel Goldschmidt",
"Geraldina Golup",
"Javier G. González",
"Jordan A. Goodman",
"Dariusz Góra",
"Dylan T. Grandmont",
"Darren Grant",
"Pavel Gretskov",
"John C. Groh",
"Andreas Groß",
"Chang Hyon Ha",
"Abd Al Karim Haj Ismail",
"Patrick Hallen",
"Allan Hallgren",
"Francis Halzen",
"Kael D. Hanson",
"Dustin Hebecker",
"David Heereman",
"Dirk Heinen",
"Klaus Helbing",
"Robert Eugene Hellauer III",
"Stephanie Virginia Hickford",
"Gary C. Hill",
"Kara D. Hoffman",
"Ruth Hoffmann",
"Andreas Homeier",
"Kotoyo Hoshina",
"Feifei Huang",
"Warren Huelsnitz",
"Per Olof Hulth",
"Klas Hultqvist",
"Shahid Hussain",
"Aya Ishihara",
"Emanuel Jacobi",
"John E. Jacobsen",
"Kai Jagielski",
"George S. Japaridze",
"Kyle Jero",
"Ola Jlelati",
"Basho Kaminsky",
"Alexander Kappes",
"Timo Karg",
"Albrecht Karle",
"Matthew Kauer",
"John Lawrence Kelley",
"Joanna Kiryluk",
"J. Kläs",
"Spencer R. Klein",
"Jan-Hendrik Köhne",
"Georges Kohnen",
"Hermann Kolanoski",
"Lutz Köpke",
"Claudio Kopper",
"Sandro Kopper",
"D. Jason Koskinen",
"Marek Kowalski",
"Mark Krasberg",
"Anna Kriesten",
"Kai Michael Krings",
"Gösta Kroll",
"Jan Kunnen",
"Naoko Kurahashi",
"Takao Kuwabara",
"Mathieu L. M. Labare",
"Hagar Landsman",
"Michael James Larson",
"Mariola Lesiak-Bzdak",
"Martin Leuermann",
"Julia Leute",
"Jan Lünemann",
"Oscar A. Macías-Ramírez",
"James Madsen",
"Giuliano Maggi",
"Reina Maruyama",
"Keiichi Mase",
"Howard S. Matis",
"Frank McNally",
"Kevin James Meagher",
"Martin Merck",
"Gonzalo Merino Arévalo",
"Thomas Meures",
"Sandra Miarecki",
"Eike Middell",
"Natalie Milke",
"John Lester Miller",
"Lars Mohrmann",
"Teresa Montaruli",
"Robert M. Morse",
"Rolf Nahnhauer",
"Uwe Naumann",
"Hans Niederhausen",
"Sarah C. Nowicki",
"David R. Nygren",
"Anna Obertacke",
"Sirin Odrowski",
"Alex Olivas",
"Ahmad Omairat",
"Aongus Starbuck Ó Murchadha",
"Larissa Paul",
"Joshua A. Pepper",
"Carlos Pérez de los Heros",
"Carl Pfendner",
"Damian Pieloth",
"Elisa Pinat",
"Jonas Posselt",
"P. Buford Price",
"Gerald T. Przybylski",
"Melissa Quinnan",
"Leif Rädel",
"Ian Rae",
"Mohamed Rameez",
"Katherine Rawlins",
"Peter Christian Redl",
"René Reimann",
"Elisa Resconi",
"Wolfgang Rhode",
"Mathieu Ribordy",
"Michael Richman",
"Benedikt Riedel",
"J. P. Rodrigues",
"Carsten Rott",
"Tim Ruhe",
"Bakhtiyar Ruzybayev",
"Dirk Ryckbosch",
"Sabine M. Saba",
"Heinz-Georg Sander",
"Juan Marcos Santander",
"Subir Sarkar",
"Kai Schatto",
"Florian Scheriau",
"Torsten Schmidt",
"Martin Schmitz",
"Sebastian Schoenen",
"Sebastian Schöneberg",
"Arne Schönwald",
"Anne Schukraft",
"Lukas Schulte",
"David Schultz",
"Olaf Schulz",
"David Seckel",
"Yolanda Sestayo de la Cerra",
"Surujhdeo Seunarine",
"Rezo Shanidze",
"Chris Sheremata",
"Miles W. E. Smith",
"Dennis Soldin",
"Glenn M. Spiczak",
"Christian Spiering",
"Michael Stamatikos",
"Todor Stanev",
"Nick A. Stanisha",
"Alexander Stasik",
"Thorsten Stezelberger",
"Robert G. Stokstad",
"Achim Stößl",
"Erik A. Strahler",
"Rickard Ström",
"Nora Linn Strotjohann",
"Gregory W. Sullivan",
"Henric Taavola",
"Ignacio Taboada",
"Alessio Tamburro",
"Andreas Tepe",
"Samvel Ter-Antonyan",
"Gordana Tesic",
"Serap Tilav",
"Patrick A. Toale",
"Moriah Natasha Tobin",
"Simona Toscano",
"Maria Tselengidou",
"Elisabeth Unger",
"Marcel Usner",
"Sofia Vallecorsa",
"Nick van Eijndhoven",
"Arne Van Overloop",
"Jakob van Santen",
"Markus Vehring",
"Markus Voge",
"Matthias Vraeghe",
"Christian Walck",
"Tilo Waldenmaier",
"Marius Wallraff",
"Christopher N. Weaver",
"Mark T. Wellons",
"Christopher H. Wendt",
"Stefan Westerhoff",
"Nathan Whitehorn",
"Klaus Wiebe",
"Christopher Wiebusch",
"Dawn R. Williams",
"Henrike Wissing",
"Martin Wolf",
"Terri R. Wood",
"Kurt Woschnagg",
"Donglian Xu",
"Xianwu Xu",
"Juan Pablo Yáñez Garza",
"Gaurang B. Yodh",
"Shigeru Yoshida",
"Pavel Zarzhitsky",
"Jan Ziemann",
"Simon Zierke",
"Marcel Zoll"
],
"abstract": "IceCube is a one-gigaton instrument located at the geographic South Pole, designed to detect cosmic neutrinos, iden- tify the particle nature of dark matter, and study high-energy neutrinos themselves. Simulation of the IceCube detector and processing of data require a significant amount of computational resources. IceProd is a distributed management system based on Python, XML-RPC and GridFTP. It is driven by a central database in order to coordinate and admin- ister production of simulations and processing of data produced by the IceCube detector. IceProd runs as a separate layer on top of other middleware and can take advantage of a variety of computing resources, including grids and batch systems such as CREAM, Condor, and PBS. This is accomplished by a set of dedicated daemons that process job submission in a coordinated fashion through the use of middleware plugins that serve to abstract the details of job submission and job management from the framework.",
"keywords": [],
"raw_extracted_content": "arXiv:1311.5904v3 [cs.DC] 22 Aug 2014The IceProd Framework:\nDistributedData Processingforthe IceCubeNeutrino Obser vatory\nM.G.Aartsenb, R. Abbasiac,M. Ackermannat, J. Adamso,J. A. Aguilarw, M.Ahlersac, D.Altmannv, C. Arguellesac,\nJ. Auffenbergac,X. Baiah,1,M.Bakerac,S. W. Barwicky, V. Baumae,R. Bayg, J.J. Beattyq,r, J.Becker Tjusj,\nK.-H.Beckeras,S. BenZviac,P. Berghausat, D.Berleyp, E.Bernardiniat, A.Bernhardag, D.Z.Bessonaa,G. Binderh,g,\nD. Bindigas, M.Bissoka, E.Blaufussp,J. Blumenthala,D. J.Boersmaar,C. Bohmak, D.Boseam, S. B¨ oserk,\nO.Botnerar,L.Brayeurm,H.-P.Bretzat,A.M.Browno,R.Bruijnz,J.Caseye,M.Casierm,D.Chirkinac,A.Christovw,\nB. Christyp, K.Clarkan, L.Classenv,F. Clevermannt, S.Coendersa,S. Cohenz,D. F. Cowenaq,ap, A.H. CruzSilvaat,\nM.Danningerak,J. Daughheteee, J. C. Davisq,M. Dayac,C. De Clercqm, S. DeRidderx, P.Desiatiac,∗,\nK.D. deVriesm,M. deWithi,T.DeYoungaq, J.C. D´ ıaz-V´ elezac,∗∗, M.Dunkmanaq, R. Eaganaq,B. Eberhardtae,\nB. Eichmannj, J. Eischac,S. Eulera,P. A.Evensonah,O. Fadiranac,∗,A. R.Fazelyf, A.Fedynitchj, J. Feintzeigac,\nT. Feuselsx,K. Filimonovg,C. Finleyak, T.Fischer-Waselsas,S. Flisak,A. Franckowiakk, K.Frantzent, T.Fuchst,\nT.K. Gaisserah,J. Gallagherab, L.Gerhardth,g, L.Gladstoneac, T.Gl¨ usenkampat,A. Goldschmidth,G. Golupm,\nJ. G.Gonzalezah,J. A. Goodmanp, D.G´ orav, D.T. Grandmontu,D. Grantu, P.Gretskova, J. C. Grohaq, A.Großag,\nC. Hah,g,A. Haj Ismailx, P.Hallena,A. Hallgrenar, F. Halzenac,K. Hansonl,D. Hebeckerk, D.Heeremanl,\nD. Heinena,K.Helbingas,R. Hellauerp,S. Hickfordo, G.C. Hillb, K.D. Hoffmanp, R. Hoffmannas,A. Homeierk,\nK.Hoshinaac,F. Huangaq,W. Huelsnitzp, P.O.Hulthak,K. Hultqvistak,S. Hussainah, A.Ishiharan,E. Jacobiat,\nJ.Jacobsenac,K. Jagielskia, G.S. Japaridzed,K. Jeroac,O. Jlelatix, B. Kaminskyat,A. Kappesv,T. Kargat, A.Karleac,\nM.Kauerac, J.L. Kelleyac, J. Kirylukal, J.Kl¨ asas, S.R. Kleinh,g,J.-H.K¨ ohnet,G. Kohnenaf, H.Kolanoskii,\nL.K¨ opkeae, C. Kopperac,S. Kopperas, D.J. 
Koskinens, M.Kowalskik,M.Krasbergac, A.Kriestena, K.Kringsa,\nG.Krollae, J. Kunnenm, N.Kurahashiac, T.Kuwabaraah,M. Labarex, H.Landsmanac,M. J. Larsonao,\nM.Lesiak-Bzdakal, M.Leuermanna, J. Leuteag,J. L¨ unemannae,O. Mac´ ıaso,J. Madsenaj,G. Maggim,\nR. Maruyamaac,K. Masen,H. S.Matish,F. McNallyac, K.Meagherp, M.Merckac,G.Merinoac,T.Meuresl,\nS. Miareckih,g,E.Middellat, N.Milket, J.Millerm, L.Mohrmannat, T.Montaruliw,2,R. Morseac,R. Nahnhauerat,\nU.Naumannas,H. Niederhausenal, S. C.Nowickiu, D.R. Nygrenh,A.Obertackeas, S. Odrowskiu, A.Olivasp,\nA.Omairatas,A.O’Murchadhal,L.Paula,J. A.Pepperao,C. P´ erezdelosHerosar, C.Pfendnerq,D.Pielotht, E.Pinatl,\nJ. Posseltas,P. B. Priceg,G.T. Przybylskih,M. Quinnanaq,L.R¨ adela, I.Raead,∗, M.Rameezw,K. Rawlinsc,P. Redlp,\nR.Reimanna, E.Resconiag, W. Rhodet, M.Ribordyz, M.Richmanp, B.Riedelac,J. P. Rodriguesac,C. Rottam,\nT. Ruhet, B. Ruzybayevah,D. Ryckboschx, S.M. Sabaj, H.-G.Sanderae,M. Santanderac,S. Sarkars,ai,K. Schattoae,\nF. Scheriaut, T.Schmidtp,M.Schmitzt,S. Schoenena,S. Sch¨ onebergj,A. Sch¨ onwaldat,A. Schukrafta,L.Schultek,\nD.Schultzac,∗,O.Schulzag,D.Seckelah,Y.Sestayoag,S.Seunarineaj,R.Shanidzeat,C.Sherematau,M.W.E.Smithaq,\nD.Soldinas,G. M.Spiczakaj,C. Spieringat,M. Stamatikosq,3,T.Stanevah, N.A. Stanishaaq,A. Stasikk,\nT.Stezelbergerh, R.G. Stokstadh, A.St¨ oßlat,E.A. Strahlerm,R. Str¨ omar,N. L.Strotjohannk,G.W. Sullivanp,\nH. Taavolaar, I.Taboadae, A.Tamburroah,A. Tepeas,S. Ter-Antonyanf,G. Teˇ si´ caq,S. Tilavah,P. A.Toaleao,\nM.N.Tobinac,S. Toscanoac,M.Tselengidouv, E.Ungerj,M.Usnerk,S. Vallecorsaw,N. vanEijndhovenm,\nA.VanOverloopx, J. vanSantenac,M. Vehringa,M.Vogek,M.Vraeghex,C. Walckak, T.Waldenmaieri,M.Wallraffa,\nCh. Weaverac, M.Wellonsac,C. Wendtac,S. Westerhoffac,N. Whitehornac,K. Wiebeae,C. H. Wiebuscha,\nD.R. Williamsao, H.Wissingp, M.Wolfak, T.R. Woodu, K.Woschnaggg,D. L.Xuao, X.W. Xuf, J. P.Yanezat,\nG.Yodhy,S. Yoshidan,P. Zarzhitskyao,J. Ziemannt, S. Zierkea, M.Zollak\naIII. 
Physikalisches Institut, RWTHAachen University, D-5 2056 Aachen, Germany\nbSchool ofChemistry &Physics, University of Adelaide, Adelaide SA,5005 Austral ia\n∗Corresponding author\n∗∗Principal corresponding author\nEmailaddresses: [email protected] (P.Desiati), [email protected] (J.C. D´ ıaz-V´ elez),\[email protected] (O. Fadiran), [email protected] (I.Rae),[email protected] (D. Schultz)\n1Physics Department, South Dakota School of Mines and Techno logy, Rapid City, SD57701, USA\n2also Sezione INFN,Dipartimento diFisica, I-70126, Bari, I taly\n3NASAGoddard Space Flight Center, Greenbelt, MD 20771, USA\nPreprintsubmitted to Journal ofParallel and Distributed C omputing August26,2014\ncDept. ofPhysics and Astronomy, University ofAlaska Anchor age, 3211 Providence Dr.,Anchorage, AK99508, USA\ndCTSPS,Clark-Atlanta University, Atlanta, GA30314, USA\neSchool ofPhysics and Center for Relativistic Astrophysics , Georgia Institute of Technology, Atlanta, GA30332, USA\nfDept. of Physics, Southern University, Baton Rouge, LA7081 3, USA\ngDept. of Physics, University ofCalifornia, Berkeley, CA 94 720, USA\nhLawrence Berkeley National Laboratory, Berkeley, CA94720 , USA\niInstitut f¨ ur Physik, Humboldt-Universit¨ at zu Berlin, D- 12489 Berlin, Germany\njFakult¨ at f¨ ur Physik &Astronomie, Ruhr-Universit¨ at Bochum,D-44780 Bochum, Ge rmany\nkPhysikalisches Institut, Universit¨ at Bonn, Nussallee 12 ,D-53115 Bonn,Germany\nlUniversit´ e Librede Bruxelles, Science Faculty CP230,B-1 050 Brussels, Belgium\nmVrije Universiteit Brussel, Dienst ELEM,B-1050 Brussels, Belgium\nnDept. ofPhysics, Chiba University, Chiba 263-8522, Japan\noDept. ofPhysics and Astronomy, University ofCanterbury, P rivate Bag 4800, Christchurch, New Zealand\npDept. ofPhysics, University ofMaryland, College Park,MD 2 0742, USA\nqDept. ofPhysics and Center for Cosmology and Astro-Particl e Physics, Ohio State University, Columbus, OH43210, USA\nrDept. 
of Astronomy, Ohio State University, Columbus, OH432 10, USA\nsNiels Bohr Institute, University ofCopenhagen, DK-2100 Co penhagen, Denmark\ntDept. ofPhysics, TU Dortmund University, D-44221 Dortmund , Germany\nuDept. of Physics, University of Alberta, Edmonton, Alberta , Canada T6G 2E1\nvErlangen Centre for Astroparticle Physics, Friedrich-Ale xander-Universit¨ at Erlangen-N¨ urnberg, D-91058 Erlang en, Germany\nwD´ epartement dephysique nucl´ eaire et corpusculaire, Uni versit´ e de Gen` eve, CH-1211 Gen` eve, Switzerland\nxDept. ofPhysics and Astronomy, University ofGent, B-9000 G ent, Belgium\nyDept. ofPhysics and Astronomy, University ofCalifornia, I rvine, CA92697, USA\nzLaboratory for High EnergyPhysics, ´Ecole Polytechnique F´ ed´ erale, CH-1015 Lausanne, Switze rland\naaDept. ofPhysics and Astronomy, University ofKansas, Lawre nce, KS 66045, USA\nabDept. ofAstronomy, University of Wisconsin, Madison, WI 53 706, USA\nacDept. of Physics and Wisconsin IceCube Particle Astrophysi cs Center, University ofWisconsin, Madison, WI 53706, USA\nadDept. of Computer Science, University ofWisconsin, Madiso n, WI 53706, USA\naeInstitute of Physics, University of Mainz, Staudinger Weg 7 , D-55099 Mainz, Germany\nafUniversit´ e de Mons,7000 Mons, Belgium\nagT.U.Munich, D-85748 Garching, Germany\nahBartol Research Institute and Dept. ofPhysics and Astronom y, University ofDelaware, Newark, DE19716, USA\naiDept. of Physics, University of Oxford, 1Keble Road,Oxford OX1 3NP,UK\najDept. ofPhysics, University ofWisconsin, River Falls, WI 5 4022, USA\nakOskar Klein Centre and Dept. of Physics, Stockholm Universi ty, SE-10691 Stockholm, Sweden\nalDept. ofPhysics and Astronomy, Stony BrookUniversity, Sto ny Brook,NY 11794-3800, USA\namDept. ofPhysics, Sungkyunkwan University, Suwon 440-746, Korea\nanDept. of Physics, University of Toronto, Toronto, Ontario, Canada, M5S 1A7\naoDept. ofPhysics and Astronomy, University ofAlabama, Tusc aloosa, AL35487, USA\napDept. 
of Astronomyand Astrophysics, Pennsylvania State Un iversity, University Park,PA16802, USA\naqDept. ofPhysics, Pennsylvania State University, Universi ty Park,PA16802, USA\narDept. ofPhysics and Astronomy, Uppsala University, Box516 , S-75120 Uppsala, Sweden\nasDept. ofPhysics, University ofWuppertal, D-42119 Wuppert al, Germany\natDESY,D-15735 Zeuthen, Germany\nAbstract\nIceCube is a one-gigaton instrument located at the geograph ic South Pole, designed to detect cosmic neutrinos,\nidentify the particle nature of dark matter, and study high- energy neutrinos themselves. Simulation of the IceCube\ndetectorandprocessingof data requirea significant amount of computationalresources. This paperpresentsthe first\ndetaileddescriptionofIceProd,alightweightdistribute dmanagementsystemdesignedtomeettheserequirements. It\nisdrivenbyacentraldatabaseinordertomanagemassproduc tionofsimulationsandanalysisofdataproducedbythe\nIceCubedetector. IceProdruns as a separate layer on top of o ther middlewareand can take advantageof a variety of\ncomputingresources,includinggridsandbatchsystemssuc hasCREAM,HTCondor,andPBS.Thisisaccomplished\nby a set of dedicated daemons that process job submission in a coordinated fashion through the use of middleware\npluginsthatservetoabstractthedetailsofjobsubmission andjobmanagementfromtheframework.\nKeywords: Data Management,GridComputing,Monitoring,Distributed Computing\n2\n1. Introduction\nLargeexperimentalcollaborationsoften need to pro-\nduce extensive volumes of computationally intensive\nMonte Carlo simulations and process vast amounts of\ndata. These tasks are usually farmed out to large com-\nputingclustersorgrids. Forsuchlargedatasets,itisim-\nportant to be able to document details associated with\neachtask,suchassoftwareversionsandparameterslike\nthe pseudo-random number generator seeds used for\neachdataset. 
Individual members of such collaborations might have access to modest computational resources that need to be coordinated for production. Such computational resources could also potentially be pooled in order to provide a single, more powerful, and more productive system that can be used by the entire collaboration. This article describes the design of a software package meant to address all of these concerns. It provides a simple way to coordinate processing and storage of large datasets by integrating grids and small clusters.

1.1. The IceCube Detector

The IceCube detector shown in Figure 1 is located at the geographic South Pole and was completed at the end of 2010 [1, 2]. It consists of 5160 optical sensors buried between 1450 and 2450 meters below the surface of the South Pole ice sheet and is designed to detect interactions of neutrinos of astrophysical origin [1]. However, it is also sensitive to downward-going highly energetic muons and neutrinos produced in cosmic-ray-induced air showers. IceCube records ∼10^10 cosmic-ray events per year. The cosmic-ray-induced muons outnumber neutrino-induced events (including ones from atmospheric origin) by about 500,000:1. They represent a background for most IceCube analyses and are filtered prior to transfer to the data processing center in the Northern Hemisphere. Filtering at the data collection source is required because of bandwidth limitations on the satellite connection between the detector and the processing location [3]. About 100 GB of data from the IceCube detector is transferred to the main data storage facility daily. In order to facilitate record keeping, the data is divided into runs, and each run is further subdivided into multiple files. The size of each file is dictated by what is considered optimal for storage and access. Each run typically consists of hundreds of files, resulting in ∼400,000 files for each year of detector operation.
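As a back-of-envelope cross-check added editorially (not from the paper), the ∼8 h run length stated in Table 1, the "hundreds of files" per run, and Table 1's Level 1 per-run processing time are mutually consistent, assuming near-continuous detector operation:

```python
# Editorial consistency check; assumes ~24 h/day detector uptime.
HOURS_PER_RUN = 8                         # 'approximately eight hours of livetime'
runs_per_year = 365 * 24 / HOURS_PER_RUN  # ~1095 runs/year
files_per_run = 400_000 / runs_per_year   # matches 'hundreds of files' per run
level1_hours = 2400 * runs_per_year       # vs. 2.6e6 CPU-hours/year in Table 1
print(round(runs_per_year), round(files_per_run), f'{level1_hours:.2e}')
# → 1095 365 2.63e+06
```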
Once the data has been transferred, additional, more computationally-intensive event reconstructions are performed and the data is filtered to select events for various analyses. The computing requirements for the various levels of data processing are shown in Table 1. In order to develop event reconstructions, perform analyses, and understand systematic uncertainties, physicists require statistics from Monte Carlo simulations that are comparable to the data collected by the detector. This requires thousands of years of CPU processing time, as can be seen from Table 2.

Table 1: Data processing demands. Data is filtered on 400 cores at the South Pole using loose selection criteria to reduce volume by a factor of 10 before satellite transfer to the Northern Hemisphere (Level 1). Once in the North, more computationally intensive event reconstructions are performed in order to further reduce background contamination (Level 2). Further event selections are made for each analysis channel (Level 3). Each run is equivalent to approximately eight hours of detector livetime and the processing time is based on a 2.8 GHz core.

Filter    Processing time/run    Total per year
Level 1   2400 h                 2.6×10^6 h
Level 2   9500 h                 1.0×10^7 h
Level 3   15 h                   1.6×10^4 h

Table 2: Runtime of various Monte Carlo simulations of background cosmic-ray shower events and neutrino signal with different energy distributions. The median energy (in GeV; 1 GeV = 10^9 electronvolts) is based on the distribution of events that trigger the detector. The number of events reflects the typical per-year requirements for IceCube analyses.

Simulation    Med. energy      t/event    Events
Air showers   1.2×10^4 GeV     5 ms       ∼10^14
Neutrinos     3.9×10^6 GeV     316 ms     ∼10^8
Neutrinos     8.1×10^1 GeV     53 ms      ∼10^9

1.2. IceCube Computing Resources

The IceCube collaboration is comprised of 43 research institutions from Europe, North America, Japan, Australia, and New Zealand. Members of the collaboration have access to 25 different computing clusters and grids in Europe, Japan, Canada and the U.S.
These range from small computer farms of 30 nodes to large grids, such as the European Grid Infrastructure (EGI), Swedish Grid Initiative (SweGrid), Canada's WestGrid and the Open Science Grid (OSG), that may each have thousands of computing nodes. The total number of nodes available to IceCube member institutions varies with time since much of our use is opportunistic and availability depends on the usage by other projects and experiments. In total, IceCube simulation has run on more than 11,000 distinct multicore computing nodes. On average, IceCube simulation production has run concurrently on ∼4,000 cores at any given time since deployment, and it is anticipated to run on ∼5,000 cores simultaneously during upcoming productions.

[Figure 1: The IceCube detector: the dotted lines at the bottom represent the instrumented portion of the ice. The circles on the top surface represent IceTop, a surface air-shower subdetector.]

2. IceProd

The IceProd framework is a software package developed for IceCube with the goal of managing productions across distributed systems and pooling together isolated computing resources that are scattered across member institutions of the Collaboration and beyond. It consists of a central database and a set of daemons that are responsible for the management of grid jobs and data handling through the use of existing grid technology and network protocols.

IceProd makes job scripting easier and sharing productions more efficient. In many ways it is similar to PANDA Grid, the analysis framework for the PANDA experiment [4], in that both tools are distributed systems based on a central database and an interface to local batch systems. Unlike PANDA Grid, which depends heavily on AliEn, the grid middleware for the ALICE experiment [5], and on the ROOT analysis framework [6], IceProd was built in-house with minimal software requirements and is not dependent on any particular middleware or analysis framework.
It is designed to run completely in user space with no administrative access, allowing greater flexibility in installation. IceProd also includes a built-in monitoring system with no dependencies on any external tools for this purpose. These properties make IceProd a very lightweight yet powerful tool and give it a greater scope beyond IceCube-specific applications.

The software package includes a set of libraries, executables and daemons that communicate with the central database and coordinate to share responsibility for the completion of tasks. The details of job submission and management in different grid environments are abstracted through the use of plugin modules that will be discussed in Section 3.2.1.

IceProd can be used to integrate an arbitrary number of sites including clusters and grids. It is, however, not a replacement for other cluster and grid management tools or any other middleware. Instead, it runs on top of these as a separate layer providing additional functionality. IceProd fills a gap between the user or production manager and the powerful middleware and batch system tools available on computing clusters and grids.

Many of the existing middleware tools, including Condor-C, Globus and CREAM, make it possible to interface any number of computing clusters into a larger pool. However, most of these tools need to be installed and configured by system administrators and, in some cases, customization for general purpose applications is not feasible. In contrast to most of these applications, IceProd runs at the user level and does not require administrator privileges. This makes it possible for individual users to build large production systems by pooling small computational resources together.

Security and data integrity are concerns in any software architecture that depends heavily on communication through the Internet.
IceProd includes features aimed at minimizing security and data corruption risks. Security and data integrity are addressed in Section 3.8.

The IceProd client provides a graphical user interface (GUI) for configuring simulations and submitting jobs through a "production server." It provides a method for recording all the software versions, physics parameters, system settings, and other steering parameters associated with a job in a central production database. IceProd also includes a web interface for visualization and live monitoring of datasets. Details about the GUI client and a text-based client are discussed in Section 3.5.

3. Design Elements of IceProd

The IceProd software package can be logically divided into the following components or software libraries:

• iceprod-core — a set of modules and libraries of common use throughout IceProd.
• iceprod-server — a collection of daemons and libraries to manage and schedule job submission and monitoring.
• iceprod-modules — a collection of predefined classes that provide an interface between IceProd and an arbitrary task to be performed on a computing node, as defined in Section 3.3.
• iceprod-client — a client (both graphical and text) that can download, edit, and submit dataset steering files to be processed.
• A database that stores configured parameters, libraries (including version information), job information, and performance statistics.
• A web application for monitoring and controlling dataset processing.

These components are described in further detail in the following sections.

3.1. IceProd Core Package

The iceprod-core package contains modules and libraries common to all other IceProd packages. These include classes and methods for writing and parsing XML files and transporting data. The classes that define job execution on a host are contained in this package. The iceprod-core also includes an interpreter (Section 3.1.3) for a simple scripting language that provides some flexibility for parsing XML steering files.
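As a minimal editorial illustration of reading such XML steering files with the standard library (the element and attribute names below are hypothetical stand-ins; IceProd's actual schema is not reproduced in this text):

```python
# Minimal sketch of parsing an IceProd-style XML steering file.
# Element/attribute names are illustrative assumptions, not the real schema.
import xml.etree.ElementTree as ET

steering_xml = '''
<configuration>
  <steering name='nevents' value='1000'/>
  <tray name='generate'>
    <module name='generator'>
      <parameter name='seed' value='$eval($args(procnum) + 1)'/>
    </module>
  </tray>
</configuration>
'''

root = ET.fromstring(steering_xml)
# Globally accessible user-defined parameters (the 'header section').
params = {s.get('name'): s.get('value') for s in root.iter('steering')}
# Each tray chains configurable modules with their parameters.
trays = [t.get('name') for t in root.iter('tray')]
print(params, trays)
```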
TheJEP\nOne of the complications of operating on heteroge-\nneous systems is the diversity of architectures, operat-\ning systems, and compilers. IceProd uses HTCondor’s\nNMI-Metronomebuild and test system [7] for building\ntheIceCubesoftwareonavarietyofplatformsandstor-\ning the built packages on a server. As part of the man-\nagement of each job, IceProd submits a Job Execution\nPilot (JEP) to the cluster /grid queue. This script deter-\nmineswhat platforma job is runningon and,after con-\ntacting the monitoring server, which software package\nto downloadand execute. Duringruntime, the JEP per-\nforms status updates through the monitoring server via\nremote procedure calls using XML-RPC [8]. This in-\nformation is updated on the database and is displayed\non the monitoring web interface. Upon completion,\nthe JEP removes temporary files and directories cre-\nated for the job. Depending on the configuration, it\nwill also cache a copy of the software used, making\nit available for future JEPs. When caching is enabled,\nanMD5checksumisperformedonthecachedsoftware\nandcomparedtowhatisstoredontheserverinorderto\navoidusingcorruptedoroutdatedsoftware.\nJobs can fail under many circumstances. These fail-\nures include failed submissions due to transient sys-\ntem problems and execution failures due to problems\nwith the execution host. At a higher level, errors spe-\ncific to IceProd include communication problems with\nthe monitoring daemon or the data repository. In order\nto account for possible transient errors, the design of\nIceProdincludesasetofstatesthroughwhichajobwill\ntransitioninordertoguaranteesuccessfulcompletionofWAITING QUEUEING\nRESETQUEUED\nFalsePROCESSINGTrue\nFalse\nok?\nok?True\nMove data to diskFalserequeueok?\nTrue\nCOPIEDERROR\nCLEANING OKSubmit\nMax. time\n reached\nSUSPENDEDCLEANINGStart\nFigure 2: State diagram for the JEP. Each of the non-\nerror states through which a job passes includes a con-\nfigurable timeout. 
The purpose of this timeout is to account for any communication errors that may have prevented a job from setting its status correctly.

a well-configured job. The state diagram for an IceProd job is depicted in Figure 2.

3.1.2. XML Job Description

In the context of this document, a dataset is defined as a collection of jobs that share a basic set of scripts and software but whose input parameters depend on the ID of each individual job. A configuration or steering file describes the tasks to be executed for an entire dataset. IceProd steering files are XML documents with a defined schema. These steering files include information about the specific software versions used for each of the sections, known as trays (a term borrowed from IceTray, the C++ software framework used by the IceCube Collaboration [9]). An IceProd tray represents an instance of an environment corresponding to a set of libraries and executables and a chain of configurable modules with corresponding parameters and input files needed for the job. In addition, there is a header section for user-defined parameters and expressions that are globally accessible by different modules.

3.1.3. IceProd XML Expressions

A limited programming language was developed in order to allow more scripting flexibility that depends on runtime parameters such as job ID, run ID, and dataset ID. This lightweight, embedded, domain-specific language (DSL) allows for a single XML job description to be applied to an entire dataset following an SPMD (single process, multiple data) paradigm. It is powerful enough to give some flexibility but sufficiently restrictive to limit abuse.
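The cached-software check described in Section 3.1.1 amounts to comparing a locally computed MD5 digest against the digest published by the server. The following is a minimal sketch of that idea; the function names are illustrative, not IceProd's actual API.

```python
import hashlib
import os


def md5_digest(path, chunk_size=65536):
    """Compute the MD5 checksum of a file, reading in fixed-size chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()


def cached_software_is_valid(cache_path, server_digest):
    """Accept the cached software tarball only if it exists and its MD5
    matches the digest stored on the server; otherwise a JEP would fall
    back to downloading a fresh copy."""
    if not os.path.exists(cache_path):
        return False
    return md5_digest(cache_path) == server_digest
```

Chunked reading keeps memory use constant even for large software tarballs.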
Examples of valid expressions include the following:

• $args(<var>) — a command line argument passed to the job (such as job ID or dataset ID).
• $steering(<var>) — a user-defined variable.
• $system(<var>) — a system-specific parameter defined by the server.
• $eval(<expr>) — a mathematical or logical expression (in Python).
• $sprintf(<format>,<list>) — string formatting.
• $choice(<list>) — random choice of an element from the list.

The evaluation of such expressions is recursive and allows for some complexity. However, there are limitations in place that prevent abuse of this feature. As an example, $eval() statements prohibit such things as loops and import statements that would allow the user to write an entire program within an expression. There is also a limit on the number of recursions in order to prevent closed loops in recursive statements.

3.2. IceProd Server

The iceprod-server package is comprised of four daemons and their respective libraries:

1. soaptray — an HTTP server that receives client XML-RPC requests for scheduling jobs and steering information, which are then uploaded to the database.
2. soapqueue — a daemon that queries the database for available tasks to be submitted to a particular cluster or grid. This daemon is also responsible for submitting jobs to the cluster or grid through a set of plugin classes.
3. soapmon — a monitoring HTTP server that receives XML-RPC updates from jobs during execution and performs status updates to the database.
4. soapdh — a data handling/garbage collection daemon that removes temporary files and performs any post-processing tasks.

(The prefix soap is used for historical reasons: the original implementation of IceProd relied on SOAP for remote procedure calls. This was replaced by XML-RPC, which has better support in Python.)

There are two modes of operation. The first is an unmonitored mode in which jobs are simply sent to the queue of a particular system.
This mode provides a tool for scheduling jobs that don't need to be recorded and does not require a database. In the second mode, all parameters are stored in a database that also tracks the progress of each job. The soapqueue daemon running at each of the participating sites periodically queries the database to check if any tasks have been assigned to it. It then downloads the steering configuration and submits a given number of jobs to the cluster or grid where it is running. The number of jobs that IceProd maintains in the queue at each site can be configured individually according to the specifics of each cluster, including the size of the cluster and local queuing policies. Figure 3 is a graphical representation that describes the interrelation of these daemons. The state diagram in Figure 4 illustrates the role of the daemons in dataset submission, while Figure 5 illustrates the flow of information through the various protocols.

Figure 4: State diagram of the queuing algorithm. The iceprod-client sends requests to the soaptray server, which then loads the information to the database (in production mode) or directly submits jobs to the cluster (in unmonitored mode). The soapqueue daemons periodically query the database for pending requests and handle job submission in the local cluster.

3.2.1. IceProd Server Plugins

In order to abstract the process of job submission from the framework for the various types of systems, IceProd defines a Grid base class that provides an interface for queuing jobs. The Grid base class interface includes a set of methods for queuing and removing jobs, performing status checks, and setting attributes such as job priority and maximum allowed wall time and job requirements such as disk space and memory usage. The

Figure 3: Network diagram of IceProd system.
The IceProd clients and JEPs communicate with iceprod-server modules via XML-RPC. Database calls are restricted to iceprod-server modules. Queueing daemons called soapqueue are installed at each site and periodically query the database for pending job requests. The soapmon server receives monitoring updates from the jobs. An instance of soapdh handles garbage collection and any post-processing tasks after job completion.

Figure 5: Data flow for job submission, monitoring, and removal. Communication between server instances (labeled "soap*") is handled through a database. Client/server communication and monitoring updates are handled via XML-RPC. Interaction with the grid or cluster (e.g., HTCondor, PBS, SGE, CREAM, gLite) is handled through a set of plugin modules and depends on the specifics of the system.

set of methods defined by this base class include but are not limited to:

• WriteConfig: write protocol-specific submission scripts (i.e., a JDL job description file in the case of CREAM or gLite, or a shell script with the appropriate PBS/SGE headers).
• Submit: submit jobs and record the job ID in the local queue.
• CheckJobStatus: query job status from the queue.
• Remove: cancel/abort a job.
• CleanQ: remove any orphan jobs that might be left in the queue.

The actual implementation of these methods is done by a set of plugin subclasses that launch the corresponding commands or library calls, as the case may be.
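A skeletal illustration of this plugin pattern follows. The method names mirror the list above, but the bodies are purely illustrative; the real IceProd classes differ.

```python
import subprocess


class Grid:
    """Base class defining the queuing interface. Concrete plugins
    override these methods with system-specific calls."""

    def WriteConfig(self, job, path):
        raise NotImplementedError

    def Submit(self, job):
        raise NotImplementedError

    def CheckJobStatus(self, job):
        raise NotImplementedError

    def Remove(self, job):
        raise NotImplementedError


class Pbs(Grid):
    """Minimal PBS-style plugin: each interface method wraps the
    corresponding command-line tool (qsub, qstat, qdel)."""

    def WriteConfig(self, job, path):
        # Emit a shell script with PBS directives in its header.
        with open(path, "w") as f:
            f.write("#!/bin/sh\n")
            f.write("#PBS -N iceprod_%s\n" % job["id"])
            f.write("#PBS -l walltime=%s\n" % job.get("walltime", "08:00:00"))
            f.write(job["cmd"] + "\n")
        return path

    def Submit(self, job):
        script = self.WriteConfig(job, "job_%s.sh" % job["id"])
        return subprocess.run(["qsub", script], capture_output=True, text=True)

    def CheckJobStatus(self, job):
        return subprocess.run(["qstat", str(job["queue_id"])],
                              capture_output=True, text=True)

    def Remove(self, job):
        return subprocess.run(["qdel", str(job["queue_id"])],
                              capture_output=True, text=True)
```

A plugin for a different batch system would subclass Grid the same way, swapping the shell commands for the appropriate library or API calls.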
In the case of PBS and SGE, most of these methods result in the appropriate system calls to qsub, qstat, qdel, etc. For other systems, these can be direct library calls through a Python API. IceProd contains a growing library of plugins, including classes for interfacing with batch systems such as HTCondor, PBS, and SGE as well as grid systems like Globus, gLite, EDG, CREAM, and ARC. In addition, one can easily implement user-defined plugins for any new type of system that is not included in this list.

3.3. IceProd Modules

The iceprod-modules package is a collection of configurable modules with a common interface. These represent the atomic tasks to be performed as part of the job. They are derived from a base class IPModule and provide a standard interface that allows for an arbitrary set of parameters to be configured in the XML document and passed from the IceProd framework. In turn, the module returns a set of statistics in the form of a string-to-float dictionary back to the framework so that it can be recorded in the database and displayed on the monitoring web interface. By default, the base class will report the module's CPU usage, but the user can define any set of values to be reported, such as the number of events that pass a given processing filter. IceProd also includes a library of predefined modules for performing common tasks such as file transfers through GridFTP, tarball manipulation, etc.

3.4. External IceProd Modules

Included in the library of predefined modules is a special module that has two parameters: class and URL. The first is a string that defines the name of an external IceProd module and the second specifies a URL for a (preferably version-controlled) repository where the external module code can be found. Any other parameters passed to this module are assumed to belong to the referred external module and will be ignored. This allows for the use of user-defined modules without the need to install them at each IceProd site.
External modules share the same interface as any other IceProd module. External modules are retrieved and cached by the server at the time of submission. These modules are then included as file dependencies for the jobs, thus preventing the need for jobs to directly access the file code repository. Additional precautions, such as enforcing the use of secure protocols for URLs, must be taken to avoid security risks.

3.5. IceProd Client

The iceprod-client package contains two applications for interacting with the server and submitting datasets. One is a PyGTK-based GUI (see Figure 6) and the other is a text-based application that can run as a command-line executable or as an interactive shell. Both of these applications allow the user to download, edit, and submit steering configuration files as well as control datasets running on the IceProd-controlled grid. The graphical interface includes drag-and-drop features for moving modules around and provides the user with a list of valid parameters for known modules. Information about parameters for external modules is not included since these are not known a priori. The interactive shell also allows the user to perform grid management tasks such as starting and stopping a remote server and adding and removing production sites participating in the processing of a dataset. The user can also perform job-specific actions such as suspension and resetting of jobs.

3.6. Database

At the time of this writing, the current implementation of IceProd works exclusively with a MySQL database, but all database calls are handled by a database module that abstracts queries from the framework and could be easily replaced by a different relational database. This section describes the relational structure of the IceProd database.

Each dataset is defined by a set of modules and parameters that operate on separate data (single process, multiple data). At the top level of the database structure is the dataset table.
The dataset ID is the unique identifier for each dataset, though it is possible to assign a mnemonic string alias. The tables in the IceProd database are logically divided into two distinct classes

Figure 6: The iceprod-client uses PyGTK and provides a graphical user interface to IceProd. It is both a graphical editor of XML steering files and an XML-RPC client for dataset submission.

that could in principle be entirely different databases. The first describes a steering file or dataset configuration (items 1–6 and 9 in the list below) and the second is a job-monitoring database (items 7 and 8). The most important tables are described below.

1. dataset: contains a unique identifier as well as attributes to describe and categorize the dataset, including a textual description.
2. steering-parameter: describes general global variables that can be referenced from any module.
3. meta-project: describes a software environment including libraries and executables.
4. tray: describes a grouping of modules that will execute given the same software environment or metaproject.
5. module: specifies an instance of an IceProd Module class.
6. cparameter: contains all the configured parameters associated with a module.
7. job: describes each job in the queue related to a dataset, including the state and host where the job is executed.
8. task: keeps track of the state of a task in a way similar to what is done in the jobs table. A task represents a subprocess for a job in a process workflow. More details on this will be provided in Section 4.
9. task-rel: describes the hierarchical relationship between tasks.

3.7. Monitoring

The status updates and statistics are reported by the JEP via XML-RPC to soapmon and stored in the database, and provide useful information for monitoring the progress of processing datasets and for detecting errors.
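The JEP-to-soapmon exchange can be mimicked with Python's standard xmlrpc machinery. This sketch stands in an in-memory dictionary for the database, and the endpoint name update_status is invented for illustration; it is not IceProd's actual RPC interface.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# In-memory stand-in for the monitoring database.
job_status = {}


def update_status(job_id, status, host):
    """RPC endpoint a JEP would call to report a state change."""
    job_status[job_id] = {"status": status, "host": host}
    return True


# Server side (a minimal soapmon-like daemon); port 0 picks a free port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(update_status)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Job side: a status update as a JEP would send during execution.
proxy = xmlrpc.client.ServerProxy("http://localhost:%d" % port)
proxy.update_status("job-001", "PROCESSING", "node42.example.org")
server.shutdown()
```

Running soapmon as a stand-alone daemon versus a CGI script changes only how this server loop is hosted, not the RPC interface the jobs see.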
The updates include status changes and information about the execution host as well as job statistics. This is a multi-threaded server that can run as a stand-alone daemon or as a CGI script within a more robust web server. The data collected from each job are made available for analysis, and patterns can be detected with the aid of visualization tools as described in the following section.

3.7.1. Web Interface

The current web interface for IceProd was designed to work independently of the IceProd framework but

Figure 7: A screen capture of the web interface that allows the monitoring of ongoing jobs and datasets. The monitoring web interface has a number of views with different levels of detail. The view shown displays the job progress for active jobs within a dataset. The web interface provides authenticated users with buttons to control datasets and individual jobs.

utilizes the same database. It is written in PHP and makes use of the CodeIgniter framework [10]. Each of the simulation and data-processing web-monitoring tools provides different views, which include, from top level downward:

• general view: displays all datasets filtered by status, type, grid, etc.
• grid view: shows all datasets running on a particular site.
• dataset view: displays all jobs and accompanying statistics for a given dataset, including every site that it is running on.
• job view: shows each individual job, including the status, job statistics, execution host, and possible errors.

There are some additional views that are applicable only to the processing of real IceCube detector data:

• calendar view: displays a calendar with a color coding that indicates the status of jobs associated with data taken on a particular date.
• day view: shows the status of jobs associated with a given calendar day of data taking.
• run view: displays the status of jobs associated with a particular detector run.

The web interface also provides the functionality to control jobs and datasets by authenticated users.
This is done by sending commands to the soaptray daemon using the XML-RPC protocol. Other features of the interface include graphs displaying completion rates, errors, and the number of jobs in various states. Figure 7 shows a screen capture of one of a number of views from the web interface.

3.7.2. Statistical Data

One aspect of IceProd that is not found in most grid middleware is the built-in collection of user-defined statistical data. Each IPModule instance is passed a string-to-float dictionary to which the JEP can add entries or increment a given value. IceProd collects these data in the central database and displays them on the monitoring page. Statistics are reported individually for each job and collectively for the whole dataset as a sum, average, and standard deviation. The typical types of information collected on IceCube jobs include CPU usage, number of events meeting predefined physics criteria, and number of calls to a particular module.

3.8. Security and Data Integrity

When dealing with network applications, one must always be concerned with security and data integrity in order to avoid compromising privacy and the validity of scientific results. Some effort has been made to minimize security risks in the design and implementation of IceProd. This section will summarize the most significant of these. Figure 3 shows the various types of network communication between the client, server, and worker node.

3.8.1. Authentication

Authentication in IceProd can be handled in two ways: IceProd can authenticate dataset submission against an LDAP server or, if one is not available, authentication is handled by means of direct database authentication.
LDAP authentication allows the IceProd administrator to restrict usage to individual users that are responsible for job submissions and are accountable for improper use, so direct database authentication should be disabled whenever LDAP is available. This setup also precludes the need to distribute database passwords and thus prevents users from being able to directly query the database via a MySQL client.

When dealing with databases, one also needs to be concerned about allowing direct access to the database and passing login credentials to jobs running on remote sites. For this reason, all monitoring calls are done via XML-RPC, and the only direct queries are performed by the server, which typically operates behind a firewall on a trusted system. The current web interface does make direct queries to the database; a dedicated read-only account is used for this purpose.

3.8.2. Encryption

Both soaptray and soapmon can be configured to use SSL certificates in order to encrypt all data communication between client and server. The encryption is done by the HTTPS server with either a self-signed certificate or, preferably, with a certificate signed by a trusted Certificate Authority (CA). This is recommended for client-server communication for soaptray but is generally not considered necessary for monitoring information sent to soapmon by the JEP, as this is not considered sensitive enough to justify the additional system CPU resources required for encryption.

3.8.3. Data Integrity

In order to guarantee data integrity, an MD5 checksum or digest is generated for each file that is transmitted. This information is stored in the database and is checked against the file after transfer. IceProd data transfers support several protocols, but the preference is to rely primarily on GridFTP, which makes use of GSI authentication [11, 12].

An additional security measure is the use of a temporary passkey that is assigned to each job at the time of submission.
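Per-job passkeys of this kind might be issued and checked as sketched below. The class and method names are hypothetical; the real mechanism lives inside the IceProd server.

```python
import hmac
import secrets


def issue_passkey():
    """Generate a temporary per-job passkey at submission time."""
    return secrets.token_hex(16)


class MonitorAuth:
    """Track the currently valid passkey for each job; resetting a job
    replaces its key, so a stale job holding the old key can no longer
    post monitoring updates."""

    def __init__(self):
        self._keys = {}

    def register(self, job_id):
        key = issue_passkey()
        self._keys[job_id] = key
        return key

    def reset(self, job_id):
        # The old key becomes invalid as a side effect of re-registering.
        return self.register(job_id)

    def is_valid(self, job_id, passkey):
        expected = self._keys.get(job_id)
        return expected is not None and hmac.compare_digest(expected, passkey)
```

hmac.compare_digest is used for the comparison so that key checks take constant time regardless of where the strings first differ.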
This passkey is used for authenticating communication between the job and the monitoring server and is only valid during the duration of the job. If the job is reset, this passkey will be changed before a new job is submitted. This prevents stale jobs that might be left running from making monitoring updates after the job has been reassigned.

4. Intrajob Parallelism

As described in Section 3.1.2, a single IceProd job consists of a number of trays and modules that execute different parts of the job, for example, a simulation chain. These trays and modules describe a workflow with a set of interdependencies, where the output from some modules and trays is used as input to others. Initial versions of IceProd ran jobs solely as monolithic scripts that executed these modules serially on a single machine. This approach was not very efficient because it did not take advantage of the workflow structure implicit in the job description.

To address this issue, IceProd includes a representation of a job as a directed, acyclic graph (DAG) of tasks. Jobs are recharacterized as groups of arbitrary tasks and modules that are defined by users in a job's XML steering file, and each task can depend on any number of other tasks in the job. This workflow is encoded in a DAG, where each vertex represents a single instance of a task to be executed on a computing node, and edges in the graph indicate dependencies between tasks (see Figures 8 and 9). DAG jobs on the cluster are executed by means of the HTCondor DAGMan, which is a workflow manager developed by the HTCondor group at the University of Wisconsin–Madison and included with the HTCondor batch system [13].

For IceCube simulation production, IceProd has utilized the DAG support in two specific cases: improving task-level parallelism and running jobs that utilize graphics processing units (GPUs) for portions of their processing.

4.1.
Task-level Parallelism

In addition to problems caused by coarse-grained requirements specifications, monolithic jobs also underutilize cluster resources. As shown in Figure 8, portions

Figure 9: A more complicated DAG in IceProd with multiple inputs and multiple outputs that are eventually merged into a single output. The vertices in the second level run on computing nodes equipped with GPUs.

Figure 8: A simple DAG in IceProd (vertices: background, signal, GPU, detector A, detector B, garbage collection). This DAG corresponds to a typical IceCube simulation. The two root vertices require standard computing hardware and produce different types of signal. Their output is then combined and processed on GPUs. The output is then used as input for two different detector simulations.

of the workflow within a job are independent; however, if a job is monolithic, these portions will be run serially instead of in parallel. Therefore, although the entire simulation can be parallelized by submitting multiple jobs to different machines, this opportunity for additional parallelism is not exploited by monolithic jobs.

Support for breaking a job into discrete tasks is now included in the HTCondor IceProd plugin as described above, and similar features have been developed for the PBS and Sun Grid Engine plugins. This enables faster execution of individual jobs by utilizing more computing nodes; however, one limitation of this implementation is that DAG jobs are restricted to a specific type of cluster, and DAG jobs cannot distribute tasks across multiple sites.

4.2. DAGs Based on System Requirements

Individual parts of a job may have different system hardware and software requirements.
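The execution order a DAG manager such as DAGMan produces is, at its core, a topological sort of the task graph. A minimal sketch follows, using task names patterned on Figure 8; this is an illustration of the ordering constraint, not IceProd or DAGMan code.

```python
from collections import deque


def topological_order(tasks, deps):
    """Order tasks so that every task runs after all of its dependencies.
    `deps` maps a task name to the list of tasks it depends on."""
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    children = {t: [] for t in tasks}
    for task, parents in deps.items():
        for p in parents:
            children[p].append(task)

    # Start with the root vertices (tasks with no unmet dependencies).
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)

    if len(order) != len(tasks):
        raise ValueError("cycle detected: not a DAG")
    return order
```

In a real scheduler, every task in the ready queue may run concurrently; the serial list here only shows the dependency-respecting order.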
Breaking these up into tasks that run on separate nodes allows for better utilization of resources. The IceCube detector simulation chain is a good example of this scenario in which tasks are distributed across computing nodes with different hardware resources.

Light propagation in the instrumented volume of ice at the South Pole is difficult to model, but recent developments in IceCube's simulation include a much faster approach for simulating direct propagation of photons in the optically complex Antarctic ice [14, 15] by using general-purpose GPUs. This new simulation module is much faster than a CPU-based implementation and more accurate than using parametrization tables [16], but the rest of the simulation requires standard CPUs. When executing an IceProd job monolithically, only one set of cluster requirements can be applied when it is submitted to the cluster. Accordingly, if any part of the job requires use of a GPU, the entire monolithic job must be scheduled on a cluster machine with the appropriate hardware.

As of this writing, IceCube has the potential to access ∼20,000 CPU cores distributed throughout the world, but only a small number of these nodes are equipped with GPU cards. Because the simulation is primarily CPU bound, the pool of GPU-equipped nodes is not sufficient to run all simulation jobs in an acceptable amount of time. Additionally, this would be an inefficient use of resources, since executing the CPU-oriented portions of monolithic jobs would leave the GPU idle for periods of time. In order to solve this problem, the modular design of the IceCube simulation is used to divide the CPU- and GPU-oriented portions of jobs into separate tasks in a DAG. Since each task in a DAG is submitted separately to the cluster, their requirements can be specified independently, and CPU-oriented tasks can be executed on general-purpose grid nodes while photon propagation tasks can be executed on GPU-enabled machines, as depicted in Figure 9.

5.
Applications

IceProd's highly configurable nature lets it serve the needs of many different applications, both inside and beyond the IceCube Collaboration.

5.1. IceCube Simulation Production

The IceCube simulations are based on a modular software framework called IceTray in which modules are executed in sequential order. Data is passed between modules in the form of a "frame" object. IceCube simulation modules represent different steps in the generation and propagation of particles, in-ice light propagation, signal detection, and simulation of the electronics and data acquisition hardware. These modules are "chained" together in a single IceTray instance but can also be broken into separate instances configured to write intermediate data files. This allows for breaking up the simulation chain into multiple IceProd tasks in order to optimize the use of resources as described in Section 4.

For IceCube, Monte Carlo simulations are the most computationally intensive task, which is dominated by the production of background cosmic-ray showers (see Table 2). A typical Monte Carlo simulation lasts on the order of 8 hours but corresponds to only four seconds of detector livetime. In order to generate sufficient statistics, IceCube simulation production needs to make use of available computing resources which are distributed across the world. Table 3 lists all of the sites that have participated in Monte Carlo production.

5.2. Off-line Processing of the IceCube Detector Data

IceProd was designed primarily for managing the production of Monte Carlo simulations for IceCube,

Table 3: Sites participating in IceCube Monte Carlo production by country.

Country    Queue Type    No. of Sites
Sweden     ARC           2
Canada     PBS           2
Germany    SGE           1
           PBS           3
           CREAM         4
Belgium    PBS           2
USA        HTCondor      4
           PBS           3
           SGE           4
Japan      HTCondor      1

but it has also been successfully adopted for managing the processing and reconstruction of experimental data collected by the detector.
This data collected by IceCube and previously described in Section 1.1 must undergo multiple steps of processing, including calibration, multiple-event track reconstructions, and sorting into various analysis channels based on predefined criteria. IceProd has proved to be an ideal framework for processing this large volume of data.

For off-line data processing, the existing features in IceProd are used for job submission, monitoring, data transfer, verification, and error handling. However, in contrast to a Monte Carlo production dataset where the number of jobs is defined a priori, a configuration for off-line processing of experimental data initiates with an empty dataset of zero jobs. A separate script is then run over the data in order to map a job to a particular file (or group of files) and to generate MD5 checksums for each input file.

Additional minor modifications were needed in order to support the desired features in off-line processing. In addition to the tables described in Section 3.6, a run table was created to keep records of runs and dates associated with each file and unique to the data storage structure. All data collected during a season (or a one-year cycle) are processed as a single IceProd dataset. This is because, for each IceCube season, all the data collected is processed with the same set of scripts, thus following the SPMD model. A job for such a dataset consists of all the tasks needed to complete the processing of a single data file.

Off-line processing takes advantage of the IceProd built-in system for collecting statistics in order to provide information through the web interface about the number of events that pass different quality selection criteria from completed jobs. Troubleshooting and error correction of jobs during processing is also facilitated by IceProd's real-time feedback system accessible through the web interface.
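The dataset-initialization step described above (mapping input files to jobs and recording per-file checksums) might look like the following sketch; the shape of the mapping is an assumption for illustration, not IceProd's actual schema.

```python
import hashlib


def build_job_map(input_files, files_per_job=1):
    """Map each input file (or group of files) to a job ID and record
    an MD5 digest for later validation, as the dataset-initialization
    script would before populating an initially empty dataset."""
    jobs = {}
    for i in range(0, len(input_files), files_per_job):
        group = input_files[i:i + files_per_job]
        job_id = i // files_per_job
        jobs[job_id] = [
            {"path": path,
             "md5": hashlib.md5(open(path, "rb").read()).hexdigest()}
            for path in group
        ]
    return jobs
```

Each entry would then seed one job row in the dataset, with the stored digest re-checked after every transfer.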
The data integrity checks discussed in Section 3.8.3 also provide a convenient way to validate data written to storage and to check for errors during the file transfer task.

5.3. Off-line Event Reconstruction for the HAWC Gamma-Ray Observatory

IceProd's scope is not limited to IceCube. Its design is general enough to be used for other applications. The High-Altitude Water Cherenkov (HAWC) Observatory [17] has recently begun using IceProd for its own off-line event reconstruction and data transfer [18]. HAWC has two main computing centers, one located at the University of Maryland and one at UNAM in Mexico City. Data is collected from the detector in Mexico and then replicated to UMD. The event reconstruction for HAWC is similar in nature to IceCube's data processing. Unlike IceCube's Monte Carlo production, it is I/O bound and better suited for a local cluster rather than a distributed grid environment. The HAWC Collaboration has made important contributions to the development of IceProd and maintained active collaboration with the development team.

5.4. Deploying an IceProd Site

Deployment of an IceProd instance is relatively easy. Installation of the software packages is handled through Python's built-in Module Distribution Utilities package. If the intent is to create a stand-alone instance or to start a new grid, the software distribution also includes scripts that define the MySQL tables required for IceProd.

After the software is installed, the server needs to be configured through an INI-style file. This configuration file contains three main sections: general queueing options, site-specific system parameters, and job environment. The queueing options are used by the server plugin to help configure submission (e.g., selecting a queue or passing custom directives to the queueing system). System parameters can be used to define the location of a download directory on a shared filesystem or a scratch directory to write temporary files.
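A hypothetical configuration in that spirit, read with Python's standard configparser, is shown below. The section and option names here are invented for illustration; the actual IceProd keys differ.

```python
import configparser

# Invented option names mirroring the three sections described above:
# queueing options, site-specific system parameters, and job environment.
EXAMPLE_CONFIG = """
[queue]
batchsys = pbs
queue_name = long
max_queued_jobs = 500

[system]
download_dir = /data/shared/downloads
scratch_dir = /scratch/iceprod

[environment]
PYTHONPATH = /opt/iceprod/lib
"""

config = configparser.ConfigParser()
config.read_string(EXAMPLE_CONFIG)

# Typed accessors parse values straight out of the INI text.
max_jobs = config.getint("queue", "max_queued_jobs")
scratch = config.get("system", "scratch_dir")
```

In a deployed instance the same parser would read the file from disk with config.read(path) instead of read_string.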
The job environment can be modified by the server configuration to modify paths appropriately or set other environment variables.

If the type of grid/batch system for the new site is already supported, the IceProd instance can be configured to use an existing server plugin, with the appropriate local queuing options. Otherwise, the server plugin must be written, as described in Section 3.2.1.

5.5. Extending Functionality

The ease of adaptation of the framework for the applications discussed in Sections 5.2 and 5.3 illustrates how IceProd can be ported to other projects with minimal customization, which is facilitated by its Python code base.

There are a couple of simple ways in which functionality can be extended: One is through the implementation of additional IceProd Modules as described in Section 3.3. Another is by adding XML-RPC methods to the soapmon module in order to provide a way for jobs to communicate with the server. There are, of course, more intrusive ways of extending functionality, but those require a greater familiarity with the framework.

6. Performance

Since its initial deployment in 2006, the IceProd framework has been instrumental in generating Monte Carlo simulations for the IceCube Collaboration. The IceCube Monte Carlo production has utilized more than three thousand CPU-core years distributed between collaborating institutions at an increasing rate and produced nearly two petabytes of data distributed between the two principal storage sites in the U.S. and Germany. Figure 10 shows the relative share of CPU resources contributed towards simulation production. The IceCube IceProd grid has grown from 8 sites to 25 over the years and incorporated new computing resources. Incorporating new sites is trivial since each set of daemons acts as a volunteer that operates opportunistically on a set of jobs/tasks independent of other sites. There is no central manager that needs to scale with the number of computing sites.
The central database is the one component that does need to scale up and can also be a single point of failure. Plans to address this weakness will be discussed in Section 7.

The IceProd framework has also been successfully used for the off-line processing of data collected from the IceCube detector over a 4-year period beginning in the Spring of 2010. This corresponds to 500 terabytes of data and over 3×10^11 event reconstructions. Table 4 summarizes the resources utilized by IceProd for simulation production and off-line processing.

7. Future Work

Development of IceProd is an ongoing effort. One important area of current development is the implementation of workflow management capabilities like HTCondor's DAGMan but in a way that is independent of any batch system, in order to optimize the use of specialized hardware and network topologies by running different job subtasks on different nodes.

Figure 10: Share of CPU resources contributed by members of the IceCube Collaboration towards simulation production. The relative contributions are integrated over the lifetime of the experiment. The size of the sector reflects both the size of the pool and how long a site has participated in simulation production.

Table 4: IceCube simulation production and off-line processing resource utilization. The production rate has steadily increased since initial deployment. The numbers reflect utilization of owned computing resources and opportunistic ones.

                    Simulation    Off-line
Computing centers   25            1
CPU-core time       ~3000 yr      ~160 yr
CPU-cores           ~45000        2000
No. of datasets     2421          5
No. of jobs         1.6×10^7      1.5×10^6
No. of tasks        2.3×10^7      1.5×10^6
Data volume         1.2 PB        0.5 PB

Work is also ongoing on a second generation of IceProd designed to be more robust and flexible. The database will be partially distributed to prevent it from being a single point of failure and to better handle higher loads. Caching of files will be more prevalent and easier to implement to optimize bandwidth usage.
The JEP will be made more versatile by executing ordinary scripts in addition to modules. Tasks will become a fundamental part of the design rather than an added feature and will therefore be fully supported throughout the framework. Improvements in the new design are based on lessons learned from the first generation IceProd and provide a better foundation on which to continue development.

8. Conclusions

IceProd has proven to be very successful for managing IceCube simulation production and data processing across a heterogeneous collection of individual grid sites and batch computing clusters.

With few software dependencies, IceProd can be deployed and administered with little effort. It makes use of existing trusted grid technology and network protocols, which help to minimize security and data integrity concerns that are common to any software that depends heavily on communication through the Internet.

Two important features in the design of this framework are the iceprod-modules and iceprod-server plugins, which allow users to easily extend the functionality of the code. The former provide an interface between the IceProd framework and user scripts and applications. The latter provide an interface that abstracts the details of job submission and management in different grid environments from the framework. IceProd contains a growing library of plugins that support most major grid and batch system protocols.

Though it was originally developed for managing IceCube simulation production, IceProd is general enough for many types of grid applications and there are plans to make it generally available to the scientific community in the near future.

Acknowledgements

We acknowledge the support from the following agencies: U.S. National Science Foundation-Office of Polar Programs, U.S.
National Science Foundation-Physics Division, University of Wisconsin Alumni Research Foundation, the Grid Laboratory Of Wisconsin (GLOW) grid infrastructure at the University of Wisconsin–Madison, the Open Science Grid (OSG) grid infrastructure; U.S. Department of Energy, and National Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Natural Sciences and Engineering Research Council of Canada, WestGrid and Compute/Calcul Canada; Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg Foundation, Sweden; German Ministry for Education and Research (BMBF), Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Research Department of Plasmas with Complex Interactions (Bochum), Germany; Fund for Scientific Research (FNRS-FWO), FWO Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office (Belspo); University of Oxford, United Kingdom; Marsden Fund, New Zealand; Australian Research Council; Japan Society for Promotion of Science (JSPS); the Swiss National Science Foundation (SNSF), Switzerland; National Research Foundation of Korea (NRF); Danish National Research Foundation, Denmark (DNRF). The authors would like to also thank T. Weisgarber from the HAWC collaboration for his contributions to IceProd development.

Appendix

The following is a comprehensive list of sites participating in IceCube Monte Carlo production: Uppsala University (SweGrid), Stockholm University (SweGrid), University of Alberta (WestGrid), TU Dortmund (PHiDO, LIDO), Ruhr-Uni Bochum (LiDO), University of Mainz, Université Libre de Bruxelles/Vrije Universiteit Brussel, Universiteit Gent (Trillian), Southern University (LONI), Pennsylvania State University (LIONX), University of
Wisconsin (CHTC, GLOW, NPX4), Open Science Grid, RWTH Aachen University (EGI), Universität Dortmund (EGI), Deutsches Elektronen-Synchrotron (EGI, DESY), Universität Wuppertal (EGI), University of Delaware, Lawrence Berkeley National Laboratory (PDSF, Dirac, Carver), University of Maryland.

References

[1] F. Halzen, IceCube: A Kilometer-Scale Neutrino Observatory at the South Pole, IAU XXV General Assembly, ASP Conference Series 13 (2003) 13–16.
[2] M. G. Aartsen, et al., Search for Galactic PeV gamma rays with the IceCube Neutrino Observatory, Phys. Rev. D 87 (2013) 62002.
[3] F. Halzen, S. R. Klein, IceCube: An Instrument for Neutrino Astronomy, Invited Review Article: Rev. Sci. Inst. 81 (2010) 081101.
[4] D. Protopopescu, K. Schwarz, PANDA Grid – a Tool for Physics, J. Phys.: Conf. Ser. 331 (2011) 072028.
[5] P. Buncic, A. Peters, P. Saiz, The AliEn system, status and perspectives, eConf C0303241 (2003) MOAT004. arXiv:cs/0306067.
[6] R. Brun, F. Rademakers, ROOT - An Object Oriented Data Analysis Framework, Nuclear Inst. and Meth. in Phys. Res., A 389 (1997) 81–86.
[7] A. Pavlo, P. Couvares, R. Gietzel, A. Karp, I. D. Alderman, M. Livny, The NMI build and test laboratory: Continuous integration framework for distributed computing software, Proc. USENIX/SAGE Large Installation System Administration Conference (2006) 263–273.
[8] D. Winer, XML/RPC Specification, http://www.xmlrpc.com/spec (1999).
[9] T. DeYoung, IceTray: A software framework for IceCube, Int. Conf. on Comp. in High-Energy Phys. and Nucl. Phys. (CHEP2004) (2005) 463–466.
[10] R. Ellis, the ExpressionEngine Development Team, CodeIgniter User Guide, http://codeigniter.com (online manual).
[11] W. Allcock, et al., GridFTP: Protocol extensions to FTP for the Grid, http://www.ggf.org/documents/GWD-R/GFD-R.020.pdf (April 2003).
[12] The Globus Security Team, Globus Toolkit Version 4 Grid Security Infrastructure: A Standards Perspective (2005).
[13] P. Couvares, T.
Kosar, A. Roy, J. Weber, K. Wenger, Workflow Management in Condor, In Workflows for e-Science part III (2007) 357–375.
[14] M. G. Aartsen, et al., Measurement of South Pole ice transparency with the IceCube LED calibration system, Nuclear Inst. and Meth. in Phys. Res., A 711 (2013) 73–89.
[15] D. Chirkin, Study of South Pole ice transparency with IceCube flashers, Proc. International Cosmic Ray Conference 4 (2011) 161.
[16] D. Chirkin, Photon tracking with GPUs in IceCube, Nuclear Inst. and Meth. in Phys. Res., A 725 (2013) 141–143.
[17] A. U. Abeysekara, et al., On the sensitivity of the HAWC observatory to gamma-ray bursts, HAWC Collaboration, Astropart. Phys. 35 (2012) 641–650.
[18] T. Weisgarber, Production Reconstruction, HAWC Collaboration Meeting, May 2013 (Unpublished results).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Rw_nlgAqf9",
"year": null,
"venue": "J. Parallel Distributed Comput. 2015",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Rw_nlgAqf9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The IceProd framework: Distributed data processing for the IceCube neutrino observatory",
"authors": [
"Mark G. Aartsen",
"Rasha U. Abbasi",
"Markus Ackermann",
"Jenni Adams",
"Juan Antonio Aguilar Sánchez",
"Markus Ahlers",
"David Altmann",
"Carlos A. Argüelles Delgado",
"Jan Auffenberg",
"Xinhua Bai",
"Michael F. Baker",
"Steven W. Barwick",
"Volker Baum",
"Ryan Bay",
"James J. Beatty",
"Julia K. Becker Tjus",
"Karl-Heinz Becker",
"Segev BenZvi",
"Patrick Berghaus",
"David Berley",
"Elisa Bernardini",
"Anna Bernhard",
"David Z. Besson",
"G. Binder",
"Daniel Bindig",
"Martin Bissok",
"Erik Blaufuss",
"Jan Blumenthal",
"David J. Boersma",
"Christian Bohm",
"Debanjan Bose",
"Sebastian Böser",
"Olga Botner",
"Lionel Brayeur",
"Hans-Peter Bretz",
"Anthony M. Brown",
"Ronald Bruijn",
"James Casey",
"Martin Casier",
"Dmitry Chirkin",
"Asen Christov",
"Brian John Christy",
"Ken Clark",
"Lew Classen",
"Fabian Clevermann",
"Stefan Coenders",
"Shirit Cohen",
"Doug F. Cowen",
"Angel H. Cruz Silva",
"Matthias Danninger",
"Jacob Daughhetee",
"James C. Davis",
"Melanie Day",
"Catherine De Clercq",
"Sam De Ridder",
"Paolo Desiati",
"Krijn D. de Vries",
"Meike de With",
"Tyce DeYoung",
"Juan Carlos Díaz-Vélez",
"Matthew Dunkman",
"Ryan Eagan",
"Benjamin Eberhardt",
"Björn Eichmann",
"Jonathan Eisch",
"Sebastian Euler",
"Paul A. Evenson",
"Oladipo O. Fadiran",
"Ali R. Fazely",
"Anatoli Fedynitch",
"Jacob Feintzeig",
"Tom Feusels",
"Kirill Filimonov",
"Chad Finley",
"Tobias Fischer-Wasels",
"Samuel Flis",
"Anna Franckowiak",
"Katharina Frantzen",
"Tomasz Fuchs",
"Thomas K. Gaisser",
"Joseph S. Gallagher",
"Lisa Marie Gerhardt",
"Laura E. Gladstone",
"Thorsten Glüsenkamp",
"Azriel Goldschmidt",
"Geraldina Golup",
"Javier G. González",
"Jordan A. Goodman",
"Dariusz Góra",
"Dylan T. Grandmont",
"Darren Grant",
"Pavel Gretskov",
"John C. Groh",
"Andreas Groß",
"Chang Hyon Ha",
"Abd Al Karim Haj Ismail",
"Patrick Hallen",
"Allan Hallgren",
"Francis Halzen",
"Kael D. Hanson",
"Dustin Hebecker",
"David Heereman",
"Dirk Heinen",
"Klaus Helbing",
"Robert Eugene Hellauer III",
"Stephanie Virginia Hickford",
"Gary C. Hill",
"Kara D. Hoffman",
"Ruth Hoffmann",
"Andreas Homeier",
"Kotoyo Hoshina",
"Feifei Huang",
"Warren Huelsnitz",
"Per Olof Hulth",
"Klas Hultqvist",
"Shahid Hussain",
"Aya Ishihara",
"Emanuel Jacobi",
"John E. Jacobsen",
"Kai Jagielski",
"George S. Japaridze",
"Kyle Jero",
"Ola Jlelati",
"Basho Kaminsky",
"Alexander Kappes",
"Timo Karg",
"Albrecht Karle",
"Matthew Kauer",
"John Lawrence Kelley",
"Joanna Kiryluk",
"J. Kläs",
"Spencer R. Klein",
"Jan-Hendrik Köhne",
"Georges Kohnen",
"Hermann Kolanoski",
"Lutz Köpke",
"Claudio Kopper",
"Sandro Kopper",
"D. Jason Koskinen",
"Marek Kowalski",
"Mark Krasberg",
"Anna Kriesten",
"Kai Michael Krings",
"Gösta Kroll",
"Jan Kunnen",
"Naoko Kurahashi",
"Takao Kuwabara",
"Mathieu L. M. Labare",
"Hagar Landsman",
"Michael James Larson",
"Mariola Lesiak-Bzdak",
"Martin Leuermann",
"Julia Leute",
"Jan Lünemann",
"Oscar A. Macías-Ramírez",
"James Madsen",
"Giuliano Maggi",
"Reina Maruyama",
"Keiichi Mase",
"Howard S. Matis",
"Frank McNally",
"Kevin James Meagher",
"Martin Merck",
"Gonzalo Merino Arévalo",
"Thomas Meures",
"Sandra Miarecki",
"Eike Middell",
"Natalie Milke",
"John Lester Miller",
"Lars Mohrmann",
"Teresa Montaruli",
"Robert M. Morse",
"Rolf Nahnhauer",
"Uwe Naumann",
"Hans Niederhausen",
"Sarah C. Nowicki",
"David R. Nygren",
"Anna Obertacke",
"Sirin Odrowski",
"Alex Olivas",
"Ahmad Omairat",
"Aongus Starbuck Ó Murchadha",
"Larissa Paul",
"Joshua A. Pepper",
"Carlos Pérez de los Heros",
"Carl Pfendner",
"Damian Pieloth",
"Elisa Pinat",
"Jonas Posselt",
"P. Buford Price",
"Gerald T. Przybylski",
"Melissa Quinnan",
"Leif Rädel",
"Ian Rae",
"Mohamed Rameez",
"Katherine Rawlins",
"Peter Christian Redl",
"René Reimann",
"Elisa Resconi",
"Wolfgang Rhode",
"Mathieu Ribordy",
"Michael Richman",
"Benedikt Riedel",
"J. P. Rodrigues",
"Carsten Rott",
"Tim Ruhe",
"Bakhtiyar Ruzybayev",
"Dirk Ryckbosch",
"Sabine M. Saba",
"Heinz-Georg Sander",
"Juan Marcos Santander",
"Subir Sarkar",
"Kai Schatto",
"Florian Scheriau",
"Torsten Schmidt",
"Martin Schmitz",
"Sebastian Schoenen",
"Sebastian Schöneberg",
"Arne Schönwald",
"Anne Schukraft",
"Lukas Schulte",
"David Schultz",
"Olaf Schulz",
"David Seckel",
"Yolanda Sestayo de la Cerra",
"Surujhdeo Seunarine",
"Rezo Shanidze",
"Chris Sheremata",
"Miles W. E. Smith",
"Dennis Soldin",
"Glenn M. Spiczak",
"Christian Spiering",
"Michael Stamatikos",
"Todor Stanev",
"Nick A. Stanisha",
"Alexander Stasik",
"Thorsten Stezelberger",
"Robert G. Stokstad",
"Achim Stößl",
"Erik A. Strahler",
"Rickard Ström",
"Nora Linn Strotjohann",
"Gregory W. Sullivan",
"Henric Taavola",
"Ignacio Taboada",
"Alessio Tamburro",
"Andreas Tepe",
"Samvel Ter-Antonyan",
"Gordana Tesic",
"Serap Tilav",
"Patrick A. Toale",
"Moriah Natasha Tobin",
"Simona Toscano",
"Maria Tselengidou",
"Elisabeth Unger",
"Marcel Usner",
"Sofia Vallecorsa",
"Nick van Eijndhoven",
"Arne Van Overloop",
"Jakob van Santen",
"Markus Vehring",
"Markus Voge",
"Matthias Vraeghe",
"Christian Walck",
"Tilo Waldenmaier",
"Marius Wallraff",
"Christopher N. Weaver",
"Mark T. Wellons",
"Christopher H. Wendt",
"Stefan Westerhoff",
"Nathan Whitehorn",
"Klaus Wiebe",
"Christopher Wiebusch",
"Dawn R. Williams",
"Henrike Wissing",
"Martin Wolf",
"Terri R. Wood",
"Kurt Woschnagg",
"Donglian Xu",
"Xianwu Xu",
"Juan Pablo Yáñez Garza",
"Gaurang B. Yodh",
"Shigeru Yoshida",
"Pavel Zarzhitsky",
"Jan Ziemann",
"Simon Zierke",
"Marcel Zoll"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "PVIxO2JjdAm",
"year": null,
"venue": "ECCV (22) 2022",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=PVIxO2JjdAm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "E-Graph: Minimal Solution for Rigid Rotation with Extensibility Graphs",
"authors": [
"Yanyan Li",
"Federico Tombari"
],
"abstract": "Minimal solutions for relative rotation and translation estimation tasks have been explored in different scenarios, typically relying on the so-called co-visibility graphs. However, how to build direct rotation relationships between two frames without overlap is still an open topic, which, if solved, could greatly improve the accuracy of visual odometry. In this paper, a new minimal solution is proposed to solve relative rotation estimation between two images without overlapping areas by exploiting a new graph structure, which we call Extensibility Graph (E-Graph). Differently from a co-visibility graph, high-level landmarks, including vanishing directions and plane normals, are stored in our E-Graph, which are geometrically extensible. Based on E-Graph, the rotation estimation problem becomes simpler and more elegant, as it can deal with pure rotational motion and requires fewer assumptions, e.g. Manhattan/Atlanta World, planar/vertical motion. Finally, we embed our rotation estimation strategy into a complete camera tracking and mapping system which obtains 6-DoF camera poses and a dense 3D mesh model. Extensive experiments on public benchmarks demonstrate that the proposed method achieves state-of-the-art tracking performance.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Lrq0CTYbRZ8",
"year": null,
"venue": "E-Commerce Agents 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=Lrq0CTYbRZ8",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Economics of Dynamic Pricing in a Reputation Brokered Agent Mediated Marketplace",
"authors": [
"Giorgos Zacharia",
"Theodoros Evgeniou",
"Alexandros Moukas",
"Petros Boufounos",
"Pattie Maes"
],
"abstract": "We present a framework to study the microeconomic effects in a reputation brokered Agent mediated Knowledge Marketplace, when we introduce dynamic pricing algorithms. We study the market with computer simulations of multiagent interactions. In this marketplace, the seller reputations are updated in a collaborative fashion based on the performance of the user in the delegated tasks. To the best of our knowledge, this is the first agent mediated marketplace where the agents use dynamic pricing based on “dynamically” updated reputations. The framework can be used to investigate the different equilibria reached, based on the level of intelligence of the selling agents, the level of price-importance elasticity of the buying agents, and the level of unemployment in the marketplace. Preliminary experiments addressing these issues are presented.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "EIfOnsblqeC",
"year": null,
"venue": "E-Commerce Agents 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=EIfOnsblqeC",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Generalized Platform for the Specification, Valuation, and Brokering of Heterogeneous Resources in Electronic Markets",
"authors": [
"Gaurav Tewari",
"Pattie Maes"
],
"abstract": "This paper describes MARI (Multi-Attribute Resource Intermediary), a project which proposes to improve online marketplaces, specifically those that involve the buying and selling of non-tangible goods and services. MARI is an intermediary architecture intended as a generalized platform for the specification and brokering of heterogeneous goods and services. MARI makes it possible for both buyers and sellers alike to more holistically and comprehensively specify relative preferences for the transaction partner, as well as for the attributes of the product in question, making price just one of a multitude of possible factors influencing the decision to trade. Ultimately, we expect that the ability to make such specifications will result in a more efficient, richer, and integrative transaction experience.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "54p-TmWDit0",
"year": null,
"venue": "E-Commerce Agents 2001",
"pdf_link": null,
"forum_link": "https://openreview.net/forum?id=54p-TmWDit0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Economics of Dynamic Pricing in a Reputation Brokered Agent Mediated Marketplace",
"authors": [
"Giorgos Zacharia",
"Theodoros Evgeniou",
"Alexandros Moukas",
"Petros Boufounos",
"Pattie Maes"
],
"abstract": "We present a framework to study the microeconomic effects in a reputation brokered Agent mediated Knowledge Marketplace, when we introduce dynamic pricing algorithms. We study the market with computer simulations of multiagent interactions. In this marketplace, the seller reputations are updated in a collaborative fashion based on the performance of the user in the delegated tasks. To the best of our knowledge, this is the first agent mediated marketplace where the agents use dynamic pricing based on “dynamically” updated reputations. The framework can be used to investigate the different equilibria reached, based on the level of intelligence of the selling agents, the level of price-importance elasticity of the buying agents, and the level of unemployment in the marketplace. Preliminary experiments addressing these issues are presented.",
"keywords": [],
"raw_extracted_content": null,
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |