metadata: dict
paper: dict
review: dict
citation_count: int64 [0, 0]
normalized_citation_count: int64 [0, 0]
cited_papers: listlengths [0, 0]
citing_papers: listlengths [0, 0]
{ "id": "It0615Ur3h", "year": null, "venue": "EANN Workshops 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=It0615Ur3h", "arxiv_id": null, "doi": null }
{ "title": "An overview of context types within multimedia and social computing", "authors": [ "Phivos Mylonas" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NffPcLQsCXJ", "year": null, "venue": "EANN Workshops 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=NffPcLQsCXJ", "arxiv_id": null, "doi": null }
{ "title": "Smart home context awareness based on Smart and Innovative Cities", "authors": [ "Aggeliki Vlachostergiou", "Georgios Stratogiannis", "George Caridakis", "Georgios Siolas", "Phivos Mylonas" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Ui-SOAfE2h", "year": null, "venue": "EANN (2) 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Ui-SOAfE2h", "arxiv_id": null, "doi": null }
{ "title": "A Novel Hierarchical Approach to Ranking-Based Collaborative Filtering", "authors": [ "Athanasios N. Nikolakopoulos", "Marianna A. Kouneli", "John D. Garofalakis" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rpMq6oQjUcs", "year": null, "venue": "EANN Workshops 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=rpMq6oQjUcs", "arxiv_id": null, "doi": null }
{ "title": "Detecting Irony on Greek Political Tweets: A Text Mining Approach", "authors": [ "Basilis Charalampakis", "Dimitris Spathis", "Elias Kouslis", "Katia Kermanidis" ], "abstract": "The present work describes the classification schema for irony detection in Greek political tweets. The proposed approach relies on limited labeled training data, and its performance on a larger unlabeled dataset is evaluated qualitatively (implicitly) via a correlation study between the irony that a party receives on Twitter, its respective actual election results during the Greek parliamentary elections of May 2012, and the difference between these results and the ones of the preceding elections of 2009. The machine learning results on the labeled dataset were highly encouraging and uncovered a trend whereby the volume of ironic tweets can predict the fluctuation from previous elections.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NtRbCoARegu", "year": null, "venue": "EANN (1) 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=NtRbCoARegu", "arxiv_id": null, "doi": null }
{ "title": "Boosting Simplified Fuzzy Neural Networks", "authors": [ "Alexey Natekin", "Alois C. Knoll" ], "abstract": "Fuzzy neural networks are a powerful machine learning technique that can be used in a large number of applications. Proper learning of fuzzy neural networks requires a lot of computational effort, and the fuzzy-rule designs of these networks suffer from the curse of dimensionality. To alleviate these problems, a simplified fuzzy neural network is presented. The proposed simplified network model can be efficiently initialized with considerably high predictive power. We propose an ensembling approach that uses the new simplified neural network models as a general-purpose fuzzy base-learner. The new base-learner properties are analyzed, and the practical results of the new algorithm are presented on a robotic hand controller application.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xJf-c40Kb7", "year": null, "venue": "EANN (1) 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=xJf-c40Kb7", "arxiv_id": null, "doi": null }
{ "title": "Impact of Sampling on Neural Network Classification Performance in the Context of Repeat Movie Viewing", "authors": [ "Elena Fitkov-Norris", "Sakinat Oluwabukonla Folorunso" ], "abstract": "This paper assesses the impact of different sampling approaches on neural network classification performance in the context of repeat movie going. The results showed that synthetic oversampling of the minority class, either on its own or combined with under-sampling and removal of noisy examples from the majority class, offered the best overall performance. The identification of the best sampling approach for this data set is not trivial, since the ranking of the alternatives is highly dependent on the metrics used: the accuracy ranks of the approaches did not agree across the different accuracy measures. In addition, the findings suggest that including examples generated as part of the oversampling procedure in the holdout sample leads to a significant overestimation of the accuracy of the neural network. Further research is necessary to understand the relationship between the degree of synthetic over-sampling and the efficacy of the holdout sample as a neural network accuracy estimator.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "X-kBFqi8Osc", "year": null, "venue": "EANN Workshops 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=X-kBFqi8Osc", "arxiv_id": null, "doi": null }
{ "title": "Comparison of three classifiers for breast cancer outcome prediction", "authors": [ "Noa Eyal", "Mark Last", "Eitan Rubin" ], "abstract": "Predicting the outcome of cancer is a challenging task; researchers have an interest in trying to predict the relapse-free survival of breast cancer patients based on gene expression data. Data mining methods offer more advanced approaches for dealing with survival data. The main objective in cancer treatment is to improve overall survival or, at the very least, the time to relapse (\"relapse-free survival\"). In this work, we compare the performance of three popular interpretable classifiers (decision tree, probabilistic neural networks and Naïve Bayes) for the task of classifying breast cancer patients into recurrence risk groups (low or high risk of recurrence within 5 or 10 years). For the 5-year recurrence risk prediction, the highest prediction accuracy was reached by the probabilistic neural networks classifier (Acc = 76.88% ± 1.09%, AUC=77.41%). For the 10-year recurrence risk prediction, the decision tree classifier and the probabilistic neural networks presented similar prediction accuracies (70.40% ± 1.36% and 70.50% ± 1.13%, respectively). However, while the PNN classifier achieved this accuracy using only 10 features with the highest information gain, the decision tree classifier needed 100 features to achieve comparable accuracy and its AUC was significantly lower (66.4% vs. 77.1%).", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rW5Iwg97dgw", "year": null, "venue": "EANN (1) 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=rW5Iwg97dgw", "arxiv_id": null, "doi": null }
{ "title": "SCH-EGA: An Efficient Hybrid Algorithm for the Frequency Assignment Problem", "authors": [ "Shaohui Wu", "Gang Yang", "Jieping Xu", "Xirong Li" ], "abstract": "This paper proposes a hybrid stochastic competitive Hopfield neural network-efficient genetic algorithm (SCH-EGA) approach to tackle the frequency assignment problem (FAP). The objective of FAP is to minimize the cochannel interference between satellite communication systems by rearranging the frequency assignments so that they can accommodate the increasing demands. In fact, as the SCH-EGA algorithm has good adaptability, it can deal not only with the frequency assignment problem but also with problems such as clustering, classification, and the maximum clique problem. In this paper, we first propose five optimal strategies to build an efficient genetic algorithm (EGA), which is the component of our hybrid algorithm. Then we explore different hybridizations between the Hopfield neural network and EGA. With the help of hybridization, SCH-EGA makes up for the defects in the Hopfield neural network and EGA while fully exploiting the advantages of the two algorithms.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "H0X6MDYw-j_", "year": null, "venue": "EANN (1) 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=H0X6MDYw-j_", "arxiv_id": null, "doi": null }
{ "title": "SCH-EGA: An Efficient Hybrid Algorithm for the Frequency Assignment Problem", "authors": [ "Shaohui Wu", "Gang Yang", "Jieping Xu", "Xirong Li" ], "abstract": "This paper proposes a hybrid stochastic competitive Hopfield neural network-efficient genetic algorithm (SCH-EGA) approach to tackle the frequency assignment problem (FAP). The objective of FAP is to minimize the cochannel interference between satellite communication systems by rearranging the frequency assignments so that they can accommodate the increasing demands. In fact, as the SCH-EGA algorithm has good adaptability, it can deal not only with the frequency assignment problem but also with problems such as clustering, classification, and the maximum clique problem. In this paper, we first propose five optimal strategies to build an efficient genetic algorithm (EGA), which is the component of our hybrid algorithm. Then we explore different hybridizations between the Hopfield neural network and EGA. With the help of hybridization, SCH-EGA makes up for the defects in the Hopfield neural network and EGA while fully exploiting the advantages of the two algorithms.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "adXd_elcCv", "year": null, "venue": "EANN Workshops 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=adXd_elcCv", "arxiv_id": null, "doi": null }
{ "title": "Detecting Irony on Greek Political Tweets: A Text Mining Approach", "authors": [ "Basilis Charalampakis", "Dimitris Spathis", "Elias Kouslis", "Katia Kermanidis" ], "abstract": "The present work describes the classification schema for irony detection in Greek political tweets. The proposed approach relies on limited labeled training data, and its performance on a larger unlabeled dataset is evaluated qualitatively (implicitly) via a correlation study between the irony that a party receives on Twitter, its respective actual election results during the Greek parliamentary elections of May 2012, and the difference between these results and the ones of the preceding elections of 2009. The machine learning results on the labeled dataset were highly encouraging and uncovered a trend whereby the volume of ironic tweets can predict the fluctuation from previous elections.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "A6pYK9FhVJ1", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=A6pYK9FhVJ1", "arxiv_id": null, "doi": null }
{ "title": "Interesting paper blending MPC with RL, limited by experimental evaluation.", "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_KBOtZlIrFf", "year": null, "venue": "ECIR 2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=_KBOtZlIrFf", "arxiv_id": null, "doi": null }
{ "title": "QweetFinder: Real-Time Finding and Filtering of Question Tweets", "authors": [ "Ameer Albahem", "Maram Hasanain", "Marwan Torki", "Tamer Elsayed" ], "abstract": "Users continuously ask questions and seek answers in social media platforms such as Twitter. In this demo, we present QweetFinder, a Web-based search engine that facilitates finding question tweets (Qweets) in Twitter. QweetFinder listens to Twitter live stream and continuously identifies and indexes tweets that are answer-seeking. QweetFinder also allows users to save queries of long-term interest and pushes real-time qweet matches of saved queries to them via e-mail.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "GZMHOEJoEu9", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=GZMHOEJoEu9", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "0E_qe5KGHMc", "year": null, "venue": "IEICE Trans. Inf. Syst. 2019", "pdf_link": "https://www.jstage.jst.go.jp/article/transinf/E102.D/3/E102.D_2018EDL8199/_pdf", "forum_link": "https://openreview.net/forum?id=0E_qe5KGHMc", "arxiv_id": null, "doi": null }
{ "title": "Eager Memory Management for In-Memory Data Analytics", "authors": [ "Hakbeom Jang", "Jonghyun Bae", "Tae Jun Ham", "Jae W. Lee" ], "abstract": "This paper introduces e-spill, an eager spill mechanism, which dynamically finds the optimal spill-threshold by monitoring the GC time at runtime and …", "keywords": [], "raw_extracted_content": "632IEICE TRANS. INF. & SYST., VOL.E102–D, NO.3 MARCH 2019\nLETTER\nEager Memory Management for In-Memory Data Analytics∗\nHakbeom JANG†a),Student Member , Jonghyun BAE††, Tae Jun HAM††,Nonmembers ,\nandJae W. LEE††,Member\nSUMMARY This paper introduces e-spill , an eager spill mechanism,\nwhich dynamically finds the optimal spill-threshold by monitoring the GCtime at runtime and thereby prevent expensive GC overhead. Our e-spill\nadopts a slow-start model to gradually increase the spill-threshold until itreaches the optimal point without substantial GCs. We prototype e-spill as\nan extension to Spark and evaluate it using six workloads on three di ffer-\nent parallel platforms. Our evaluations show that e-spill improves perfor-\nm a n c eb yu pt o3 . 8 0 ×and saves the cost of cluster operation on Amazon\nEC2 cloud by up to 51% over the baseline system following Spark TuningGuidelines.key words: in-memory computing, spark, garbage collection, data spill\n1. Introduction\nModern in-memory data analytic frameworks such as\nApache Spark [1]and Ignite [2]are rapidly gaining popu-\nlarity with their ability to provide orders of magnitude per-formance improvements over Hadoop MapReduce [3]on\nworkloads with frequent data reuse (e.g., iterative algo-\nrithms). However, this performance gain is reduced whenthe memory footprint exceeds an available memory size. 
Forexample, in case of Spark, such scenario can lead to a signif-\nicant amount of garbage collection (GC) operations which\ncan account for nearly 50% [4]of an execution time, thus\nincurring more than a 2x system slowdown.\nPrevious proposals address this challenge by i) adjust-\ning the working set size by tuning task granularity and paral-\nlelism [5]or ii) moving large objects to outside the heap (i.e.,\nJVM heap) [6]. In addition, the conventional in-memory\nprocessing systems provide spill-mechanism that serializes\nthe partially created data and writes it to the local disk. Eachrunning task (i.e., thread) estimates the size of objects cre-\nated at runtime and triggers a spill operation when the esti-\nmated volume of the task reaches a certain spill-threshold,thus avoiding expensive GC overhead (especially for major\nManuscript received September 13, 2018.\nManuscript revised November 7, 2018.\nManuscript publicized December 11, 2018.\n†The author is with the College of Information and Communi-\ncation Engineering, Sungkyunkwan University, Suwon, Korea.\n††The authors are with the Dept. of Computer Science and En-\ngineering, Seoul National University, Seoul, Korea.\n∗This work was supported by a research grant from Samsung\nElectronics, by IDEC (EDA tool), and by Institute for Information\n& communications Technology Promotion (IITP) grant funded bythe Korea government (MSIT) (No. B0101-17-0644, Research on\nHigh Performance and Scalable Manycore OS).\na) E-mail: [email protected]\nDOI: 10.1587/transinf.2018EDL8199GC). This feature not only alleviates the memory pressure\nbut also improves the performance of system by avoidingtime-consuming GC operations.\nHowever, this benefit is not always ensured. One crit-\nical issue is that the actual size of objects in Java Virtual\nMachine (JVM) does notmatch the estimates used by the\nupper-layer in-memory processing frameworks. 
To confirmthis point, we run a standalone Spark and measure an execu-\ntion time across varying spill-thresholds on a 4-node homo-\ngeneous cluster. Figure 1 shows an execution time break-down of the reduce stage in Intel HiBench TeraSort work-load. By default, in Spark 2.1.0, the spill-threshold is setto 60% of the JVM heap size (equal to Spark memory).\nHowever, as shown in the figure, the run with the default\nspill-threshold (the leftmost bar) still incurs considerableGC time. Towards the right, as the spill-threshold decreases,the GC time quickly reduces due to the reduced memorypressure. On the other hand, a frequent spill also means that\neach task needs to unnecessarily spill more data, resulting\nin more compute times. In this case, default spill-threshold×1/16 reaches its optimal point in terms of the total task ex-\necution time.\nThis work introduces e-spill , an eager spill mechanism,\nwhich dynamically finds the optimal spill-threshold by mon-\nitoring the GC time at runtime and thereby preventing ex-pensive GC operations. The proposed e-spill adopts a slow-\nstart model to gradually increase the spill-threshold until it\nreaches the optimal point without substantial GCs. We pro-\ntotype e-spill as an extension to Spark and evaluate it us-\ning six workloads on three di fferent platforms: (1) a 4-node\nhomogeneous cluster with 64 fat cores (Intel Xeon), (2) asingle-node Intel Knights Landing (KNL) machine with 64\nthin cores (Intel Xeon Phi), and (3) a virtualized 64-node\nSpark cluster on Amazon EC2 with 256 fat cores (IntelXeon). The proposed e-spill achieves a geomean speedup\nFig. 1 GC overhead when varying spill threshold\nCopyright c/circlecopyrt2019 The Institute of Electronics, Information and Communication Engineers\nLETTER\n633\nof 1.71×on a 4-node homogeneous cluster and 1.36 ×on a\nsingle-node KNL machine. 
Furthermore, e-spill achieve a\ngeomean speedup of 1.30 ×and reduces the operating cost\nby 23% on a virtualized 64-node cluster.\nOur contributions can be summarized as follows:\n•Analysis of the spill mechanism of Apache Spark [1],a\npopular in-memory data analytic framework, as a knob\nto control GC overhead\n•Design and implementation of e-spill on Spark, which\ndynamically finds the optimal spill-threshold by moni-toring the GC time at runtime and avoids excessive GC\noperations\n•Detailed evaluation and analysis of e-spill performance\non three different parallel platforms\n2.e-spill : Eager Spill Mechanism\nThe proposed e-spill is a low-cost runtime framework that\nfinds the optimal spill-threshold for in-memory processingframeworks, to provide robust performance for various plat-forms without requiring workload-dependent information.\n2.1 Background and OverviewData spill is the process of storing partially created interme-\ndiate result to a local disk during task execution to preventexpensive GCs and OutOfMemoryError s resulting from the\nlack of Spark execution memory. A user program in Spark\nis described as a sequence of operations on Resilient Dis-tributed Datasets (RDDs), which are the primary data ab-straction for Spark. A spill operation can occur if all datahas to be collected in a single bu ffer to create a shufflefi l e ,\nor if multiple RDD partitions need to temporally store inter-\nmediate results to generate one RDD (e.g., Join andZip).\nInitially, Spark allocates a small bu ffer (e.g., 5MB) to store\nthe intermediate result. When the size of the bu ffer becomes\ninsufficient, it doubles the size of the bu ffer. The maxi-\nmum size that the bu ffer can reach is the total Spark exe-\ncution memory divided by the number of currently runningSpark worker cores. When a spill operation occurs, it se-rializes key-value pairs stored in the bu ffer one by one and\nflushes them to a spill file in a certain batch unit (default:\n10000 key-value pairs). 
After all key-values pairs are writ-\nten, Spark manages the spill file as a list and allocates a newbuffer for the remaining key-value operations. This process\nis repeated until all the key-values of the partition are com-\nputed. After all the key-value operations are complete, the\nintermediate result stored in the spill file is merged with theremaining results in memory. The final result is stored in theshuffle file or is passed to the next operation.\nIdeally, it is possible to avoid time-consuming ma-\njor GCs by spilling objects which are located in the old-\ngeneration heap space of JVM before the heap is full. How-ever, in reality, the estimator often mis-predicts the volumeof the task (i.e., key-value bu ffer) and thus cannot e ffectively\navoid such major GCs. From JVM’s perspective, the spilled\nFig. 2 e-spill overview\nAlgorithm 1 e-spill runtime\nINPUT: JVM heap size, # of available CPU cores, spill time, GC time\nOUTPUT: Spill-threshold\n1:while Key-value iterator has next item do\n2: ifFirst key-value pair is inserted then\n3: Estimate size of first key-value pair4: Set initial spillThreshold∝\n5: JVM heap size /# of Spark threads /key-value size\n6: end if\n7: Insert key-value pair in temporal bu ffer\n8: if# of key-value pair >spillThreshold then\n9: Spill occurred\n10: ifgcTime >spillTime *0 . 1 then\n11: Update spillThreshold/2\n12: else\n13: Update spillThreshold *1 . 2 5\n14: end if\n15: end if\n16:end while\nobject is located in the old-generation heap space of JVM\nand this object is not freed until the entire spill process is\ncompleted. Meanwhile, Spark continues to generate smallheap objects, which occasionally get promoted to the old-generation heap, and requests extra space there. 
This trig-\ngers frequent major GCs [5].\nThe proposed e-spill mainly aims to avoid frequent ma-\njor GCs caused by spill operation due to the mis-prediction.Figure 2 shows the overall structure of e-spill , which extends\nthe existing Spark’s spill-mechanism shaded in gray. The e-\nspillruntime collects the spill time and GC time during task\nexecution to monitor the time spent on GC caused by spilloperations. To realize this, e-spill maintains a feedback loop\nbetween the master node and worker nodes. Finally, e-spill\ndetermines whether to increase or decrease a spill-threshold\nbased on the ratio of spill time toGC time .\n2.2 Runtime Algorithm\nThe proposed e-spill runtime starts with a calibration phase\nin which a slow-start model gradually increases the spill-threshold to find the optimal threshold without GCs. Algo-rithm 1 shows the e-spill runtime algorithm. After estimat-\ning the first <K,V>pair size, this algorithm determines an\n634IEICE TRANS. INF. & SYST., VOL.E102–D, NO.3 MARCH 2019\nFig. 3 Normalized speedups on two platforms: (a) native 4-node cluster (b) manycore platform\nTable 1 Setup for three evaluation platforms\nNative clusterKnights Landing\n(KNL)Amazon\nEC2 cluster\nCPUIntel Xeon E5-2640v3\n×2 socketsIntel Xeon Phi 7210Intel Xeon\nE5-2676v4\nMemory 16GB DDR4×816GB MCDRAM &\n32GB DDR4×616GB\nDisk NVMe SSD 1.6TB NVMe SSD 1.6TB EBS 100GB\nNetwork40Gbps\nInfiniBand1GbpsEthernet 10GbpsEthernet\nTable 2 Workload characteristics\nWorkloadsData Size\nCluster KNL AWS\nTeraSort 128GB 80GB 1TB\nPageRank pages: 25M pages: 15M pages: 100M\nSort 128GB 80GB 1TB\nWordCount 240GB 128GB 5TB\nBayes\nclassificationpages: 20M\nclasses: 20Kpages: 10M\nclasses: 10Kpages: 160M\nclasses: 160K\nKmeans samples: 200M samples: 150M samples: 400M\ninitial spill-threshold based on a given JVM heap size and\nthe number of Spark worker cores. As the task executes, the\ne-spill gradually increases the spill-threshold until excessive\nGCs does not occur. 
If a spill occurs, the threshold is halvedif the ratio of the time spent on spill to GC is greater than10%. Otherwise, if the time spent on GC is short enough(less than 10% of the time spent on spill), e-spill increases\nthe spill-threshold to 1.25 ×the current threshold to optimize\nmemory usage. After a round of operation is complete, theoptimal spill-threshold found in the previous round is uti-lized in the next round to avoid redundancy. Since the char-acteristics of functions (transformations) are di fferent across\ncomputation phases (i.e., stages), e-spill does not reuse the\nspill threshold across di fferent stages and searches a new\nspill-threshold from scratch.\n3. MethodologyWe run workloads from Intel HiBench 5.0 [7]on three dif-\nferent parallel processing platforms as shown in Table 1. Ta-\nble 2 summarizes inputs for the six applications. We com-pare e-spill to two designs:\n•Baseline: This is a vanilla Spark following Spark Tun-\ning Guidelines [8]. Each worker node has four Spark\nexecutors, and each executor runs 4 threads with a 5GB\nmemory. In the KNL platform, we use four executors\nand each executor runs 16 threads with a 20GB mem-ory. Lastly, in the virtualized cluster platform, eachexecutor runs four threads with a 10GB memory.\n•Static Optimal: We perform an exhaustive search tofind the best-performing partition count for each stage.\nSuch optimized partition count does not incur neither\nspills nor GCs. This number serves as the (impractical)theoretical maximum performance.\n4. Evaluation\n4.1 Program Speedups\nOn 4-node Homogeneous Cluster. Figure 3 (a) compares\nthe speedups of two designs over the baseline configura-tion. The proposed e-spill achieves a geomean speedup of\n1.71×, with a maximum speedup of 3.80 ×. More impor-\ntantly, e-spill achieves the robust performance comparable\nto the static optimal configuration.\nThe first three applications in Fig. 
3 (a) are shu ffle-\nheavy workloads, which frequently trigger major GCs.TeraSort and Sort use sortByKey transformation, which re-\nsults in the shuffle of the large data (i.e, entire RDD) be-\ntween tasks. PageRank is an iterative algorithm that joinsthe intermediate output and a cached RDD and thus havea large memory footprint. In general, shu ffle-heavy work-\nloads utilize a large amount of memory. The primary source\nof performance improvement in e-spill is the reduced over-\nhead of expensive GC operations. Figure 4 shows a taskexecution time breakdown of the map and reduce stage forTeraSort (shuffle-heavy). The proposed e-spill reduces the\ntime spent on GC in TeraSort by 91% with only a modest\nincrease in spill time.\nThe remaining three applications are classified as\nshuffle-light workloads. Their memory footprints are rela-\ntively small since these applications trigger the shu ffle dur-\ning a summary transformation, such as reduceByKey .F o r\nWordCount and Bayes, e-spill achieves speedup of 1.15 ×\nand 1.25×, respectively. In case of Kmeans, the baseline\nperforms well on the cluster and thus all designs show sim-ilar performance. Overall, e-spill obtains the performance\nclose to the performance of the static optimal configuration.\nOn Single-Node Manycore Machine. Figure 3 (b) shows\nthe speedups of e-spill on a 64-core Intel KNL platform. For\nthis platform, we use smaller inputs to reduce task failures,which occur frequently with the original input size as shown\nin Table 2. Furthermore, since several programs do not run\nto completion with the baseline configuration [8],w eu s ea\nslightly-tuned baseline configuration (i.e., increased numberof partitions) which allows the workloads to complete with-out experiencing significant major GC overhead. Since the\nLETTER\n635\nFig. 4 Execution time breakdown for TeraSort\nFig. 
Fig. 5 Normalized speedups on cloud for a 64-node Amazon EC2 cluster\n
Since the input size and the baseline configuration are different from those of the 4-node homogeneous cluster, the static optimal trend is different in this configuration. Overall, e-spill also works well on the single-node manycore system with 64 thin cores for all workloads, with a geomean speedup of 1.36× and a maximum speedup of 2.68×.\n
On 64-node Amazon EC2 Cluster. We evaluate e-spill on a 64-node Amazon EC2 cluster with 256 fat cores to confirm its robustness on a large-scale cluster. Figure 5 shows the performance improvements and cost reductions from e-spill on a 64-node cluster. The result shows that e-spill achieves a geomean speedup of 1.30× and reduces the operating cost by 23%. Note that the operating cost is the financial cost of running the Amazon EC2 cluster for the required execution time. To calculate the operating cost, we compute the cost of running the 64 m4.xlarge instances. Each node costs $0.246/h for compute and $0.016/h for storage. We multiply these values by the execution time to obtain the cost.\n
4.2 Performance Analysis\n
Comparison with WASP scheduler. We compare e-spill with WASP, a state-of-the-art Spark task scheduler [5]. WASP jointly optimizes both task granularity and parallelism based on workload characteristics. WASP requires calculating the memory amplification factor (MAF) of each transformation function from various workloads to predict the memory usage of each stage, with the goal of avoiding GC overheads. Figure 6 shows the speedup of WASP and e-spill on a 64-node virtualized Spark cluster with three shuffle-heavy workloads. Unlike WASP, e-spill does not require any offline profiling and outperforms WASP by up to 1.24×. The main source of improvement of e-spill is the reduced shuffle overhead. For example, we observed that WASP increases the number of partitions to 512-16384 between the map and reduce stages in TeraSort. Such an increase (i.e., a single partition gets smaller) leads to a reduction in memory usage, and thus can eliminate GC overheads. However, such an increase in the number of partitions also incurs substantial overhead for shuffle operations [9]. Since e-spill does not increase the number of partitions, e-spill achieves more robust performance than WASP for shuffle-heavy workloads.\n
Fig. 6 WASP scheduler [5] vs. e-spill\n
Fig. 7 Speedups across varying ratio of the time spent on spill to GC. All numbers are normalized to the baseline.\n
Sensitivity on spill/GC ratio. Figure 7 shows the average speedups of three shuffle-heavy workloads across a varying ratio of the time spent on spill to GC. Our experiment demonstrates that 10% is the optimal value.\n
5. Conclusion\n
This paper introduces e-spill, an eager spill mechanism, which dynamically finds the optimal spill threshold by monitoring GC time at runtime. Our e-spill achieves robust performance on three different parallel platforms without requiring any workload-dependent tuning parameters. Our evaluation on Spark shows that e-spill achieves a geomean speedup of 1.71× on a 4-node homogeneous cluster and 1.36× on a single-node KNL machine. Furthermore, e-spill achieves a geomean speedup of 1.30× and can reduce the operating cost of a virtualized 64-node cluster by 23%.\n
References\n
[1] “Apache Spark.” http://spark.apache.org/.\n
[2] “Apache Ignite.” https://ignite.apache.org/.\n
[3] “Apache Hadoop.” http://hadoop.apache.org/.\n
[4] K. Nguyen, L. Fang, G. Xu, B. Demsky, S. Lu, S. Alamian, and O. Mutlu, “Yak: A high-performance big-data-friendly garbage collector,” Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI ’16, pp.349–365, 2016.\n
[5] J. Bae, H. Jang, W. Jin, J. Heo, J. Jang, J.-Y. Hwang, S. Cho, and J.W. Lee, “Jointly optimizing task granularity and concurrency for in-memory mapreduce frameworks,” 2017 IEEE International Conference on Big Data (Big Data), pp.130–140, Dec. 2017.\n
[6] K. Nguyen, K. Wang, Y. Bu, L. Fang, J. Hu, and G. Xu, “Facade: A compiler and runtime for (almost) object-bounded big data applications,” Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS ’15, New York, NY, USA, pp.675–690, ACM, 2015.\n
[7] “Intel HiBench.” https://github.com/intel-hadoop/HiBench.\n
[8] “Apache Spark: Tuning Spark.” http://spark.apache.org/docs/latest/tuning.html.\n
[9] H. Zhang, B. Cho, E. Seyfe, A. Ching, and M.J. Freedman, “Riffle: Optimized shuffle service for large-scale data analytics,” Proceedings of the Thirteenth EuroSys Conference, EuroSys ’18, New York, NY, USA, pp.43:1–43:15, ACM, 2018.\n
IEICE TRANS. INF. & SYST., VOL.E102-D, NO.3, MARCH 2019, p.636
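The operating-cost arithmetic described above (64 m4.xlarge nodes, each billed $0.246/h for compute plus $0.016/h for storage, multiplied by execution time) can be sketched in a few lines. The node count and hourly rates come from the text; the 2.0 h baseline execution time below is a hypothetical placeholder, not a measured value.

```python
# Sketch of the operating-cost calculation for the 64-node EC2 cluster.
# Rates come from the text ($0.246/h compute, $0.016/h storage per node);
# the baseline execution time below is a hypothetical placeholder.

NODES = 64
COMPUTE_RATE = 0.246  # $/h per m4.xlarge instance
STORAGE_RATE = 0.016  # $/h per node for storage

def operating_cost(execution_hours):
    """Financial cost of running the whole cluster for the given time."""
    return NODES * (COMPUTE_RATE + STORAGE_RATE) * execution_hours

# A 1.30x geomean speedup shortens a hypothetical 2.0 h baseline run.
baseline_h = 2.0
espill_h = baseline_h / 1.30
saving = 1.0 - operating_cost(espill_h) / operating_cost(baseline_h)
print(f"baseline ${operating_cost(baseline_h):.2f} -> "
      f"e-spill ${operating_cost(espill_h):.2f} ({saving:.0%} saved)")
```

Because both costs share the constant factor NODES × (compute + storage rate), the saving reduces to 1 − 1/speedup ≈ 23% for a 1.30× speedup, consistent with the reported figure regardless of the assumed baseline duration.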
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "9uWKcBlMcuU", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=9uWKcBlMcuU", "arxiv_id": null, "doi": null }
{ "title": "The empirical results are promising, but the approach is not clearly motivated.", "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "U0Lll9DN0m", "year": null, "venue": "e-Energy 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=U0Lll9DN0m", "arxiv_id": null, "doi": null }
{ "title": "UrJar: A Device to Address Energy Poverty Using E-Waste", "authors": [ "Vikas Chandan", "Mohit Jain", "Harshad Khadilkar", "Zainul Charbiwala", "Anupam Jain", "Sunil Kumar Ghai", "Rajesh Kunnath", "Deva P. Seetharam" ], "abstract": "A significant portion of the population in India does not have access to reliable electricity. At the same time, there is a rapid penetration of Lithium Ion battery-operated devices such as laptops, both in the developing and developed world. This generates a significant amount of electronic waste (e-waste), especially in the form of discarded Lithium Ion batteries. In this work, we present UrJar, a device which uses re-usable Lithium Ion cells from discarded laptop battery packs to power low energy DC devices. We describe the construction of the device followed by findings from field deployment studies in India. The participants appreciated the long duration of backup power provided by the device to meet their lighting requirements. Through our work, we show that UrJar has the potential to channel e-waste towards the alleviation of energy poverty, thus simultaneously providing a sustainable solution for both problems. More details of this work are provided in [3].", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "0X8oeTgaf8G", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=0X8oeTgaf8G", "arxiv_id": null, "doi": null }
{ "title": "A valuable dataset for multi-modal E-commerce research", "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "umyThMd0-za", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=umyThMd0-za", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "IuUJpNzsi1q", "year": null, "venue": "ECIR 2012", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=IuUJpNzsi1q", "arxiv_id": null, "doi": null }
{ "title": "Retro: Time-Based Exploration of Product Reviews", "authors": [ "Jannik Strötgen", "Omar Alonso", "Michael Gertz" ], "abstract": "Most e-commerce websites organize and present product reviews around ratings with hardly any feature to view them in a time-oriented way. Often, there is a way to sort reviews by time but no further temporal analysis is possible. Thus, usually, only few reviews are part of a user’s review analysis process, and there is no way to analyze all reviews of a product collectively. In this paper, we describe Retro, a search engine for exploring product reviews using temporal information.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lcQPOlM_4Ow", "year": null, "venue": "EACL 1993", "pdf_link": "https://aclanthology.org/E93-1057.pdf", "forum_link": "https://openreview.net/forum?id=lcQPOlM_4Ow", "arxiv_id": null, "doi": null }
{ "title": "A Morphological Analysis Based Method for Spelling Correction", "authors": [ "Itziar Aduriz", "Eneko Agirre", "Iñaki Alegria", "Xabier Arregi", "Jose Maria Arriola", "Xabier Artola", "Arantza Díaz de Ilarraza Sánchez", "Nerea Ezeiza", "Montse Maritxalar", "Kepa Sarasola", "Miriam Urkia" ], "abstract": "I. Aduriz, E. Agirre, I. Alegria, X. Arregi, J.M. Arriola, X. Artola, A. Diaz de Ilarraza, N. Ezeiza, M. Maritxalar, K. Sarasola, M. Urkia. Sixth Conference of the European Chapter of the Association for Computational Linguistics. 1993.", "keywords": [], "raw_extracted_content": "A Morphological Analysis Based Method for Spelling Correction\n
Aduriz I., Agirre E., Alegria I., Arregi X., Arriola J.M., Artola X., Diaz de Ilarraza A., Ezeiza N., Maritxalar M., Sarasola K., Urkia M.(*)\n
Informatika Fakultatea, Basque Country University. P.K. 649. 20080 DONOSTIA (Basque Country)\n
(*) U.Z.E.I. Aldapeta, 20. 20009 DONOSTIA (Basque Country)\n
1 Introduction\n
Xuxen is a spelling checker/corrector for Basque which is going to be commercialized next year. The checker recognizes a word-form if a correct morphological breakdown is allowed. The morphological analysis is based on two-level morphology.\n
The correction method distinguishes between orthographic errors and typographical errors.\n
• Typographical errors (or mistypings) are non-cognitive errors which do not follow linguistic criteria.\n
• Orthographic errors are cognitive errors which occur when the writer does not know or has forgotten the correct spelling for a word. They are more persistent because of their cognitive nature, they leave a worse impression and, finally, their treatment is an interesting application for language standardization purposes.\n
2 Correction Method in Xuxen\n
The main problems found in designing the checking/correction strategy were:\n
• Due to the high level of inflection of Basque, it is impossible to store every word-form in a dictionary; therefore, the mainstream checking/correction methods were not suitable.\n
• Because of the recent standardization and widespread dialectal use of Basque, orthographic errors are more likely and therefore their treatment becomes critical.\n
• The word-forms which are generated without linguistic knowledge must be fed into the spelling checker to check whether they are correct or not.\n
In order to face these issues the strategy used is basically the following (see also Figure 1).\n
Figure 1 - Correcting strategy in Xuxen\n
Handling orthographic errors\n
The treatment of orthographic errors is based on the parallel use of a two-level subsystem designed to detect misspellings previously typified. This subsystem has two main components:\n
• Additional two-level rules describing the most likely changes that are produced in the orthographic errors. Twenty-five new rules have been defined to cover the most common orthographic errors. For instance, the rule h:0 => V:V _ V:V describes that between vowels the h of the lexical level may disappear in the surface. In this way bear, a typical misspelling of behar (to need), will be detected and corrected.\n
• Additional morphemes linked to the corresponding correct ones. They describe particular errors, mainly dialectal forms. Thus, using the new entry tikan, a dialectal form of the ablative singular, the system is able to detect and correct word-forms such as etxetikan, kaletikan, ... (variants of etxetik (from the home), kaletik (from the street), ...)\n
When a word-form is not accepted by the checker, the orthographic error subsystem is added and the system retries the morphological checking. If the incorrect form can be recognized now, (1) the correct lexical level form is directly obtained and, (2) as the two-level system is bidirectional, the corrected surface form will be generated from the lexical form.\n
For example, the complete correction process of the word-form beartzetikan (from the need) would be the following:\n
beartzetikan\n
↓ (1)\n
behar tze tikan(tik)\n
↓ (2)\n
behartzetik\n
Handling typographical errors\n
The treatment of typographical errors is quite conventional and performs the following steps:\n
• Generating proposals for typographical errors using Damerau's classification.\n
• Trigram analysis. Proposals with trigrams below a certain probability threshold are discarded, while the rest are ranked in order of trigram probability.\n
• Spelling checking of proposals.\n
To speed up this treatment the following techniques have been used:\n
• If during the original morphological checking of the misspelled word a correct morpheme has been found, the criteria of Damerau are applied only to the unrecognized part. Moreover, on entering the proposals into the checker, the analysis starts from the state it was at the end of the last recognized morpheme.\n
• The number of proposals is also limited by filtering out the words containing very low frequency trigrams.", "main_paper_content": null }
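The typographical-error treatment described above (Damerau-style proposal generation followed by trigram filtering and ranking) can be sketched as follows. This is an illustrative reconstruction, not Xuxen's actual code: the alphabet is a toy subset of letters, the trigram model is a plain frequency dictionary, and the real system additionally runs surviving proposals through the morphological checker.

```python
# Illustrative sketch of the typographical-error pipeline: generate all
# candidates one Damerau edit away, discard proposals whose trigram
# probability falls below a threshold, and rank the rest by probability.
from math import prod

ALPHABET = "abdeghiklnorstuxz"  # toy alphabet for the examples below

def damerau_candidates(word):
    """All strings one Damerau edit away: deletion, insertion,
    substitution, or transposition of adjacent characters."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    transposes = {l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1}
    substitutes = {l + c + r[1:] for l, r in splits if r for c in ALPHABET}
    inserts = {l + c + r for l, r in splits for c in ALPHABET}
    return (deletes | transposes | substitutes | inserts) - {word}

def trigram_prob(word, trigram_counts, total):
    """Product of the probabilities of the word's trigrams ('#' pads ends)."""
    padded = "##" + word + "##"
    grams = [padded[i:i + 3] for i in range(len(padded) - 2)]
    return prod(trigram_counts.get(g, 0) / total for g in grams)

def rank_proposals(word, trigram_counts, total, threshold=0.0):
    """Candidates above the trigram threshold, best-scoring first."""
    scored = ((trigram_prob(c, trigram_counts, total), c)
              for c in damerau_candidates(word))
    return [c for p, c in sorted(scored, reverse=True) if p > threshold]

# Toy trigram model extracted from the single correct form 'behar':
counts = {"##b": 1, "#be": 1, "beh": 1, "eha": 1, "har": 1, "ar#": 1, "r##": 1}
print(rank_proposals("bear", counts, total=7))  # -> ['behar']
```

With this toy model, the misspelling bear yields the single surviving proposal behar (the h-insertion case from the paper); every other candidate contains an unseen trigram and is filtered out, mirroring the low-frequency-trigram pruning described in the text.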
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ma5oAs7azyo", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=ma5oAs7azyo", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LiqQmB3nn7V", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=LiqQmB3nn7V", "arxiv_id": null, "doi": null }
{ "title": null, "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "yTL4din8RTcu", "year": null, "venue": "Mashups/Dataview@ECOWS 2011", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=yTL4din8RTcu", "arxiv_id": null, "doi": null }
{ "title": "Query splitting techniques and search service recommendation for multi-domain natural language queries", "authors": [ "Alessandro Bozzon", "Marco Brambilla" ], "abstract": "Current general purpose search engines are not able to address multi-domain queries, i.e., queries that span over several domains of interest. In this paper we address the general problem of natural language multi-domain search queries, seen as the first step of the information exploration process. We propose an approach that splits the queries into domain-specific subqueries and suggests a vertical search engine API to be invoked. The user may be asked for the missing details needed to invoke the API, and a first result set can be retrieved (and then explored).", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "WtNzjuJLcOM", "year": null, "venue": "Mashups/Dataview@ECOWS 2011", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=WtNzjuJLcOM", "arxiv_id": null, "doi": null }
{ "title": "Web-based multi-view visualizations for aggregated statistics", "authors": [ "Daniel Hienert", "Benjamin Zapilko", "Philipp Schaer", "Brigitte Mathiak" ], "abstract": "With the rise of the open data movement, a lot of statistical data has been made publicly available by governments, statistical offices and other organizations. First efforts to visualize the data are made by the data providers themselves. Data aggregators go a step beyond: they collect data from different open data repositories and make them comparable by providing data sets from different providers and showing different statistics in the same chart. Another approach is to visualize two different indicators in a scatter plot or on a map. The integration of several data sets in one graph can have several drawbacks: different scales and units are mixed, the graph gets visually cluttered, and one cannot easily distinguish between different indicators. Our approach offers a combination of (1) the integration of live data from different data sources, (2) presenting different indicators in coordinated visualizations, and (3) allowing users to add their own visualizations to enrich official statistics with personal data. Each indicator gets its own visualization, which fits the individual indicator best in terms of visualization type, scale, unit, etc. The different visualizations are linked, so that related items can easily be identified by using mouse-over effects on data items.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_CMmj2UrN4l", "year": null, "venue": "Mashups/Dataview@ECOWS 2011", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=_CMmj2UrN4l", "arxiv_id": null, "doi": null }
{ "title": "ToMaTo: a trustworthy code mashup development tool", "authors": [ "Jian Chang", "Krishna K. Venkatasubramanian", "Andrew G. West", "Sampath Kannan", "Oleg Sokolsky", "Myuhng Joo Kim", "Insup Lee" ], "abstract": "Recent years have seen the emergence of a new programming paradigm for Web applications that emphasizes the reuse of external content, the mashup. Although the mashup paradigm enables the creation of innovative Web applications with emergent features, its openness introduces trust problems. These trust issues are particularly prominent in JavaScript code mashup -- a type of mashup that integrates external JavaScript libraries to achieve function and software reuse. With JavaScript code mashup, external libraries are usually given full privileges to manipulate data of the mashup application and execute arbitrary code. This imposes considerable risk on both the mashup developers and the end users. One major cause of these trust problems is that the mashup developers tend to focus on the functional aspects of the application and implicitly trust the external code libraries to satisfy security, privacy and other non-functional requirements. In this paper, we present ToMaTo, a development tool that combines a novel trust policy language and a static code analysis engine to examine whether the external libraries satisfy the non-functional requirements. ToMaTo gives the mashup developers three essential capabilities for building trustworthy JavaScript code mashup: (1) to specify trust policy, (2) to assess policy adherence, and (3) to handle policy violation. The contributions of the paper are: (1) a description of JavaScript code mashup and its trust issues, and (2) a development tool (ToMaTo) for building trustworthy JavaScript code mashup.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "hc2e7kWQhfE", "year": null, "venue": "ECML 1998", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=hc2e7kWQhfE", "arxiv_id": null, "doi": null }
{ "title": "Pruning Decision Trees with Misclassification Costs", "authors": [ "Jeffrey P. Bradford", "Clayton Kunz", "Ron Kohavi", "Clifford Brunk", "Carla E. Brodley" ], "abstract": "We describe an experimental study of pruning methods for decision tree classifiers when the goal is minimizing loss rather than error. In addition to two common methods for error minimization, CART's cost-complexity pruning and C4.5's error-based pruning, we study the extension of cost-complexity pruning to loss and one pruning variant based on the Laplace correction. We perform an empirical comparison of these methods and evaluate them with respect to loss. We found that applying the Laplace correction to estimate the probability distributions at the leaves was beneficial to all pruning methods. Unlike in error minimization, and somewhat surprisingly, performing no pruning led to results that were on par with other methods in terms of the evaluation criteria. The main advantage of pruning was in the reduction of the decision tree size, sometimes by a factor of ten. While no method dominated others on all datasets, even for the same domain different pruning mechanisms are better for different loss matrices.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "l50QMlxlDwc", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=l50QMlxlDwc", "arxiv_id": null, "doi": null }
{ "title": "A Multimodal Advertising Generation Dataset", "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "WtJqSvBiwGjV", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=WtJqSvBiwGjV", "arxiv_id": null, "doi": null }
{ "title": "Machine learning and SLIC for Tree Canopies segmentation in urban areas", "authors": [ "José Augusto Correa Martins", "Geazy Menezes", "Wesley Nunes Gonçalves", "Diego André Sant'Ana", "Lucas Prado Osco", "Veraldo Liesenberg", "Jonathan Li", "Lingfei Ma", "Paulo Tarso Sanches de Oliveira", "Gilberto Astolfi", "Hemerson Pistori", "José Marcato Junior" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "l4lmjzj7NV", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=l4lmjzj7NV", "arxiv_id": null, "doi": null }
{ "title": "Desert bighorn sheep (Ovis canadensis) recognition from camera traps based on learned features", "authors": [ "Manuel Vargas-Felipe", "Luis Pellegrin", "Aldo A. Guevara-Carrizales", "Adrián Pastor López-Monroy", "Hugo Jair Escalante", "José Ángel González-Fraga" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xLx1OJPDfSs", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=xLx1OJPDfSs", "arxiv_id": null, "doi": null }
{ "title": "Overcoming the distance estimation bottleneck in estimating animal abundance with camera traps", "authors": [ "Timm Haucke", "Hjalmar S. Kühl", "Jacqueline Hoyer", "Volker Steinhage" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "qu4N81swGGW", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=qu4N81swGGW", "arxiv_id": null, "doi": null }
{ "title": "A model for environmental data extraction from multimedia and its evaluation against various chemical weather forecasting datasets", "authors": [ "Anastasia Moumtzidou", "Victor Epitropou", "Stefanos Vrochidis", "Kostas D. Karatzas", "Sascha Voth", "Anastasios Bassoukos", "Jürgen Moßgraber", "Ari Karppinen", "Jaakko Kukkonen", "Ioannis Kompatsiaris" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "OEB8kA0Jon", "year": null, "venue": "Ecol. Informatics 2023", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=OEB8kA0Jon", "arxiv_id": null, "doi": null }
{ "title": "Image patch-based deep learning approach for crop and weed recognition", "authors": [ "A. S. M. Mahmudul Hasan", "Dean Diepeveen", "Hamid Laga", "Michael G. K. Jones", "Ferdous Sohel" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "yStelgOI_Q", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=yStelgOI_Q", "arxiv_id": null, "doi": null }
{ "title": "Multi-output regression with structurally incomplete target labels: A case study of modelling global vegetation cover", "authors": [ "Rita Beigaite", "Jesse Read", "Indre Zliobaite" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "BDrxd438ka", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=BDrxd438ka", "arxiv_id": null, "doi": null }
{ "title": "Redescription mining for analyzing local limiting conditions: A case study on the biogeography of large mammals in China and southern Asia", "authors": [ "Esther Galbrun", "Hui Tang", "Anu Kaakinen", "Indre Zliobaite" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "0fVzMWwFlYYq", "year": null, "venue": "Ecol. Informatics 2013", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=0fVzMWwFlYYq", "arxiv_id": null, "doi": null }
{ "title": "Dealing with spatial autocorrelation when learning predictive clustering trees", "authors": [ "Daniela Stojanova", "Michelangelo Ceci", "Annalisa Appice", "Donato Malerba", "Saso Dzeroski" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "8ofmoP7gyHL", "year": null, "venue": "Ecol. Informatics 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=8ofmoP7gyHL", "arxiv_id": null, "doi": null }
{ "title": "Deriving vegetation indices for phenology analysis using genetic programming", "authors": [ "Jurandy Almeida", "Jefersson Alex dos Santos", "Waner O. Miranda", "Bruna Alberton", "Leonor Patricia C. Morellato", "Ricardo da Silva Torres" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "bJpLkDvo_P", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=bJpLkDvo_P", "arxiv_id": null, "doi": null }
{ "title": "Using phenological cameras to track the green up in a cerrado savanna and its on-the-ground validation", "authors": [ "Bruna Alberton", "Jurandy Almeida", "Raimund Helm", "Ricardo da Silva Torres", "Annette Menzel", "Leonor Patricia C. Morellato" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "v8yUextvHX", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=v8yUextvHX", "arxiv_id": null, "doi": null }
{ "title": "Applying machine learning based on multiscale classifiers to detect remote phenology patterns in Cerrado savanna trees", "authors": [ "Jurandy Almeida", "Jefersson Alex dos Santos", "Bruna Alberton", "Ricardo da Silva Torres", "Leonor Patricia C. Morellato" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rDaLJXgxSkT", "year": null, "venue": "Ecol. Informatics 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=rDaLJXgxSkT", "arxiv_id": null, "doi": null }
{ "title": "Evaluating machine-learning techniques for recruitment forecasting of seven North East Atlantic fish species", "authors": [ "Jose A. Fernandes", "Xabier Irigoien", "José Antonio Lozano", "Iñaki Inza", "Nerea Goikoetxea", "Aritz Pérez Martínez" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "II1Oa13Exb", "year": null, "venue": "Ecol. Informatics 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=II1Oa13Exb", "arxiv_id": null, "doi": null }
{ "title": "Towards better volcanic risk-assessment systems by applying ensemble classification methods to triaxial seismic-volcanic signals", "authors": [ "Mauricio Orozco-Alzate", "John Makario Londoño-Bonilla", "Valentina Nale", "Manuele Bicego" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "w-Cx-xfRzhY", "year": null, "venue": "Ecol. Informatics 2023", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=w-Cx-xfRzhY", "arxiv_id": null, "doi": null }
{ "title": "Automated wildlife image classification: An active learning tool for ecological applications", "authors": [ "Ludwig Bothmann", "Lisa Wimmer", "Omid Charrakh", "Tobias Weber", "Hendrik Edelhoff", "Wibke Peters", "Hien Nguyen", "Caryl Benjamin", "Annette Menzel" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "tuxGEkQ4GaV", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=tuxGEkQ4GaV", "arxiv_id": null, "doi": null }
{ "title": "Field-derived relationships between fish habitat distribution and flow-sediment conditions in fluctuating backwater zone of the Three Gorges Reservoir", "authors": [ "Shengfa Yang", "Guanbing Xu", "Li Wang", "Wei Yang", "Yi Xiao", "Wenjie Li", "Jiang Hu" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LELeOOjuZh8", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=LELeOOjuZh8", "arxiv_id": null, "doi": null }
{ "title": "Editorial for the special issue: Satellite imagery analysis and mapping for urban ecology", "authors": [ "Vicente García-Díaz", "Jerry Chun-Wei Lin", "Juan Antonio Morente-Molinera" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "7S8P0D-7TaP", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=7S8P0D-7TaP", "arxiv_id": null, "doi": null }
{ "title": "Group behavior tracking of Daphnia magna based on motion estimation and appearance models", "authors": [ "Zhitao Wang", "Chunlei Xia", "JangMyung Lee" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "zrfbLlcAKpy", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=zrfbLlcAKpy", "arxiv_id": null, "doi": null }
{ "title": "Multimedia information retrieval and environmental monitoring: Shared perspectives on data fusion", "authors": [ "Alan F. Smeaton", "Edel O'Connor", "Fiona Regan" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_w_h9rufHQ", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=_w_h9rufHQ", "arxiv_id": null, "doi": null }
{ "title": "Soundscape segregation based on visual analysis and discriminating features", "authors": [ "Fábio Felix Dias", "Hélio Pedrini", "Rosane Minghim" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "4ghTuoIYHFq", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=4ghTuoIYHFq", "arxiv_id": null, "doi": null }
{ "title": "Automated distance estimation for wildlife camera trapping", "authors": [ "Peter Johanns", "Timm Haucke", "Volker Steinhage" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ih92IxXHtmc", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=ih92IxXHtmc", "arxiv_id": null, "doi": null }
{ "title": "Identifying wildlife observations on twitter", "authors": [ "Thomas Edwards", "Christopher B. Jones", "Padraig Corcoran" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "jzHuvi19wk", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=jzHuvi19wk", "arxiv_id": null, "doi": null }
{ "title": "Comparing semantically-blind and semantically-aware landscape similarity measures with application to query-by-content and regionalization", "authors": [ "Tomasz F. Stepinski", "Joseph Paul Cohen" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "RrykPyziMU-", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=RrykPyziMU-", "arxiv_id": null, "doi": null }
{ "title": "An automated deep learning based satellite imagery analysis for ecology management", "authors": [ "Haya Mesfer Alshahrani", "Fahd N. Al-Wesabi", "Mesfer Al Duhayyim", "Nadhem Nemri", "Seifedine Nimer Kadry", "Bassam A. Y. Alqaralleh" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "94dck_qEDyP", "year": null, "venue": "Ecol. Informatics 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=94dck_qEDyP", "arxiv_id": null, "doi": null }
{ "title": "Spectrogram-frame linear network and continuous frame sequence for bird sound classification", "authors": [ "Xin Zhang", "Aibin Chen", "Guoxiong Zhou", "Zhiqiang Zhang", "Xibei Huang", "Xiaohu Qiang" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "tKF9HZRTYB", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=tKF9HZRTYB", "arxiv_id": null, "doi": null }
{ "title": "Pine pest detection using remote sensing satellite images combined with a multi-scale attention-UNet model", "authors": [ "Wujian Ye", "Junming Lao", "Yijun Liu", "Chin-Chen Chang", "Ziwen Zhang", "Hui Li", "Huihui Zhou" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "A2GcWm7G42", "year": null, "venue": "Ecol. Informatics 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=A2GcWm7G42", "arxiv_id": null, "doi": null }
{ "title": "Horizontal visibility of an underwater low-resolution video camera modeled by practical parameters near the sea surface", "authors": [ "Takero Yoshida", "Yoichi Mizukami", "Jinxin Zhou", "Daisuke Kitazawa" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "n9lsTAJXYK", "year": null, "venue": "Ecol. Informatics 2015", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=n9lsTAJXYK", "arxiv_id": null, "doi": null }
{ "title": "Topographical effects of climate data and their impacts on the estimation of net primary productivity in complex terrain: A case study in Wuling mountainous area, China", "authors": [ "Qing-ling Sun", "Xian-feng Feng", "Yong Ge", "Bao-Lin Li" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "4kJCTekmBBq", "year": null, "venue": "Ecol. Informatics 2008", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=4kJCTekmBBq", "arxiv_id": null, "doi": null }
{ "title": "A reference business process for ecological niche modelling", "authors": [ "Fabiana Soares Santana", "Marinez Ferreira de Siqueira", "Antonio Mauro Saraiva", "Pedro Luiz Pizzigatti Corrêa" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "9KEHYciq16", "year": null, "venue": "Ecol. Informatics 2010", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=9KEHYciq16", "arxiv_id": null, "doi": null }
{ "title": "Ecological niche modeling and geographical distribution of pollinator and plants: A case study of Peponapis fervens (Smith, 1879) (Eucerini: Apidae) and Cucurbita species (Cucurbitaceae)", "authors": [ "Tereza C. Giannini", "Antonio Mauro Saraiva", "Isabel Alves-dos-Santos" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "gnR-ursHPuZ", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=gnR-ursHPuZ", "arxiv_id": null, "doi": null }
{ "title": "A reference process for automating bee species identification based on wing images and digital image processing", "authors": [ "Fabiana Soares Santana", "Anna Helena Reali Costa", "Flavio Sales Truzzi", "Felipe Leno da Silva", "Sheila L. Santos", "Tiago Mauricio Francoy", "Antonio Mauro Saraiva" ], "abstract": "Highlights • A reference process to implement automated species identification was proposed. • The identification was achieved by applying digital image processing techniques. • Experiments were conducted to evaluate the process and the techniques. • The results show that the process and the techniques are effective and accurate. • They can be extended to other species identification and taxonomic classification. • The process may also be helpful as a guide for beginners in this research field. Abstract Pollinators play a key role in biodiversity conservation, since they provide vital services to both natural ecosystems and agriculture. In particular, bees are excellent pollinators; therefore, their mapping, classification, and preservation help to promote biodiversity conservation. However, these tasks are difficult and time consuming since there is a lack of classification keys, sampling efforts and trained taxonomists. The development of tools for automating and assisting the identification of bee species represents an important contribution to biodiversity conservation. Several studies have shown that features extracted from patterns of bee wings are good discriminatory elements to differentiate among species, and some have devoted efforts to automate this process. However, the automated identification of bee species is a particularly hard problem, because (i) individuals of a given species may vary hugely in morphology, and (ii) closely related species may be extremely similar to one another. This paper proposes a reference process for bee classification based on wing images to provide a complete understanding of the problem from the experts' point of view, and a foundation to software systems development and integration using Internet services. The results can be extended to other species identification and taxonomic classification, as long as similar criteria are applicable. The reference process may also be helpful for beginners in this research field, as they can use the process and the experiments presented here as a guide to this complex activity.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lfKVVIm7xih", "year": null, "venue": "Ecol. Informatics 2018", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=lfKVVIm7xih", "arxiv_id": null, "doi": null }
{ "title": "Real-world plant species identification based on deep convolutional neural networks and visual attention", "authors": [ "Qingguo Xiao", "Guangyao Li", "Li Xie", "Qiaochuan Chen" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "XDAzEoXZzWI", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=XDAzEoXZzWI", "arxiv_id": null, "doi": null }
{ "title": "Habitat suitability for wisents in the Carpathians - a model based on presence only data", "authors": [ "Malgorzata Charytanowicz", "Kajetan Perzanowski", "Maciej Januszczak", "Aleksandra Woloszyn-Galeza", "Piotr Kulczycki" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "YDaCZdBwiDf", "year": null, "venue": "Ecol. Informatics 2016", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=YDaCZdBwiDf", "arxiv_id": null, "doi": null }
{ "title": "Handling high-dimensional data in air pollution forecasting tasks", "authors": [ "Diana Domanska", "Szymon Lukasik" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "3crcYX_PMRD", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=3crcYX_PMRD", "arxiv_id": null, "doi": null }
{ "title": "A research tool for long-term and continuous analysis of fish assemblage in coral-reefs using underwater camera footage", "authors": [ "Bastiaan Johannes Boom", "Jiyin He", "Simone Palazzo", "Phoenix X. Huang", "Cigdem Beyan", "Hsiu-Mei Chou", "Fang-Pang Lin", "Concetto Spampinato", "Robert B. Fisher" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "tDGXQm5LPl", "year": null, "venue": "Ecol. Informatics 2016", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=tDGXQm5LPl", "arxiv_id": null, "doi": null }
{ "title": "First automatic passive acoustic tool for monitoring two species of procellarides (Pterodroma baraui and Puffinus bailloni) on Reunion Island, Indian Ocean", "authors": [ "Olivier Dufour", "Benoît Gineste", "Yves Bas", "Matthieu Le Corre", "Thierry Artières" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "1f9C5zfLzI", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=1f9C5zfLzI", "arxiv_id": null, "doi": null }
{ "title": "An efficient oil content estimation technique using microscopic microalgae images", "authors": [ "Rakesh Chandra Joshi", "Saumya Dhup", "Nutan Kaushik", "Malay Kishore Dutta" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "9-vr-oeq6c", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=9-vr-oeq6c", "arxiv_id": null, "doi": null }
{ "title": "VirLeafNet: Automatic analysis and viral disease diagnosis using deep-learning in Vigna mungo plant", "authors": [ "Rakesh Chandra Joshi", "Manoj Kaushik", "Malay Kishore Dutta", "Ashish Srivastava", "Nandlal Choudhary" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "BXfrbd-Z_I", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=BXfrbd-Z_I", "arxiv_id": null, "doi": null }
{ "title": "Dense convolutional neural networks based multiclass plant disease detection and classification using leaf images", "authors": [ "Vaibhav Tiwari", "Rakesh Chandra Joshi", "Malay Kishore Dutta" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "AI-_zztvx3c", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=AI-_zztvx3c", "arxiv_id": null, "doi": null }
{ "title": "Computer vision technique for freshness estimation from segmented eye of fish image", "authors": [ "Anamika Banwari", "Rakesh Chandra Joshi", "Namita Sengar", "Malay Kishore Dutta" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "q9mnsALzLF", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=q9mnsALzLF", "arxiv_id": null, "doi": null }
{ "title": "Seismic signal analysis for the characterisation of elephant movements in a forest environment", "authors": [ "D. S. Parihar", "Ripul Ghosh", "Aparna Akula", "Satish Kumar", "Harish Kumar Sardana" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Cuvcxi7o2Ke", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=Cuvcxi7o2Ke", "arxiv_id": null, "doi": null }
{ "title": "A research tool for long-term and continuous analysis of fish assemblage in coral-reefs using underwater camera footage", "authors": [ "Bastiaan Johannes Boom", "Jiyin He", "Simone Palazzo", "Phoenix X. Huang", "Cigdem Beyan", "Hsiu-Mei Chou", "Fang-Pang Lin", "Concetto Spampinato", "Robert B. Fisher" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "B2xxkhFTB4", "year": null, "venue": "Ecol. Informatics 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=B2xxkhFTB4", "arxiv_id": null, "doi": null }
{ "title": "Habitat-Net: Segmentation of habitat images using deep learning", "authors": [ "Jesse F. Abrams", "Anand Vashishtha", "Seth T. Wong", "An Nguyen", "Azlan Mohamed", "Sebastian Wieser", "Arjan Kuijper", "Andreas Wilting", "Anirban Mukhopadhyay" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "PeB4g22Kl9E", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=PeB4g22Kl9E", "arxiv_id": null, "doi": null }
{ "title": "Evaluation of water quality based on UAV images and the IMP-MPP algorithm", "authors": [ "Hanting Ying", "Kai Xia", "Xinxi Huang", "Hailin Feng", "Yinhui Yang", "Xiaochen Du", "Leijun Huang" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "azOKuJhNEJ2", "year": null, "venue": "Ecol. Informatics 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=azOKuJhNEJ2", "arxiv_id": null, "doi": null }
{ "title": "An open 3D CFD model for the investigation of flow environments experienced by freshwater fish", "authors": [ "Ali Hassan Khan", "Karla Ruiz Hussmann", "Dennis Powalla", "Stefan Hoerner", "Maarja Kruusmaa", "Jeffrey A. Tuhtan" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "gQQwKWQCpxA", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=gQQwKWQCpxA", "arxiv_id": null, "doi": null }
{ "title": "Artificial lateral line for aquatic habitat modelling: An example for Lefua echigonia", "authors": [ "Ana García-Vega", "Juan Francisco Fuentes-Perez", "Shinji Fukuda", "Maarja Kruusmaa", "Francisco Javier Sanz-Ronda", "Jeffrey A. Tuhtan" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "wr0BM2g_t4", "year": null, "venue": "Ecol. Informatics 2016", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=wr0BM2g_t4", "arxiv_id": null, "doi": null }
{ "title": "Enhancing the dissimilarity-based classification of birdsong recordings", "authors": [ "José Francisco Ruiz-Muñoz", "Germán Castellanos-Domínguez", "Mauricio Orozco-Alzate" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "HxZwIJXvQl9", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=HxZwIJXvQl9", "arxiv_id": null, "doi": null }
{ "title": "Editorial - Special issue on multimedia in ecology", "authors": [ "Concetto Spampinato", "Vasileios Mezaris", "Benoit Huet", "Jacco van Ossenbruggen" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "SgVwLOEWgq", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=SgVwLOEWgq", "arxiv_id": null, "doi": null }
{ "title": "Acquisition of 3-D trajectories with labeling support for multi-species insects under unconstrained flying conditions", "authors": [ "Abdul Nasir", "Muhammad Obaid Ullah", "Muhammad Haroon Yousaf", "Muhammad Asif Aziz" ], "abstract": "Highlights • Acquisition of multimodal data set (about 1900 IR, RGB, and depth videos) having flying instances of honey bee, Apis mellifera and Vespa spp. • Introduction of a new technique for detection and tracking of flying insects using only valid depth map data. • A new trajectory defragmentation scheme based upon flying insects' spatio-temporal correlation, speed, and physical size. • Object localization in color frames for trajectory labeling based upon analysis of insects' flying kinematics and 3-D location coordinates. • Implementation of a novel data validation strategy to estimate error upper bound in trajectory measurements. Abstract In the work presented here, a new technique based upon stereo vision is proposed to acquire three-dimensional time-resolved trajectories with labeling support for multi-species insects under unconstrained flying conditions. A low-cost, off-the-shelf depth camera is used for stereo vision which is equipped with two wide-angle global-shutter IR imagers to compute depth map and a separate narrow-angle color imager to have color and texture information. Two novel strategies are employed to tackle the challenges imposed by the small size of insects, fast movements, and natural daylight, etc.; (i) introduction of a simple but robust technique for detection and tracking of insects using only the valid depth map data and subsequently to acquire 3-D trajectories (ii) object localization scheme for the insects in acquired trajectories in allied color frames based upon the analysis of insects' flying kinematics and their 3-D location coordinates. The object localization of flying insects in allied color frames provides support to label the multi-spp. trajectories with respective types of the species. The proposed technique provides a completely generalized solution to acquire 3-D trajectories of the multi-spp. insects in natural outdoor conditions and in the presented work it is applied to acquire the trajectories of honey bees (Apis mellifera) and invasive hornets (Vespa spp.) near beehives. These hornets are serious honey bee stressors and cause severe damage to the hive's foraging force. The trajectory patterns of honey bees together with Vespa spp. can be analyzed for early detection of Vespa spp. near beehives as well as to estimate stress levels on the honey bees. Acquired 3-D trajectory data are validated for trajectory measurements as well as for the labeling support information. A mean error upper bound of 5 mm is estimated in trajectory measurements whereas 100% inter-spp. and up to 94% intra-spp. labeling support accuracies are recorded.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lFeeTw5xxPU", "year": null, "venue": "Ecol. Informatics 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=lFeeTw5xxPU", "arxiv_id": null, "doi": null }
{ "title": "A research tool for long-term and continuous analysis of fish assemblage in coral-reefs using underwater camera footage", "authors": [ "Bastiaan Johannes Boom", "Jiyin He", "Simone Palazzo", "Phoenix X. Huang", "Cigdem Beyan", "Hsiu-Mei Chou", "Fang-Pang Lin", "Concetto Spampinato", "Robert B. Fisher" ], "abstract": "We present a research tool that supports marine ecologists' research by allowing analysis of long-term and continuous fish monitoring video content. The analysis can be used for instance to discover ecological phenomena such as changes in fish abundance and species composition over time and area. Two characteristics set our system apart from traditional ecological data collecting and processing methods. First, the continuous video recording results in enormous data volumes of monitoring data. Currently around a year of video recordings (containing over the 4 million fish observations) have been processed. Second, different from traditional manual recording and analysing the ecological data, the whole recording, analysing and presentation of results is automated in this system. On one hand, it saves the effort of manually examining every video, which is infeasible. On the other hand, no automatic video analysis method is perfect, so the user interface provides marine ecologists with multiple options to verify the data. Marine ecologists can examine the underlying videos, check results of automatic video analysis at different certainty levels computed by our system, and compare results generated by multiple versions of automatic video analysis software to verify the data in our system. This research tool enables marine ecologists for the first time to analyse long-term and continuous underwater video records.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "0z9HqmoonYw", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=0z9HqmoonYw", "arxiv_id": null, "doi": null }
{ "title": "The Peruvian Amazon forestry dataset: A leaf image classification corpus", "authors": [ "Gerson Vizcarra", "Danitza Bermejo", "Antoni Mauricio", "Ricardo Zarate Gomez", "Erwin Dianderas" ], "abstract": "Highlights • The first Peruvian Amazon Forestry Dataset, containing 59,441 leaf images. • Transfer learning classification using AlexNet, VGG-19, ResNet-101, and DenseNet-201. • AlexNet and VGG-19 outperform the results of ResNet-101 and DenseNet-201. • Training without background removal yields more robust models. • Visual interpretation of models indicates that shape and venation are the most trustworthy features. Abstract Forest census allows getting precise data for logging planning and elaboration of the forest management plan. Species identification blunders carry inadequate forest management plans and high risks inside forest concessions. Hence, an identification protocol prevents the exploitation of non-commercial or endangered timber species. The current Peruvian legislation allows the incorporation of non-technical experts, called “materos”, during the identification. Materos use common names given by the folklore and traditions of their communities instead of formal ones, which generally lead to misclassifications. In the real world, logging companies hire materos instead of botanists due to cost/time limitations. Given such a motivation, we explore an end-to-end software solution to automatize the species identification. This paper introduces the Peruvian Amazon Forestry Dataset, which includes 59,441 leaves samples from ten of the most profitable and endangered timber-tree species. The proposal contemplates a background removal algorithm to feed a pre-trained CNN by the ImageNet dataset. We evaluate the quantitative (accuracy metric) and qualitative (visual interpretation) impacts of each stage by ablation experiments. The results show a 96.64% training accuracy and 96.52% testing accuracy on the VGG-19 model. Furthermore, the visual interpretation of the model evidences that leaf venations have the highest correlation in the plant recognition task.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "U8f2flEHvQS", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=U8f2flEHvQS", "arxiv_id": null, "doi": null }
{ "title": "Detection of foraging behavior from accelerometer data using U-Net type convolutional networks", "authors": [ "Manh Cuong Ngô", "Raghavendra Selvan", "Outi Tervo", "Mads Peter Heide-Jørgensen", "Susanne Ditlevsen" ], "abstract": "Highlights • Narwhals may not always make big jerks when foraging like harbor porpoises or sperm whales, hence their hunting pattern might differ from them. • Reliable buzz detectors are derived from high-frequency-sampling, back-mounted accelerometer using state-of-the-art machine learning algorithms. • Deep learning is shown to be a superior algorithm to learn patterns from accelerometer data. Abstract Narwhal (Monodon monoceros) is one of the most elusive marine mammals, due to its isolated habitat in the Arctic region. Tagging is a technology that has the potential to explore the activities of this species, where behavioral information can be collected from instrumented individuals. This includes accelerometer data, diving and acoustic data as well as GPS positioning. An essential element in understanding the ecological role of toothed whales is to characterize their feeding behavior and estimate the amount of food consumption. Buzzes are sounds emitted by toothed whales that are related directly to the foraging behaviors. It is therefore of interest to measure or estimate the rate of buzzing to estimate prey intake. The main goal of this paper is to find a way to detect prey capture attempts directly from accelerometer data, and thus be able to estimate food consumption without the need for the more demanding acoustic data. We develop three automated buzz detection methods based on accelerometer and depth data solely. We use a dataset from five narwhals instrumented in East Greenland in 2018 to train, validate and test a logistic regression model and the state-of-the-art machine learning algorithms random forest and deep learning, using the buzzes detected from acoustic data as the ground truth. The deep learning algorithm performed best among the tested methods. We conclude that reliable buzz detectors can be derived from high-frequency-sampling, back-mounted accelerometer tags, thus providing an alternative tool for studies of foraging ecology of marine mammals in their natural environments. We also compare buzz detection with certain movement patterns, such as sudden changes in acceleration (jerks), found in other marine mammal species for estimating prey capture. We find that narwhals do not seem to make big jerks when foraging and conclude that their hunting patterns in that respect might differ from other marine mammals.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "6pX4RIrakW", "year": null, "venue": "Ecol. Informatics 2010", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=6pX4RIrakW", "arxiv_id": null, "doi": null }
{ "title": "Estimating vegetation height and canopy cover from remotely sensed data with machine learning", "authors": [ "Daniela Stojanova", "Pance Panov", "Valentin Gjorgjioski", "Andrej Kobler", "Saso Dzeroski" ], "abstract": "High quality information on forest resources is important to forest ecosystem management. Traditional ground measurements are labor and resource intensive and at the same time expensive and time consuming. For most of the Slovenian forests, there is extensive ground-based information on forest properties of selected sample locations. However there is no continuous information of objectively measured vegetation height and canopy cover at appropriate resolution. Currently, Light Detection And Ranging (LiDAR) technology provides detailed measurements of different forest properties because of its immediate generation of 3D data, its accuracy and acquisition flexibility. However, existing LiDAR sensors have limited spatial coverage and relatively high cost of acquisition. Satellite data, on the other hand, are low-cost and offer broader spatial coverage of generalized forest structure, but are not expected to provide accurate information about vegetation height. Integration of LiDAR and satellite data promises to improve the measurement, mapping, and monitoring of forest properties. The primary objective of this study is to model the vegetation height and canopy cover in Slovenia by integrating LiDAR data, Landsat satellite data, and the use of machine learning techniques. This kind of integration uses the accuracy and precision of LiDAR data and the wide coverage of satellite data in order to generate cost-effective realistic estimates of the vegetation height and canopy cover, and consequently generate continuous forest vegetation map products to be used in forest management and monitoring. Several machine learning techniques are applied to this task: they are evaluated and their performance is compared by using statistical significance tests. Ensemble methods perform significantly better than single- and multi-target regression trees and are further used for the generation of forest maps. Such maps are used for land-cover and land-use classification, as well as for monitoring and managing ongoing forest processes (like spontaneous afforestation, forest reduction and forest fires) that affect the stability of forest ecosystems.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "-sIv8cEX6Ec", "year": null, "venue": "Ecol. Informatics 2009", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=-sIv8cEX6Ec", "arxiv_id": null, "doi": null }
{ "title": "Variational learning for Generalized Associative Functional Networks in modeling dynamic process of plant growth", "authors": [ "Han-Bing Qu", "Bao-Gang Hu" ], "abstract": "This paper presents a new statistical technique, Bayesian Generalized Associative Functional Networks (GAFN), to model the dynamical plant growth process of greenhouse crops. GAFNs are able to incorporate the domain knowledge and data to model complex ecosystems. By use of the functional networks and Bayesian framework, the prior knowledge can be naturally embedded into the model, and the functional relationship between inputs and outputs can be learned during the training process. Our main interest is focused on the Generalized Associative Functional Networks (GAFNs), which are appropriate to model multiple variable processes. Three main advantages are obtained through the applications of Bayesian GAFN methods to modeling the dynamic process of plant growth. Firstly, this approach provides a powerful tool for revealing some useful relationships between the greenhouse environmental factors and the plant growth parameters. Secondly, Bayesian GAFN can model Multiple-Input Multiple-Output (MIMO) systems from the given data, and presents a good generalization capability from the final single model for successfully fitting all 12 data sets over 5-year field experiments. Thirdly, the Bayesian GAFN method can also serve as an optimization tool to estimate the parameter of interest in the agro-ecosystem. In this work, two algorithms are proposed for the statistical inference of parameters in GAFNs. Both of them are based on variational inference, also called variational Bayes (VB) techniques, which may provide probabilistic interpretations for the built models. VB-based learning methods are able to yield estimations of the full posterior probability of model parameters. Synthetic and real-world examples are implemented to confirm the validity of the proposed methods.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "MUGv3UrCF-6", "year": null, "venue": "Ecol. Informatics 2021", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=MUGv3UrCF-6", "arxiv_id": null, "doi": null }
{ "title": "Large-scale zero-shot learning in the wild: Classifying zoological illustrations", "authors": [ "Lise Stork", "Andreas Weber", "H. Jaap van den Herik", "Aske Plaat", "Fons J. Verbeek", "Katherine Wolstencroft" ], "abstract": "Highlights • A zero-shot prototypical learning approach was proposed to deal with the limited availability of training data. • Methods to include knowledge from a variable number of multimodal sources in single prototypes were compared. • Effects of training the proposed model with hierarchical prototype loss were measured. • The ZICE (Zoological Illustrations and Class Embeddings) dataset, created from multi-modal background knowledge, was introduced and used to test the proposed model. • The performance of the proposed model was analysed qualitatively on real-world data. Abstract In this paper we analyse the classification of zoological illustrations. Historically, zoological illustrations were the modus operandi for the documentation of new species, and now serve as crucial sources for long-term ecological and biodiversity research. By employing computational methods for classification, the data can be made amenable to research. Automated species identification is challenging due to the long-tailed nature of the data, and the millions of possible classes in the species taxonomy. Success commonly depends on large training sets with many examples per class, but images from only a subset of classes are digitally available, and many images are unlabelled, since labelling requires domain expertise. We explore zero-shot learning to address the problem, where features are learned from classes with medium to large samples, which are then transferred to recognise classes with few or no training samples. We specifically explore how distributed, multi-modal background knowledge from data providers, such as the Global Biodiversity Information Facility (GBIF), iNaturalist, and the Biodiversity Heritage Library (BHL), can be used to share knowledge between classes for zero-shot learning. We train a prototypical network for zero-shot classification, and introduce fused prototypes (FP) and hierarchical prototype loss (HPL) to optimise the model. Finally, we analyse the performance of the model for use in real-world applications. The experimental results are encouraging, indicating potential for use of such models in an expert support system, but also express the difficulty of our task, showing a necessity for research into computer vision methods that are able to learn from small samples.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "YDoWzbEiuNE", "year": null, "venue": "Ecol. Informatics 2020", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=YDoWzbEiuNE", "arxiv_id": null, "doi": null }
{ "title": "Cross-site learning in deep learning RGB tree crown detection", "authors": [ "Ben Weinstein", "Sergio Marconi", "Stephanie A. Bohlman", "Alina Zare", "Ethan P. White" ], "abstract": "Highlights • Geographic variation is a central challenge in airborne tree crown detection. • We tested prediction among forests using an RGB deep learning model and LiDAR derived pretraining. • We found local site models can be pretrained from different geographic areas with minimal performance loss. • A universal model of all sites performed better than individual models trained for each local site. Abstract Tree crown detection is a fundamental task in remote sensing for forestry and ecosystem ecology. While many individual tree segmentation algorithms have been proposed, the development and testing of these algorithms is typically site specific, with few methods evaluated against data from multiple forest types simultaneously. This makes it difficult to determine the generalization of proposed approaches, and limits tree detection at broad scales. Using data from the National Ecological Observatory Network, we extend a recently developed deep learning approach to include data from a range of forest types to determine whether information from one forest can be used for tree detection in other forests, and explore the potential for building a universal tree detection algorithm. We find that the deep learning approach works well for overstory tree detection across forest conditions. Performance was best in open oak woodlands and worst in alpine forests. When models were fit to one forest type and used to predict another, performance generally decreased, with better performance when forests were more similar in structure. However, when models were pretrained on data from other sites and then fine-tuned using a relatively small amount of hand-labeled data from the evaluation site, they performed similarly to local site models. 
Most importantly, a model fit to data from all sites performed as well or better than individual models trained for each local site.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ijW1cbApBEc", "year": null, "venue": "Ecol. Informatics 2017", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=ijW1cbApBEc", "arxiv_id": null, "doi": null }
{ "title": "Kálmán filters for continuous-time movement models", "authors": [ "Christen H. Fleming", "Daniel Sheldon", "Eliezer Gurarie", "William F. Fagan", "Scott LaPoint", "Justin M. Calabrese" ], "abstract": "Highlights • A broad class of continuous-time animal movement models is solved. • These solutions are paired with fast Kálmán filter techniques. • Large animal tracking datasets are now amenable to a broad array of rigorous analyses. Abstract We introduce fast implementations for the likelihood functions, telemetry error filters, probabilistic trajectory and velocity reconstructions, and movement-path simulations for a large class of continuous-time movement models. This class of models includes all of the basic continuous-time models that have been applied to animal movement. A diverse array of movement behaviors can be modeled from within this framework, including range residence, persistence of motion, migration, range shifting, and territorial patrol. The fast algorithms presented here, based upon the Kálmán filter, are critical for applying movement analyses to the evergrowing number of modern datasets that feature thousands or more observed animal locations, and they are key to the continuous-time movement modeling (ctmm) R package.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NquYfmwGE4", "year": null, "venue": "ICASSP 2023", "pdf_link": "https://ieeexplore.ieee.org/iel7/10094559/10094560/10097145.pdf", "forum_link": "https://openreview.net/forum?id=NquYfmwGE4", "arxiv_id": null, "doi": null }
{ "title": "E-Branchformer-Based E2E SLU Toward Stop on-Device Challenge", "authors": [ "Yosuke Kashiwagi", "Siddhant Arora", "Hayato Futami", "Jessica Huynh", "Shih-Lun Wu", "Yifan Peng", "Brian Yan", "Emiru Tsunoo", "Shinji Watanabe" ], "abstract": "In this paper, we report our team’s study on track 2 of the Spoken Language Understanding Grand Challenge, which is a component of the ICASSP Signal Processing Grand Challenge 2023. The task is intended for on-device processing and involves estimating semantic parse labels from speech using a model with 15 million parameters. We use an E2E E-Branchformer-based spoken language understanding model, which is more parameter-controllable than cascade models, and reduced the parameter size through sequential distillation and tensor decomposition techniques. On the STOP dataset, we achieved an exact match accuracy of 70.9% under the tight constraint of 15 million parameters.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "SLsbJXWcU7j", "year": null, "venue": "UIST 2019", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=SLsbJXWcU7j", "arxiv_id": null, "doi": null }
{ "title": "Optimizing Portrait Lighting at Capture-Time Using a 360 Camera as a Light Probe", "authors": [ "Jane L. E", "Ohad Fried", "Maneesh Agrawala" ], "abstract": "We present a capture-time tool designed to help casual photographers orient their subject to achieve a user-specified target facial appearance. The inputs to our tool are an HDR environment map of the scene captured using a 360 camera, and a target facial appearance, selected from a gallery of common studio lighting styles. Our tool computes the optimal orientation for the subject to achieve the target lighting using a computationally efficient precomputed radiance transfer-based approach. It then tells the photographer how far to rotate about the subject. Optionally, our tool can suggest how to orient a secondary external light source (e.g. a phone screen) about the subject's face to further improve the match to the target lighting. We demonstrate the effectiveness of our approach in a variety of indoor and outdoor scenes using many different subjects to achieve a variety of looks. A user evaluation suggests that our tool reduces the mental effort required by photographers to produce well-lit portraits.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "UROBiQEOLP", "year": null, "venue": "Submitted to ICLR 2023", "pdf_link": "/pdf/b4aeb4ae7db4f697c84669a0909e120b0a92b337.pdf", "forum_link": "https://openreview.net/forum?id=UROBiQEOLP", "arxiv_id": null, "doi": null }
{ "title": "E-Forcing: Improving Autoregressive Models by Treating it as an Energy-Based One", "authors": [ "Yezhen Wang", "Tong Che", "Bo Li", "Kaitao Song", "Hengzhi Pei", "Yoshua Bengio", "Dongsheng Li" ], "abstract": "Autoregressive generative models are commonly used to solve tasks involving sequential data. They have, however, been plagued by a slew of inherent flaws due to the intrinsic characteristics of chain-style conditional modeling (e.g., exposure bias or lack of long-range coherence), severely limiting their ability to model distributions properly. In this paper, we propose a unique method termed E-Forcing for training autoregressive generative models that takes advantage of a well-designed energy-based learning objective. By leveraging the extra degree of freedom of the softmax operation, we are allowed to make the autoregressive model itself an energy-based model for measuring the likelihood of input without introducing any extra parameters. Furthermore, we show that with the help of E-Forcing, we can alleviate the above flaws for autoregressive models. Extensive empirical results, covering numerous benchmarks demonstrate the effectiveness of the proposed approach.", "keywords": [ "autoregressive models", "exposure bias", "language modeling", "neural machine translation" ], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "t2BCddB1SZA", "year": null, "venue": "ARTEL@EC-TEL 2017", "pdf_link": "http://ceur-ws.org/Vol-1997/paper5.pdf", "forum_link": "https://openreview.net/forum?id=t2BCddB1SZA", "arxiv_id": null, "doi": null }
{ "title": "AFEL: Towards Measuring Online Activities Contributions to Self-directed Learning", "authors": [ "Mathieu d'Aquin", "Alessandro Adamou", "Stefan Dietze", "Besnik Fetahu", "Ujwal Gadiraju", "Ilire Hasani-Mavriqi", "Peter Holtz", "Joachim Kimmerle", "Dominik Kowald", "Elisabeth Lex", "Susana López-Sola", "Ricardo Alonso Maturana", "Vedran Sabol", "Pinelopi Troullinou", "Eduardo E. Veas" ], "abstract": null, "keywords": [], "raw_extracted_content": "AFEL: Towards Measuring Online Activities Contributions to Self-Directed Learning\nMathieu d'Aquin1, Alessandro Adamou2, Stefan Dietze3, Besnik Fetahu3,\nUjwal Gadiraju3, Ilire Hasani-Mavriqi4, Peter Holtz5, Joachim Kimmerle5,\nDominik Kowald4, Elisabeth Lex4, Susana López Sola6, Ricardo A.\nMaturana6, Vedran Sabol4, Pinelopi Troullinou2, and Eduardo Veas4\n1Insight Centre for Data Analytics, National University of Ireland, Galway\[email protected]\n2Knowledge Media Institute, The Open University, UK\n{alessandro.adamou,pinelopi.troullinou}[email protected]\n3L3S Research Center, Leibniz University Hanover, Germany\n{dietze,fetahu,gadiraju}[email protected]\n4Know-Center Graz, University of Technology Graz, Austria\n{ihasani,dkowald,vsabol,eduveas}[email protected], [email protected]\n5The Leibniz-Institut für Wissensmedien, Tübingen, Germany\n{p.holtz,j.kimmerle}[email protected],\n6GNOSS, Spain\n{susanalopez,riam}[email protected]\nAbstract. More and more learning activities take place online in a self-directed manner. Therefore, just as the idea of self-tracking activities for fitness purposes has gained momentum in the past few years, tools and methods for awareness and self-reflection on one's own online learning behavior appear as an emerging need for both formal and informal learners. Addressing this need is one of the key objectives of the AFEL (Analytics for Everyday Learning) project. 
In this paper, we discuss the different aspects of what needs to be put in place in order to enable awareness and self-reflection in online learning. We start by describing a scenario that guides the work done. We then investigate the theoretical, technical and support aspects that are required to enable this scenario, as well as the current state of the research in each aspect within the AFEL project. We conclude with a discussion of the ongoing plans from the project to develop learner-facing tools that enable awareness and self-reflection for online, self-directed learners. We also elucidate the need to establish further research programs on facets of self-tracking for learning that are necessarily going to emerge in the near future, especially regarding privacy and ethics.\n1 Introduction\nMuch of the research on measuring learners' online activities, and to some extent much of the research work in Technology-Enhanced Learning, focuses on the restricted scenario of students formally engaged in learning (e.g. enrolled in a university program) and where online activities happen through a provided eLearning system. However, whether or not they are formally engaged in learning, more and more learners are now using a large variety of online platforms and resources which are not necessarily connected with their learning environment or with each other. Such use of online resources tends to be self-directed in the sense that learners make their own choices as to which resource to employ and which activity to realize amongst the wide choice offered to them (MOOCs, tutorials, open educational resources, etc.). With such practices becoming more common, there is therefore value in researching the way in which to support such choices.\nIn several other areas than learning where self-directed activities are prominent (e.g. 
fitness), there has been a trend in recent years following the technological development of tools for self-tracking [15]. Those tools quantify a specific user's activities with respect to a certain goal (e.g. being physically fit) to enable self-awareness and reflection, with the purpose of turning them into behavioral changes. While the actual benefits of self-tracking in those areas are still debatable, our understanding of how such approaches could benefit learning behaviors as they become more self-directed remains very limited.\nAFEL7 (Analytics for Everyday Learning) is a European Horizon 2020 project whose aim is to address both the theoretical and technological challenges arising from applying learning analytics [6] in the context of online, social learning. The pillars of the project are the technologies to capture large-scale, heterogeneous data about learners' online activities across multiple platforms (including social media) and the operationalization of theoretical cognitive models of learning to measure and assess those online learning activities. One of the key planned outcomes of the project is therefore a set of tools enabling self-tracking on online learning by a wide range of potential learners to enable them to reflect on and ultimately improve the way they focus their learning.\nIn this paper, we discuss the research and development challenges associated with achieving those goals and describe initial results obtained by the project in three key areas: theory (through cognitive models of learning), technology (through data capture, processing and enrichment systems) and support (through the features provided to users for visualizing, exploring and drawing conclusions from their learning activities). 
We start by describing a motivating scenario of an online, self-directed learner to clarify our objective.\n2 Motivating Scenario\nBelow is a specific scenario considering a learner not formally engaged in a specific study program, but who is, in a self-directed and explicit way, engaged in online learning. The objective is to describe in a simple way how the envisioned AFEL tools could be used for self-awareness and reflection, but also to explore what the expected benefits of enabling this for users/learners are:\n7http://afel-project.eu\nJane is 37 and works as an administrative assistant in a local medium-sized company. As hobbies, she enjoys sewing and cycling in the local forests. She is also interested in business management, and is considering either developing in her current job to a more senior level or making a career change. Jane spends a lot of time online at home and at her job. She has friends on Facebook with whom she shares and discusses local places to go cycling, and others with whom she discusses sewing techniques and possible projects, often through sharing YouTube videos. Jane also follows MOOCs and forums related to business management, on different topics. She often uses online resources such as Wikipedia and online magazines. At school, she was not very interested in maths, which is needed if she wants to progress in her job. She is therefore registered on Didactalia8, connecting to resources and communities on maths, especially statistics.\nJane has decided to take her learning seriously: She has registered to use the AFEL dashboard through the Didactalia interface. She has also installed the AFEL browser extension to include her browsing history, as well as the Facebook app. 
She has not included in her dashboard her emails, as they are mostly related to her current job, or Twitter, since she rarely uses it.\nJane looks at the dashboard more or less once a day, as she is prompted by a notification from the AFEL smartphone application or from the Facebook app, to see how she has been doing the previous day in her online social learning. It might for example say \"It looks like you progressed well with sewing yesterday! See how you are doing on other topics...\" Jane, as she looks at the dashboard, realizes that she has been focusing a lot on her hobbies and procrastinating on the topics she enjoys less, especially statistics. Looking specifically at statistics, she realizes that she almost only works on it on Friday evenings, because she feels guilty of not having done much during the week. She also sees that she is not putting as much effort into her learning of statistics as other learners, and not making as much progress. She therefore makes a conscious decision to put more focus on it. She adds new goals on the dashboard of the form \"Work on statistics during my lunch break every weekday\" or \"Have achieved a 10% progress compared to now by the same time next week\". The dashboard will remind her of how she is doing against those goals as she goes about her usual online social learning activities. 
She also gets recommendations of things to do on Didactalia and Facebook based on the indicators shown on the dashboard and her stated goals.\nWhile this is obviously a fictitious scenario, which is very much simplified, it shows the way tools for awareness and self-reflection can support online self-directed learning, and it provides a basis to investigate the challenges to address in order to enable the development of tools of the kind that are described, as discussed in the rest of this paper.\n8http://didactalia.net\n3 Theoretical Challenge: Measuring Self-Directed Learning\nOne result of the advent of the Internet as a mass phenomenon was a slight change in our understanding of constructs such as \"knowledge\" and \"learning\". In such contexts as described above, it is by no means a trivial task to identify and to assess learning. Indeed, in order to understand how learning emerges from a collection of disparate online activities, we need to get back to fundamental, cognitive models of learning, as we cannot make the assumption that the usual ways to test the results of learning are available.\nTraditionally, the acquisition metaphor was frequently used to describe learning processes [19]: From this perspective, learning consists in the accumulation of \"basic units of knowledge\" within the \"container\" (p. 5) of the human mind. Already before the digital age, there was also an alternative, more socially oriented understanding of learning, which is embodied in the participation metaphor: Here, knowing is equated with taking up and getting used to the customs and habits of a community of practice [10], into which a learner is socialized. Over the last two decades, however, the knowledge construction metaphor has emerged [17] as a third important metaphor of learning. 
Building upon a constructivist understanding of learning, the focus lies here on the constant creation and re-creation of knowledge within knowledge construction communities. Knowledge is no longer thought of as a rather static entity in the form of a \"justified true belief\"; instead, knowledge is constantly re-negotiated and evolves in a dynamic way [16]. In this tradition, the co-evolution model of learning and knowledge construction [2] treats learning on the side of individuals and knowledge construction on the side of communities as two structurally coupled processes (see Figure 1). Irritations of a learner's cognitive system in the form of new or unexpected information that has to be integrated into existing cognitive structures can lead to learning processes in the form of changes in the learner's cognitive schemas, behavioral scripts, and other cognitive structures. In turn, such learning processes may trigger communication acts by learners within knowledge construction communities and stimulate further communication processes that lead to the construction of new knowledge. In this model, shared artifacts, for example in the form of digital texts such as contributions to wikis or social media messages, mediate between the two coupled systems of individual minds and communicating communities [8].\nWhen talking about learning in digital environments, we can consequently define learning as the activity of learners encountering at least partly new information in the form of digital artifacts. In principle, every single interaction between a learner and an artifact can entail learning processes. Learning can either happen occasionally and accidentally or in the course of planned and at least partly structured learning activities [12]. Planned and structured learning activities can either be self-organized or follow to a certain degree a pre-defined curriculum of learning activities [13]. 
In both cases, the related activities will constitute a certain learning trajectory [21] which comprises \"the learning goal, the learning activities, and the thinking and learning in which the students might engage\" (p. 133).\nFig. 1. The dynamic processes of learning and knowledge construction [8] (p. 128).\nSuccessful learning will result in increases in the learner's abilities and competencies; for example, successful learners will be able to solve increasingly difficult tasks or to process increasingly complex learning materials [23].\nBased on these theoretical considerations, the challenge in building tools for self-tracking of online, self-directed learning is to recognize to what extent encountering and processing a certain artifact (a resource) induced learning. In the co-evolution model, we assume that what we can measure is the friction (or irritation) which triggers internalization processes, i.e. what the artifact brings to the cognitive system that leads to its evolution. At the moment, we distinguish three forms of \"frictions\", leading to three categories of indicators of learning:\n- New concepts and topics: The simplest way in which we can think about how an artifact could lead to learning is through its introduction of new knowledge unknown to the learner. This is consistent with the traditional acquisition metaphor. In our scenario, this kind of friction happens for example when Jane watches a video about a sewing technique previously unknown to her.\n- Increased complexity: While not necessarily introducing new concepts, an artifact might relate to known concepts in a more complex way, where complexity might relate to the granularity, specificity or interrelatedness with which those concepts are treated in the artifact. 
In a social system, the assumption of the co-evolution model is that the interaction between individuals might enable such increases in understanding of the concepts being considered through iteratively refining them. In our scenario, this kind of friction happens for example when Jane follows a statistics course which is more advanced than the ones she had encountered before.\n- New views and opinions: Similarly, known concepts might be introduced \"in a different light\", through varying points of view and opinions enabling a refinement of the understanding of the concepts treated. This is consistent with the co-evolution model in the sense that it can be seen either as a widening of the social system in which the learner is involved, or as the integration into different social systems. In our scenario, this kind of friction happens for example when Jane reads a critical review of a business management methodology she has been studying.\nWhat appears evident from confronting the co-evolution model and the types of indicators described above with the scenario of the previous section is that such indicators and models should be considered within distinct \"domains\" of learning. Indeed, Jane in the scenario would relate to different social systems, for example, for her interest in sewing, cycling, business management and statistics. The concepts that are relevant, the levels of complexity to consider and the views that can be expressed are also different from each other in those domains.\nWe call those domains of learning learning scopes. In the remainder of this paper, we will therefore consider a learning scope to be an area or theme of interest to a learner (sewing, business, etc.) 
to which are attached (consciously or not) specific learning goals, as well as a specific set of concepts, topics and activities.\n4 Technical Challenge: Making Sense of Masses of Heterogeneous Activity Data\nConsidering the conclusions from the previous section, the key challenge at the intersection of theory and technology for self-tracking of online, self-directed learning is to devise ways to compute the kind of indicators that are useful to identify and approximate some quantification of the three types of frictions within (implicit/emerging) learning scopes. Before that, however, we have to face more basic technical challenges to set in place the mechanisms to collect, integrate, enrich and process the data necessary to compute those indicators.\n4.1 Data capture, integration and enrichment\nThe AFEL project aims at identifying the features that characterize learning activities within online contexts across multiple platforms. With that, we contribute to the field of Social Learning Analytics, which is based on the idea that new ideas and skills are not only individual achievements, but also the results of interaction and collaboration [20]. With the rise of the Social Web, online social learning has been facilitated due to its participatory and collaborative nature. This has posed several challenges for Learning Analytics: The (online) environments where learning activities and related features are to be detected are largely heterogeneous and tend to generate enormous amounts of data concerning user activities that may or may not relate to learning, and even when they do, the relation is not guaranteed to be explicit. A key issue is that, even with an emerging theoretical model, there is no established model for representing the data for learning that can span across all the types of activities that might occur in online environments. 
With respect to data capture, it may be hard to track all relevant learning traces, and some indicators such as readership data may be misleading due to switches between the online and offline world [4].\nTherefore, AFEL adopted an approach to identify reliable data sources and to structure their capture process, which is based on an effort to classify data sources, rather than the data themselves. Such an exercise in classification is important as it is the result of an effort to understand what dimensions of the activities through the Web should be captured, before setting out to detect specific learning activity factors. The resulting taxonomy revolves around a core of seven types of entities that a candidate data source has a potential for describing; these are further specified into sub-categories that capture a specific set of dimensions, some of which are common to users and communities (e.g. learning statements), or to users (e.g. indicators of expertise) and learning resources (e.g. indicators of popularity). Those categories are at the core of the proposed AFEL Core Data Model9, an RDF vocabulary largely based on schema.org and which is, amongst other things, used to aggregate the datasets that AFEL makes publicly available10.\nThe following challenge for AFEL is to integrate data from a large number of sources into a shared platform, using the core data model to integrate and make them processable. The approach taken is to create a \"data space\", which keeps most of the data sources intact at the time of on-boarding, integrating them at query time through a smart API, following the principles set out in [1]. Using this platform, the project has already created a number of tools, called extractors, which can extract data about user activities from several different platforms, creating a consistent and processable data space for each AFEL user who can choose to enable some of those tools. 
At the time of writing those extractors include browser extensions for extracting browsing history, applications for Facebook and Twitter, as well as analytics extractors for the Didactalia portal from AFEL partner GNOSS.11 We also integrate resource metadata from several open sources related to learning.\nBeyond data storage and integration, the key to enable extracting the features necessary to compute the kind of indicators mentioned in the previous section is to connect those datasets at a semantic level, i.e. to enrich the raw data into a more complete \"Knowledge Graph\". In other words, connecting the different entities with each other and extracting from unstructured or semi-structured sources entities of interest that can connect the data from a wide range of places. In AFEL, we use entity linking approaches [5] as well as natural language processing [11] and specific feature extraction approaches to turn a user data space into such a semantically enriched knowledge graph. Examples of such feature extraction approaches are computing the complexity of a resource [3], determining the semantic stability of a resource [22], or assessing influencing factors in consensus building processes in online collaboration scenarios [7].\n9http://data.afel-project.eu/catalogue/dataset/afel-core-data-model/\n10http://data.afel-project.eu/catalogue/learning-analytics-dataset-v1/\n11http://gnoss.com\nAdditionally, AFEL provides a methodology to determine the characteristic features, which allow learning activities to be detected and described, and consequently the attributes that instantiate them, in different data sources identified within the project. This methodology facilitates an initial specification of the features relevant to learning activities by presenting an instantiation of them on some key data sources. 
Furthermore, with our methodology, we also outline a top-down perspective of feature engineering, indicating that features identified in AFEL are applicable in different use cases and in general online contexts, and that they can be extracted from our data basis.\n4.2 An example: Learning scopes and topic-based indicator in browsing history\nIn this section, we present a short pilot experiment in which we implemented an initial version of showing indicators based on topics included in the learning activities of a user (consistently with what is described in Section 3). This relies on some of the technical aspects described above, including data capture and enrichment.\nThe data: We use approximately 6 weeks of browsing history data for a user, obtained through the AFEL browser extension12, which pushes this information as the user is browsing the web. Each activity is described as an instance of the concept BrowsingActivity in the AFEL Core Data Model, with as properties the URL of the page accessed and the time at which it was accessed. In our illustrative example, this corresponds to 42 707 activities, making reference to 12 738 URLs of webpages.\nTopic Extraction: The first step to extracting the learning scopes from the activity data is to extract the topics of each resource (webpage). For this, we first use DBpedia Spotlight13 to extract the entities referred to in the text in the form of Linked Data entities in the DBpedia dataset14. DBpedia is a Linked Data version of Wikipedia, where each entity is described according to various properties, including the categories in which the entity has been classified in Wikipedia. We therefore query DBpedia to obtain up to 20 categories from the ones directly connected to the entities, or their broader categories in DBpedia's category taxonomy.
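The per-activity enrichment step described above can be sketched as follows. This is a minimal illustration only: the function and variable names are ours, not from the AFEL code base, and the DBpedia Spotlight call and the DBpedia category lookup are stubbed out as a plain dictionary of toy data.

```python
# Sketch: describe one browsing activity by its extracted entities plus
# up to 20 DBpedia categories per entity, as in the pilot described above.
# Names are illustrative; real data would come from DBpedia Spotlight and
# a DBpedia category query, which are replaced by a dictionary here.

MAX_CATEGORIES = 20  # the pilot caps category expansion at 20 per entity

def describe_activity(url, entities, categories_of):
    """Build the topic description of one BrowsingActivity."""
    topics = list(entities)
    for entity in entities:
        # add the directly connected (or broader) categories of each entity
        topics.extend(categories_of.get(entity, [])[:MAX_CATEGORIES])
    return {"url": url, "topics": topics}

# Toy stand-in for the DBpedia category lookup.
categories_of = {
    "dbp:LMMS": ["dbc:Free_audio_editors", "dbc:Software_drum_machines",
                 "dbc:Audio_editors", "dbc:Free_audio_software"],
}

activity = describe_activity(
    "https://www.youtube.com/watch?v=aZKra7rNspU",
    ["dbp:LMMS", "dbp:Virtual_Studio_Technology"],
    categories_of,
)
print(activity["topics"])
```

The resulting topic lists play the role of documents in the clustering step that follows.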
We therefore query DBpedia to obtain up to 20 categories, from the ones directly connected to the entities or from their broader categories in DBpedia's category taxonomy.

[12] https://github.com/afel-project/browsing-history-webext
[13] http://spotlight.dbpedia.org
[14] http://dbpedia.org

For example, assume the learner views a YouTube video titled LMMS Tutorial | Getting VST Instruments [15]. When mining the extracted text (stripped of HTML markup), DBpedia Spotlight detects that the description of this video mentions entities such as <http://dbpedia.org/resource/LMMS> (dbp:LMMS for short; a digital audio software suite) or dbp:Virtual_Studio_Technology. Querying DBpedia reveals subject categories for dbp:LMMS, such as <http://dbpedia.org/resource/Category:Free_audio_editors> (dbc:Free_audio_editors for short) or dbc:Software_drum_machines. The detected category dbc:Free_audio_editors is in turn declared in DBpedia to have broader categories such as dbc:Audio_editors or dbc:Free_audio_software. All of these elements are included in the description of the activity that corresponds to watching the above video, to be used in the next step of clustering activities.

On our browsing history data, running the resources through DBpedia Spotlight extracted 20,876 distinct entities, each enriched with 20 categories on average. To give an idea of the scale, the final description of the 6 weeks of activities of this one learner takes approximately 1.1GB of space, and took between 1 and 15 seconds to compute per activity (depending on the size of the original text, using a modern laptop with a good internet connection).

Clustering activities: In the next step, we use the descriptions of the activities produced through the process above in order to detect candidate learning scopes, i.e. groups of topics and activities that seem to relate to the same broader theme.
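The clustering process described next (TF-IDF vectorization followed by k-Means) can be approximated without external libraries. The sketch below weights each activity's topics by TF-IDF and groups activities by cosine similarity; the deterministic farthest-point seeding is our simplification for reproducibility, not necessarily the project's setup:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: one list of topics (entities/categories) per activity."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: (c / len(d)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs, k, iters=10):
    """k-Means-style grouping; returns one cluster index per activity."""
    vecs = tfidf_vectors(docs)
    # Deterministic farthest-point seeding instead of random restarts.
    centroids = [dict(vecs[0])]
    while len(centroids) < k:
        far = min(vecs, key=lambda v: max(cosine(v, c) for c in centroids))
        centroids.append(dict(far))
    assign = [0] * len(vecs)
    for _ in range(iters):
        assign = [max(range(k), key=lambda c: cosine(v, centroids[c]))
                  for v in vecs]
        for c in range(k):
            members = [v for v, a in zip(vecs, assign) if a == c]
            if members:
                merged = Counter()
                for v in members:
                    merged.update(v)
                centroids[c] = {t: w / len(members)
                                for t, w in merged.items()}
    return assign
```

With k fixed in advance, each resulting group of activities is a candidate learning scope.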
To do this, we consider the set of entities and categories obtained before as if it were the text of a document, and apply a common document clustering process to it (i.e. TF-IDF vectorization and k-Means clustering). We obtain from this a set of k clusters (with k being given) that group activities based on the overlap in the topics (entities and categories) they cover. We label each cluster with the entity or category that best characterizes it in terms of F-measure (i.e. that covers the maximum number of activities inside the cluster and the minimum number of activities outside it), representing the target of the topic scope.

The clustering technique we applied (k-Means) requires fixing the number of clusters in advance. We experimented with numbers between 6 and 100, to see which could best represent the width and breadth of interests of this particular learner. Here, we used 50, as it appeared to lead to good results (as future work, we will integrate ways to automatically discover the ideal number of clusters for a learner). Figure 2 shows the clusters obtained and their size. The gray line describes all activities in the topic scope, i.e. all activities that have been included in the cluster. As can be seen, the clusters are unbalanced, ranging from ones with thousands of activities (Google, Web Programming) to ones representing only a few hundred activities.

[15] https://www.youtube.com/watch?v=aZKra7rNspU

Fig. 2. Topic scopes obtained from the learner's browsing activities. The gray line and left axis indicate the size of the cluster in total number of activities. The black line and right axis only include activities detected as being learning activities.

Topic-based indicator: In the initial scenario we are considering here, we focus on a topic-based indicator which consists in checking whether an activity introduces new topics (entities or categories) into the learning scope (cluster) in which it is included. We therefore "play back" the sequence of browsing activities from the learner's history, checking at each step how many new topics are introduced that were not present in the previous activities of the learner in this scope.

Looking again at Figure 2, it is interesting to consider the difference between the gray line (number of activities in the topic scope) and the black line, representing the number of activities that have introduced new topics into the scope and can therefore be considered learning activities. For example, since the user relies on many Google services for basic tasks (such as Gmail for email), it is not surprising that the Google scope, while being the largest in activities, does not actually include many detected learning activities. What is obvious, however, is that the balance is very different for other clusters that can be clearly identified as including large amounts of learning activities.

Indeed, we can see the value of the process here by comparing the learning trajectories of the learner according to the definition of contributions to the different learning scopes considered. For example, the scope on Digital Technology, representing the largest number of learning activities, can be seen in Figure 3 (top) as a broad topic on which the learner is constantly (almost every day) learning new things. In contrast, the learning scope on Web Programming, although closely related, is one where we can assume the learner already has some familiarity, and only makes a significant increment in their learning punctually, as can be seen in the jump around 08 September in Figure 3 (bottom).

Fig. 3.
Trajectory in terms of the contribution (in number of topics) to the learning scope in Digital Technology (top) and Web Programming (bottom).

5 Support Challenge: From Metrics to Actions

The current state of implementation of the aforementioned aspects takes the form of a prototype learner dashboard, available from the Didactalia platform. The dashboard illustrated in Figure 4 includes initial placeholder indicators for the kinds of frictions identified in Section 3, and is implemented on the technologies described above. It is, however, a preliminary result, showing the ability to technically integrate the different AFEL components into a first product. It will be further evolved in order to truly address the scenario of Section 2, including user feedback and more accurate indicators.

A key aspect in achieving the goal of our everyday learning scenario is that the user should have control over what is being monitored. Indeed, the learner should be able to decide what area of the data should be displayed, according to which indicator and which dimension of the data (e.g. specific topics, times, resources or platforms). Our approach here is to rely on a framework for flexible dashboards based on visualization recommendation, implemented through the VizRec tool [14]. At the root of VizRec lies a visualization engine that extracts the basic features of the data, guiding the user in choosing appropriate ways to visualise them.

Fig. 4. Screenshot of the prototype learner dashboard.

Hereby, a learning expert may design a dashboard with an initial view of a set of learning indicators, but VizRec also empowers the user in choosing what area of the data to show. This includes the ability to add new charts to the dashboard, selected based on the characteristics of the data (e.g. showing a map for geographical data).
The tool can learn the user's preferences and therefore show a personalized dashboard that is always consistent with the visualization choices made by the user. Figure 5 shows an example of VizRec displaying multidimensional learning data. A scatterplot correlates the number of previous attempts with studied credits, showing that the number of previous attempts is smaller when the number of studied credits is high. The grouped bar chart displays the number of previous attempts for female (right) and male (left) students, with genders being further subdivided by the highest level of education (encoded by color). It is obvious that education level has a very similar effect for both females and males. Notice that in the VisPicker (shown on the right) only some visualizations are enabled, which is a direct consequence of the data dimensions chosen by the user: gender, highest education, number of previous attempts (shown on the left). The user is free to choose only the enabled, meaningful visualizations, with the optional possibility of the system recommending the optimal representation based on previous user behavior. As the title of this section suggests, it is important to move from metrics to actions, and to consider what the learner should do, having seen her status.

One way to move the learner to action is to recommend learning resources that appear to be relevant given the current state of the learner [9]. Here, the monitoring of learning activities has a direct benefit in supplying recommendations to the learner. The current implementation of such a recommender system is based on two well-known approaches: (i) content-based filtering, which recommends similar resources based on the content of a given resource, and (ii) collaborative filtering, which recommends the resources of similar users based on the learning activities of a given user [18].

Fig. 5.
Example use of the VizRec tool for personalized dashboards.

However, an important aspect which is still missing is how such measures of similarity can be based on metrics that are relevant to learning, rather than on basic content or profile similarity. Indeed, the objective here would be to recommend learning resources (or even learning resource paths) that have already been helpful to other users with a similar learning goal and a similar learning state (in terms of the concepts, complexities and views already encountered). In other words, the recommendations can be based on a meaningful view of what the suggested resources might contribute to learning.

6 Discussion: Towards Widely Available, Ethical Tools for Self-Tracking of Online Learning

In the previous sections, we discussed how to theoretically and technically implement tools for self-awareness targeted at self-directed online learning. Those tools are currently at early stages of development. Beyond those aspects, however, other challenges will be faced by the AFEL consortium. One of them is facilitating the adoption of these tools by a wide variety of users. Indeed, the actual usefulness and value of such personal analytics dashboards and learning assistant technologies have not been formally assessed, and the participation of the learner community in their development is necessary in order to ensure that they reach their potential. The approach taken by AFEL here is to start with the community of learners on the Didactalia platform, enabling the dashboard for them and, through that, supporting them in integrating data from other platforms.
With a large number of users, we will be able to collect enough data to understand how such monitoring can truly support users in reaching awareness of their learning behavior, and how this can help them take decisions with respect to their own learning.

Another aspect not discussed in this paper is the ethical implications of realizing such tools and reaching wide adoption. As mentioned above, each learner is assigned their own data space on the AFEL platform, which is only accessible by them. However, as mentioned in the scenario of Section 2, support to the learner might be better achieved by enabling them to compare their own behavior with that of others, and we aim to make some aggregated data available to others for research purposes. Proper anonymisation techniques need to be applied in order to ensure that external parties cannot infer information about specific learners from having access to those tools and data.

Beyond privacy, however, it is also important to ensure that the effect of the tool does not turn out to be negative. Existing work has shown a number of ethical harms that might come out of enabling self-governance in a number of domains, despite the obvious positive effects [24]. Those include introducing biases towards common learning behaviors, or pushing learners towards excessive behaviors for the purpose of improving the values of indicators that are necessarily only approximate representations of learning.
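One building block for the anonymisation mentioned above would be to replace learner identifiers in shared aggregates with keyed pseudonyms; a minimal sketch (key management is out of scope here, and, as the comment notes, pseudonymization alone does not rule out re-identification through linkage of activity patterns):

```python
import hashlib
import hmac

def pseudonymize(user_id: str, secret_key: bytes) -> str:
    """Stable keyed token for a learner: the same id always maps to the
    same token, but the id cannot be recovered without the key. This is
    pseudonymization only; on its own it does not prevent re-identifying
    learners by linking their activity patterns."""
    return hmac.new(secret_key, user_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]
```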
Activities within and connected to the AFEL project have the specific objective of tackling those aspects, by establishing contrasting scenarios of the possible effects of self-tracking tools as a basis for engaging with the users of those tools about ways to avoid the negative effects while keeping the positive ones.

Acknowledgement

This work has received funding from the European Union's Horizon 2020 research and innovation programme as part of the AFEL (Analytics for Everyday Learning) project under grant agreement No 687916.

References

1. A. Adamou and M. d'Aquin. On requirements for federated data integration as a compilation process. In Proceedings of the PROFILES 2015 workshop, 2015.
2. U. Cress and J. Kimmerle. A systemic and cognitive view on collaborative knowledge building with wikis. International Journal of Computer-Supported Collaborative Learning, 2(3), 2008.
3. S. A. Crossley, J. Greenfield, and D. S. McNamara. Assessing text readability using cognitively based indices. TESOL Quarterly, 42(3):475-493, 2008.
4. M. De Laat and F. R. Prinsen. Social learning analytics: Navigating the changing settings of higher education. Research & Practice in Assessment, 9, 2014.
5. S. Dietze, S. Sanchez-Alonso, H. Ebner, H. Qing Yu, D. Giordano, I. Marenzi, and B. Pereira Nunes. Interlinking educational resources and the web of data: A survey of challenges and approaches. Program, 47(1):60-91, 2013.
6. R. Ferguson. Learning analytics: drivers, developments and challenges. International Journal of Technology Enhanced Learning, 4(5-6):304-317, 2012.
7. I. Hasani-Mavriqi, F. Geigl, S. C. Pujari, E. Lex, and D. Helic. The influence of social status and network structure on consensus building in collaboration networks. Social Network Analysis and Mining, 6(1):80, 2016.
8. J. Kimmerle, J. Moskaliuk, A. Oeberst, and U. Cress. Learning and collective knowledge construction with social media: A process-oriented perspective. Educational Psychologist, (50), 2015.
9. S. Kopeinik, E. Lex, P. Seitlinger, D. Albert, and T. Ley. Supporting collaborative learning with tag recommendations: a real-world study in an inquiry-based classroom project. In LAK, pages 409-418, 2017.
10. J. Lave and E. Wenger. Situated learning: Legitimate peripheral participation. Cambridge University Press, 1991.
11. C. D. Manning, M. Surdeanu, J. Bauer, J. R. Finkel, S. Bethard, and D. McClosky. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pages 55-60, 2014.
12. V. Marsick and K. Watkins. Lessons from informal and incidental learning. Management learning: Integrating perspectives in theory and practice, 1997.
13. C. McLoughlin and M. Lee. Personalised and self regulated learning in the web 2.0 era: International exemplars of innovative pedagogy using social software. Australasian Journal of Educational Technology, 1(26), 2010.
14. B. Mutlu, E. Veas, and C. Trattner. VizRec: Recommending personalized visualizations. ACM Trans. Interact. Intell. Syst., 6(4):31:1-31:39, Nov. 2016.
15. G. Neff and D. Nafus. Self-Tracking. MIT Press, 2016.
16. A. Oeberst, J. Kimmerle, and U. Cress. What is knowledge? Who creates it? Who possesses it? The need for novel answers to old questions. Mass collaboration and education, 2016.
17. S. Paavola, L. Lipponen, and K. Hakkarainen. Models of innovative knowledge communities and three metaphors of learning. Review of Educational Research, 4(74), 2004.
18. P. Seitlinger, D. Kowald, S. Kopeinik, I. Hasani-Mavriqi, T. Ley, and E. Lex. Attention please! A hybrid resource recommender mimicking attention-interpretation dynamics. In Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion, pages 339-345, New York, NY, USA, 2015. ACM.
19. A. Sfard. On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 2(27), 1998.
20. S. B. Shum and R. Ferguson. Social learning analytics. Journal of Educational Technology & Society, 15(3):3, 2012.
21. M. Simon. Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 2(26), 1995.
22. D. Stanisavljevic, I. Hasani-Mavriqi, E. Lex, M. Strohmaier, and D. Helic. Semantic stability in Wikipedia. In International Workshop on Complex Networks and their Applications, pages 379-390. Springer, 2016.
23. H. Stubbé and N. Theunissen. Self-directed adult learning in a ubiquitous learning environment: A meta-review. In Proceedings of the First Workshop on Technology Support for Self-Organized Learners, 2008.
24. J. R. Whitson. Foucault's Fitbit: Governance and gamification. The Gameful World: Approaches, Issues, Applications, 2014.
AFEL: Towards Measuring Online Activities Contributions to Self-Directed Learning

Published at ARTEL@EC-TEL 2017. PDF: http://ceur-ws.org/Vol-1997/paper5.pdf (OpenReview forum: https://openreview.net/forum?id=z_xscqz4NX5)

Mathieu d'Aquin (1), Alessandro Adamou (2), Stefan Dietze (3), Besnik Fetahu (3), Ujwal Gadiraju (3), Ilire Hasani-Mavriqi (4), Peter Holtz (5), Joachim Kimmerle (5), Dominik Kowald (4), Elisabeth Lex (4), Susana López Sola (6), Ricardo A. Maturana (6), Vedran Sabol (4), Pinelopi Troullinou (2), and Eduardo Veas (4)

(1) Insight Centre for Data Analytics, National University of Ireland, Galway ([email protected])
(2) Knowledge Media Institute, The Open University, UK ({alessandro.adamou,pinelopi.troullinou}@open.ac.uk)
(3) L3S Research Center, Leibniz University Hanover, Germany ({dietze,fetahu,gadiraju}@l3s.de)
(4) Know-Center Graz, University of Technology Graz, Austria ({ihasani,dkowald,vsabol,eduveas}@know-center.at, [email protected])
(5) Leibniz-Institut für Wissensmedien, Tübingen, Germany ({p.holtz,j.kimmerle}@iwm-tuebingen.de)
(6) GNOSS, Spain ({susanalopez,riam}@gnoss.com)

Abstract. More and more learning activities take place online in a self-directed manner. Therefore, just as the idea of self-tracking activities for fitness purposes has gained momentum in the past few years, tools and methods for awareness and self-reflection on one's own online learning behavior appear as an emerging need for both formal and informal learners. Addressing this need is one of the key objectives of the AFEL (Analytics for Everyday Learning) project.
In this paper, we discuss the different aspects of what needs to be put in place in order to enable awareness and self-reflection in online learning. We start by describing a scenario that guides the work done. We then investigate the theoretical, technical and support aspects required to enable this scenario, as well as the current state of the research on each aspect within the AFEL project. We conclude with a discussion of the project's ongoing plans to develop learner-facing tools that enable awareness and self-reflection for online, self-directed learners. We also elucidate the need to establish further research programs on facets of self-tracking for learning that are necessarily going to emerge in the near future, especially regarding privacy and ethics.

1 Introduction

Much of the research on measuring learners' online activities, and to some extent much of the research work in Technology-Enhanced Learning, focuses on the restricted scenario of students formally engaged in learning (e.g. enrolled in
\ftness), there has been a trend in recent years following the techno-\nlogical development of tools for self-tracking [15]. Those tools quantify a speci\fc\nuser's activities with respect to a certain goal (e.g. being physically \ft) to enable\nself-awareness and re\rection, with the purpose of turning them into behavioral\nchanges. While the actual bene\fts of self-tracking in those areas are still debat-\nable, our understanding of how such approaches could bene\ft learning behaviors\nas they become more self-directed remains very limited.\nAFEL7(Analytics for Everyday Learning) is an European Horizon 2020\nproject which aim is to address both the theoretical and technological chal-\nlenges arising from applying learning analytics [6] in the context of online, social\nlearning. The pillars of the project are the technologies to capture large scale,\nheterogeneous data about learner's online activities across multiple platforms\n(including social media) and the operationalization of theoretical cognitive mod-\nels of learning to measure and assess those online learning activities. One of the\nkey planned outcomes of the project is therefore a set of tools enabling self-\ntracking on online learning by a wide range of potential learners to enable them\nto re\rect and ultimately improve the way they focus their learning.\nIn this paper, we discuss the research and development challenges associ-\nated with achieving those goals and describe initial results obtained by the\nproject in three key areas: theory (through cognitive models of learning), tech-\nnology (through data capture, processing and enrichment systems) and support\n(through the features provided to users for visualizing, exploring and drawing\nconclusions from their learning activities). 
We start by describing a motivating\nscenario of an online, self-directed learner to clarify our objective.\n2 Motivating Scenario\nBelow is a speci\fc scenario considering a learner not formally engaged in a\nspeci\fc study program, but who is, in a self-directed and explicit way, engaged\nin online learning. The objective is to describe in a simple way how the envisioned\nAFEL tools could be used for self-awareness and re\rection, but also to explore\nwhat the expected bene\fts of enabling this for users/learners are:\n7http://afel-project.eu\nAFEL: Measuring Self-Directed Online Learning 3\nJane is 37 and works as an administrative assistant in a local medium-\nsized company. As hobbies, she enjoys sewing and cycling in the local\nforests. She is also interested in business management, and is consider-\ning either developing in her current job to a more senior level or making\na career change. Jane spends a lot of time online at home and at her\njob. She has friends on Facebook with whom she shares and discusses\nlocal places to go cycling, and others with whom she discusses sewing\ntechniques and possible projects, often through sharing YouTube videos.\nJane also follows MOOCs and forums related to business management,\non di\u000berent topics. She often uses online resources such as Wikipedia\nand online magazines. At school, she was not very interested in maths,\nwhich is needed if she wants to progress in her job. She is therefore regis-\ntered on Didactalia8, connecting to resources and communities on maths,\nespecially statistics.\nJane has decided to take her learning seriously: She has registered to\nuse the AFEL dashboard through the Didactalia interface. She has also\ninstalled the AFEL browser extension to include her browsing history,\nas well as the Facebook app. 
She has not included her emails in her dashboard, as they are mostly related to her current job, nor Twitter, since she rarely uses it.

Jane looks at the dashboard more or less once a day, as she is prompted by a notification from the AFEL smartphone application or from the Facebook app, to see how she has been doing in her online social learning the previous day. It might, for example, say "It looks like you progressed well with sewing yesterday! See how you are doing on other topics..." Jane, as she looks at the dashboard, realizes that she has been focusing a lot on her hobbies and procrastinating on the topics she enjoys less, especially statistics. Looking specifically at statistics, she realizes that she almost only works on it on Friday evenings, because she feels guilty about not having done much during the week. She also sees that she is not putting as much effort into her learning of statistics as other learners, and not making as much progress. She therefore makes a conscious decision to put more focus on it. She adds new goals to the dashboard, of the form "Work on statistics during my lunch break every weekday" or "Have achieved a 10% progress compared to now by the same time next week". The dashboard will remind her of how she is doing against those goals as she goes about her usual online social learning activities. She also gets
She also gets\nrecommendations of things to do on Didactalia and Facebook based on\nthe indicators shown on the dashboard and her stated goals.\nWhile this is obviously a \fctitious scenario, which is very much simpli\fed, it\nshows the way tools for awareness and self-re\rection can support online self-\ndirected learning, and it provides a basis to investigate the challenges to address\nin order to enable the development of tools of the kind that are described, as\ndiscussed in the rest of this paper.\n8http://didactalia.net\n4 d'Aquin et al.\n3 Theoretical Challenge: Measuring Self-Directed\nLearning\nOne result of the advent of the Internet as a mass phenomenon was a slight\nchange in our understanding of constructs such as \\knowledge\" and \\learning\".\nIn such contexts as described above, it is by no means a trivial task to identify\nand to assess learning. Indeed, in order to understand how learning emerges from\na collection of disparate online activities, we need to get back to fundamental,\ncognitive models of learning, as we cannot make the assumption that usual ways\nto test the results of learning are available.\nTraditionally, the acquisition metaphor was frequently used to describe learn-\ning processes [19]: From this perspective, learning consists in the accumulation of\n\\basic units of knowledge\" within the \\container\" (p. 5) of the human mind. Al-\nready before the digital age, there was also an alternative, more socially oriented\nunderstanding of learning, which is endowed in the participation metaphor: Here,\nknowing is equaled to taking up and getting used to the customs and habits of\na community of practice [10], into which a learner is socialized. Over the last\ntwo decades however, the knowledge construction metaphor has emerged [17]\nas a third important metaphor of learning. 
Building upon a constructivist un-\nderstanding of learning, the focus lies here on the constant creation and re-\ncreation of knowledge within knowledge construction communities. Knowledge\nis no longer thought of as a rather static entity in form of a \\justi\fed true be-\nlief\"; instead, knowledge is constantly re-negotiated and evolves in a dynamic\nway [16]. In this tradition, the co-evolution model of learning and knowledge\nconstruction [2] treats learning on the side of individuals and knowledge con-\nstruction on the side of communities as two structurally coupled processes (see\nFigure 1). Irritations of a learner's cognitive system in form of new or unex-\npected information that has to be integrated into existing cognitive structures\ncan lead to learning processes in the form of changes in the learner's cogni-\ntive schemas, behavioral scripts, and other cognitive structures. In turn, such\nlearning processes may trigger communication acts by learners within knowl-\nedge construction communities and stimulate further communication processes\nthat lead to the construction of new knowledge. In this model, shared artifacts,\nfor example in form of digital texts such as contributions to wikis or social me-\ndia messages, mediate between the two coupled systems of individual minds and\ncommunicating communities [8].\nWhen talking about learning in digital environments, we can consequently\nde\fne learning as the activity of learners encountering at least partly new infor-\nmation in form of digital artifacts. In principle, every single interaction between\na learner and an artifact can entail learning processes. Learning can either hap-\npen occasionally and accidentally or in the course of planned and at least partly\nstructured learning activities [12]. Planned and structured learning activities can\neither be self-organized or follow to a certain degree a pre-de\fned curriculum of\nlearning activities [13]. 
In both cases, the related activities will constitute a cer-\ntain learning trajectory [21] which comprises of \\the learning goal, the learning\nactivities, and the thinking and learning in which the students might engage\"\nAFEL: Measuring Self-Directed Online Learning 5\nFig. 1. The dynamic processes of learning and knowledge construction [8] (p. 128).\n(p. 133). Successful learning will result in increases in the learner's abilities and\ncompetencies; for example, successful learners will be able to solve increasingly\ndi\u000ecult tasks or to process increasingly complex learning materials [23].\nBased on these theoretical considerations, the challenge in building tools\nfor self-tracking of online, self-directed learning is to recognize to what extent\nencountering and processing a certain artifact (a resource) induced learning. In\nthe co-evolution model, we assume that what we can measure is the friction (or\nirritation) which triggers internalization processes, i.e. what does the artifact\nbring to the cognitive system that leads to its evolution. At the moment, we\ndistinguish three forms of \\frictions\", leading to three categories of indicators of\nlearning:\n{New concepts and topics: The simplest way in which we can think about\nhow an artifact could lead to learning is through its introduction of new\nknowledge unknown to the learner. This is consistent with the traditional\nacquisition metaphor. In our scenario, this kind of friction happens for exam-\nple when Jane watches a video about a sewing technique previously unknown\nto her.\n{Increased complexity: While not necessarily introducing new concepts, an\nartifact might relate to known concepts in a more complex way, where com-\nplexity might relate to the granularity, speci\fcity or interrelatedness with\nwhich those concepts are treated in the artifact. 
In a social system, the assumption of the co-evolution model is that the interaction between individuals might enable such increases in the understanding of the concepts being considered through iteratively refining them. In our scenario, this kind of friction happens for example when Jane follows a statistics course which is more advanced than the ones she had encountered before.

6 d'Aquin et al.

- New views and opinions: Similarly, known concepts might be introduced "in a different light", through varying points of view and opinions enabling a refinement of the understanding of the concepts treated. This is consistent with the co-evolution model in the sense that it can be seen either as a widening of the social system in which the learner is involved, or as the integration into different social systems. In our scenario, this kind of friction happens for example when Jane reads a critical review of a business management methodology she has been studying.

What appears evident from confronting the co-evolution model and the types of indicators described above with the scenario of the previous section is that such indicators and models should be considered within distinct "domains" of learning. Indeed, Jane in the scenario would relate to different social systems, for example, for her interests in sewing, cycling, business management and statistics. The concepts that are relevant, the levels of complexity to consider and the views that can be expressed are also different from each other in those domains.

We call those domains of learning "learning scopes". In the remainder of this paper, we will therefore consider a learning scope to be an area or theme of interest to a learner (sewing, business, etc.)
to which are attached (consciously or not) specific learning goals, as well as a specific set of concepts, topics and activities.

4 Technical Challenge: Making Sense of Masses of Heterogeneous Activity Data

Considering the conclusions from the previous section, the key challenge at the intersection of theory and technology for self-tracking of online, self-directed learning is to devise ways to compute the kind of indicators that are useful to identify and approximate some quantification of the three types of frictions within (implicit/emerging) learning scopes. Before that, however, we have to face more basic technical challenges to set in place the mechanisms to collect, integrate, enrich and process the data necessary to compute those indicators.

4.1 Data capture, integration and enrichment

The AFEL project aims at identifying the features that characterize learning activities within online contexts across multiple platforms. With that, we contribute to the field of Social Learning Analytics, which is based on the idea that new ideas and skills are not only individual achievements, but also the results of interaction and collaboration [20]. With the rise of the Social Web, online social learning has been facilitated due to the participatory and collaborative nature of the Social Web. This has posed several challenges for Learning Analytics: the (online) environments where learning activities and related features are to be detected are largely heterogeneous and tend to generate enormous amounts of data concerning user activities that may or may not relate to learning, and even when they do, the relation is not guaranteed to be explicit. A key issue is that, even with an emerging theoretical model, there is no established model for representing the data for learning that can span across all the types of activities that might occur in online environments.
With respect to data capture, it may be hard to track all relevant learning traces, and some indicators such as readership data may be misleading due to switches between the online and offline world [4].

Therefore, AFEL adopted an approach to identify reliable data sources and to structure their capture process, which is based on an effort to classify data sources, rather than the data themselves. Such an exercise in classification is important as it is the result of an effort to understand what dimensions of the activities through the Web should be captured, before setting out to detect specific learning activity factors. The resulting taxonomy revolves around a core of seven types of entities that a candidate data source has a potential for describing; these are further specified into sub-categories that capture a specific set of dimensions, some of which are common to users and communities (e.g. learning statements), or to users (e.g. indicators of expertise) and learning resources (e.g. indicators of popularity). Those categories are at the core of the proposed AFEL Core Data Model (http://data.afel-project.eu/catalogue/dataset/afel-core-data-model/), an RDF vocabulary largely based on schema.org, which is, amongst other things, used to aggregate the datasets that AFEL makes publicly available (http://data.afel-project.eu/catalogue/learning-analytics-dataset-v1/).

The following challenge for AFEL is to integrate data from a large number of sources into a shared platform, using the core data model to integrate and make them processable. The approach taken is to create a "data space", which keeps most of the data sources intact at the time of on-boarding, integrating them at query time through a smart API, following the principles set out in [1]. Using this platform, the project has already created a number of tools, called extractors, which can extract data about user activities from several different platforms, creating a consistent and processable data space for each AFEL user who can choose to enable some of those tools.
At the time of writing, those extractors include browser extensions for extracting browsing history, applications for Facebook and Twitter, as well as analytics extractors for the Didactalia portal from AFEL partner GNOSS (http://gnoss.com). We also integrate resource metadata from several open sources related to learning.

Beyond data storage and integration, the key to enabling the extraction of the features necessary to compute the kind of indicators mentioned in the previous section is to connect those datasets at a semantic level, i.e. to enrich the raw data into a more complete "Knowledge Graph". In other words, this means connecting the different entities with each other and extracting, from unstructured or semi-structured sources, entities of interest that can connect the data from a wide range of places. In AFEL, we use entity linking approaches [5] as well as natural language processing [11] and specific feature extraction approaches to turn a user data space into such a semantically enriched knowledge graph. Examples of such feature extraction approaches are computing the complexity of a resource [3], determining the semantic stability of a resource [22], or assessing influencing factors in consensus building processes in online collaboration scenarios [7].

Additionally, AFEL provides a methodology to determine the characteristic features which allow learning activities to be detected and described, and consequently the attributes that instantiate them, in different data sources identified within the project. This methodology facilitates an initial specification of the features relevant to learning activities by presenting an instantiation of them on some of the key data sources.
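To illustrate the kind of feature extraction mentioned above, the following Python sketch computes a naive complexity proxy for a resource's text based on average sentence and word length. It is only a stand-in for illustration, not the cognitively based readability indices of [3]; the weighting is an arbitrary assumption.

```python
import re

def readability_proxy(text):
    """Naive complexity score: longer sentences and longer words
    suggest a more complex resource. Illustrative stand-in only,
    not a validated readability index."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w) for w in words) / len(words)
    # Weighted combination; the weight 2.0 is an arbitrary illustration value.
    return avg_sentence_len + 2.0 * avg_word_len

simple = readability_proxy("The cat sat. It was warm.")
complex_ = readability_proxy(
    "Notwithstanding considerable methodological heterogeneity, "
    "longitudinal investigations corroborate the hypothesis."
)
print(simple < complex_)  # True
```

A real pipeline would replace this proxy with established indices and feed the scores into the knowledge graph as resource features.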
Furthermore, with our methodology, we also outline a top-down perspective on feature engineering, indicating that the features identified in AFEL are applicable in different use cases and general online contexts, and that they can be extracted from our data basis.

4.2 An example: Learning scopes and topic-based indicator in browsing history

In this section, we present a short pilot experiment in which we implemented an initial version of indicators based on the topics included in the learning activities of a user (consistent with what is described in Section 3). This relies on some of the technical aspects described above, including data capture and enrichment.

The data: We use approximately 6 weeks of browsing history data for a user, obtained through the AFEL browser extension (https://github.com/afel-project/browsing-history-webext), which pushes this information as the user is browsing the web. Each activity is described as an instance of the concept BrowsingActivity in the AFEL Core Data Model, with the URL of the page accessed and the time at which it was accessed as properties. In our illustrative example, this corresponds to 42 707 activities, making reference to 12 738 URLs of webpages.

Topic extraction: The first step in extracting the learning scopes from the activity data is to extract the topics of each resource (webpage). For this, we first use DBpedia Spotlight (http://spotlight.dbpedia.org) to extract the entities referred to in the text, in the form of Linked Data entities in the DBpedia dataset (http://dbpedia.org). DBpedia is a Linked Data version of Wikipedia, where each entity is described according to various properties, including the categories in which the entity has been classified in Wikipedia.
We therefore query DBpedia to obtain up to 20 categories from the ones directly connected to the entities, or their broader categories in DBpedia's category taxonomy.

For example, assume the learner views a YouTube video titled "LMMS Tutorial | Getting VST Instruments" (https://www.youtube.com/watch?v=aZKra7rNspU). When mining the extracted text (stripped of HTML markup), DBpedia Spotlight detects that the description of this video mentions entities such as <http://dbpedia.org/resource/LMMS> (dbp:LMMS for short, a digital audio software suite) or dbp:Virtual_Studio_Technology. Querying DBpedia reveals subject categories for dbp:LMMS, such as <http://dbpedia.org/resource/Category:Free_audio_editors> (dbc:Free_audio_editors for short) or dbc:Software_drum_machines. The detected category dbc:Free_audio_editors is in turn declared in DBpedia to have broader categories such as dbc:Audio_editors or dbc:Free_audio_software. All of these elements are included in the description of the activity that corresponds to watching the above video, to be used in the next step of clustering activities.

On our browsing history data, running the resources through DBpedia Spotlight extracted 20 876 distinct entities, with 20 categories added to each on average. To give an idea of the scale, the final description of the 6 weeks of activities of this one learner takes approximately 1.1 GB of space, and took between 1 and 15 seconds to compute for each activity (depending on the size of the original text, using a modern laptop with a good internet connection).

Clustering activities: In the next step, we use the description of the activities as produced through the process described above in order to detect candidate learning scopes, i.e. groups of topics and activities that seem to relate to the same broader theme.
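As a concrete miniature of the process applied to this activity data, the following self-contained Python sketch groups topic-annotated activities into candidate scopes and counts, per activity, how many topics it newly introduces into its scope. It is only an illustration under simplifying assumptions: the hand-written topic sets stand in for DBpedia Spotlight entities and their categories, and plain set overlap replaces the TF-IDF and k-means clustering used in the actual experiment.

```python
# Toy, chronologically ordered activities: (url, set of topics).
# In AFEL, the topics would be DBpedia entities and categories.
activities = [
    ("https://example.org/lmms-tutorial",
     {"LMMS", "Audio_editors", "Free_audio_software"}),
    ("https://example.org/vst-guide",
     {"Virtual_Studio_Technology", "Audio_editors"}),
    ("https://example.org/knitting-intro",
     {"Knitting", "Textile_arts"}),
    ("https://example.org/drum-machines",
     {"Software_drum_machines", "Audio_editors"}),
]

def assign_scope(scopes, topics, threshold=1):
    """Return the index of the first scope sharing at least `threshold`
    topics with `topics`, or None if no scope matches."""
    for i, scope_topics in enumerate(scopes):
        if len(scope_topics & topics) >= threshold:
            return i
    return None

def play_back(activities):
    """Replay activities in order; an activity counts as a learning
    activity if it adds at least one topic new to its scope."""
    scopes = []            # one growing set of topics per detected scope
    new_topic_counts = []  # per activity: number of newly introduced topics
    for url, topics in activities:
        i = assign_scope(scopes, topics)
        if i is None:            # no overlap with any scope: open a new one
            scopes.append(set())
            i = len(scopes) - 1
        new_topics = topics - scopes[i]
        new_topic_counts.append(len(new_topics))
        scopes[i] |= topics
    return scopes, new_topic_counts

scopes, counts = play_back(activities)
print(len(scopes))  # 2  (an audio-related scope and a knitting-related one)
print(counts)       # [3, 1, 2, 1]
```

In this toy run, every activity introduces at least one new topic and so would count as a learning activity; in the real data, repeated visits to the same services (e.g. the Google scope below) introduce no new topics and are filtered out by exactly this kind of check.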
To do this, we consider the set of entities and categories obtained before similarly to the text of documents, and apply a common document clustering process to them (i.e. TF-IDF vectorization and k-means clustering). We obtain from this a set of k clusters (with k given in advance) that group activities based on the overlap they have in the topics (entities and categories) they cover. We label each cluster with the entity or category that best characterizes it in terms of F-measure (i.e. that covers the maximum number of activities in the cluster and the minimum number of activities outside the cluster), representing the target of the topic scope.

The clustering technique we applied (k-means) requires fixing the number of clusters in advance. We experimented with numbers between 6 and 100, to see which could best represent the width and breadth of interests of this particular learner. Here, we used 50 as it appeared to lead to good results (as future work, we will integrate ways to automatically discover the ideal number of clusters for a learner). Figure 2 shows the clusters obtained and their size. The gray line describes all activities in the topic scope, i.e. all activities that have been included in the cluster. As can be seen, the clusters are unbalanced between the ones with a large number of activities (Google, Web Programming), with thousands of activities, and the ones representing only a few hundred activities.

Fig. 2. Topic scopes obtained from the learner's browsing activities. The gray line and left axis indicate the size of the cluster in total number of activities. The black line and right axis only include activities detected as being learning activities.

Topic-based indicator: In the initial scenario we are considering here, we focus on a topic-based indicator which consists of checking whether an activity introduces new topics (entities or categories) into the learning scope (cluster) in which it is included. We therefore "play back" the sequence of browsing activities from the learner's history, checking at each step how many new topics are being introduced that were not present in the previous activities of the learner in this scope.

Looking again at Figure 2, it is interesting to consider the difference between the gray line (number of activities in the topic scope) and the black line, representing the number of activities that have integrated new topics into the scope and can therefore be considered learning activities. For example, since the user uses many Google services for basic tasks (such as Gmail for email), it is not surprising that the Google scope, while being the largest in activities, does not actually include many detected learning activities. What is obvious, however, is that the balance is quite different for other clusters, which can be clearly identified as including large amounts of learning activities.

Indeed, we can see the value of the process here by comparing the learning trajectories of the learner according to the contributions to the different learning scopes considered. For example, the scope on Digital Technology, representing the largest number of learning activities, can be seen in Figure 3 (top) as a broad topic on which the learner is constantly (almost every day) learning new things. In contrast, the learning scope on Web Programming, although very related, is one where we can assume the learner already has some familiarity and only makes a significant increment in their learning punctually, as can be seen by the jump around 08 September in Figure 3 (bottom).

Fig. 3.
Trajectory in terms of the contribution (in number of topics) to the learning scope in Digital Technology (top) and Web Programming (bottom).

5 Support Challenge: From Metrics to Actions

The current state of the implementation of the aforementioned aspects takes the form of a prototype learner dashboard, available from the Didactalia platform. The dashboard illustrated in Figure 4 includes initial placeholder indicators for the kinds of frictions identified in Section 3 and is implemented on the technologies described above. It is, however, a preliminary result, showing the ability to technically integrate the different AFEL components into a first product. It will be further evolved in order to truly address the scenario of Section 2, including user feedback and more accurate indicators.

A key aspect in achieving the goal of our everyday learning scenario is that the user should have control over what is being monitored. Indeed, the learner should be able to decide what area of the data should be displayed, according to which indicator and which dimension of the data (e.g. specific topics, times, resources or platforms). Our approach here is to rely on a framework for flexible dashboards based on visualization recommendation, implemented through the VizRec tool [14]. At the root of VizRec lies a visualization engine that extracts the basic features of the data, guiding the user in choosing appropriate ways to visualise them.

Fig. 4. Screenshot of the prototype learner dashboard.

In this way, a learning expert may design a dashboard with an initial view of a set of learning indicators, but VizRec also empowers the user in choosing what area of the data to show. This includes the ability to add new charts to the dashboard that can be selected based on the characteristics of the data (e.g. show a map for geographical data).
The tool can learn the user preferences, and therefore show a personalized dashboard which is always consistent with the visualization choices made by the user. Figure 5 shows an example of VizRec displaying multidimensional learning data. A scatterplot correlates the number of previous attempts with studied credits, showing that the number of previous attempts is smaller when the number of studied credits is high. The grouped bar chart displays the number of previous attempts for female (right) and male (left) students, with genders being further subdivided by the highest level of education (encoded by color). It is obvious that education level has a very similar effect for both females and males. Notice that in the VisPicker (shown on the right) only some visualizations are enabled, which is a direct consequence of the data dimensions chosen by the user: gender, highest education, number of previous attempts (shown on the left). The user is free to choose only the enabled, meaningful visualizations, with the optional possibility of the system recommending the optimal representation based on previous user behavior. As the title of this section suggests, it is important to move from metrics to action and consider what the learner should do, having seen her status.

One way to move the learner to action is via recommending learning resources that appear to be relevant considering the current state of the learner [9]. Here, the monitoring of learning activities has a direct benefit in supplying recommendations to the learner. The current implementation of such a recommender system is based on two well-known approaches: (i) content-based filtering, which recommends similar resources based on the content of a given resource, and (ii) collaborative filtering, which recommends resources of similar users based on the learning activities of a given user [18].

Fig. 5.
Example use of the VizRec tool for personalized dashboards.

However, an important aspect which is still missing is how such measures of similarity can be based on metrics that are relevant to learning, rather than on basic content or profile similarity. Indeed, the objective here would be to recommend learning resources (or even learning resource paths) that have already been helpful for other users with a similar learning goal and a similar learning state (in terms of the concepts, complexities and views already encountered). In other words, the recommendations can be based on a meaningful view of what the suggested resources might contribute to learning.

6 Discussion: Towards Widely Available, Ethical Tools for Self-Tracking of Online Learning

In the previous sections, we discussed how to theoretically and technically implement tools for self-awareness targeted at self-directed online learning. Those tools are currently at early stages of development. Beyond those aspects, however, other challenges will be faced by the AFEL consortium. One of them is facilitating the adoption of these tools by a wide variety of users. Indeed, the actual usefulness and value of such personal analytics dashboards and learning assistant technologies have not been formally assessed, and the participation of the learner community in their development is necessary in order to ensure that they reach their potential. The approach taken by AFEL here is to start with the community of learners on the Didactalia platform, enabling the dashboard for them and, through that, supporting them in integrating data from other platforms.
With a large number of users, we will be able to collect enough data to understand how such monitoring can truly support users in reaching awareness of their learning behavior, and how this can help them take decisions with respect to their own learning.

Another aspect not discussed in this paper is the ethical implications of realizing such tools and reaching wide adoption. As mentioned above, each of the learners is assigned their own data space on the AFEL platform, which is only accessible by them. However, as mentioned in the scenario of Section 2, support to the learner might be better achieved by enabling them to compare their own behavior with others, and we aim to make some aggregated data available to others for research purposes. Proper anonymisation techniques need to be applied in order to ensure that external parties cannot infer information about specific learners from having access to those tools and data.

Beyond privacy, however, it is also important to ensure that the effect of the tool does not turn out to be negative. Existing work has shown a number of ethical harms that might come out of enabling self-governance in a number of domains, despite the obvious positive effects [24]. Those include introducing biases towards common learning behaviors, or pushing learners towards excessive behaviors for the purpose of improving the values of indicators that are necessarily only approximate representations of learning.
Activities within and connected to the AFEL project have the specific objective of tackling those aspects, through establishing contrasting scenarios of the possible effects of self-tracking tools as a basis to engage with users of those tools about the ways to avoid the negative effects while keeping the positive ones.

Acknowledgement

This work has received funding from the European Union's Horizon 2020 research and innovation programme as part of the AFEL (Analytics for Everyday Learning) project under grant agreement No 687916.

References

1. A. Adamou and M. d'Aquin. On requirements for federated data integration as a compilation process. In Proceedings of the PROFILES 2015 workshop, 2015.
2. U. Cress and J. Kimmerle. A systemic and cognitive view on collaborative knowledge building with wikis. International Journal of Computer-Supported Collaborative Learning, 2(3), 2008.
3. S. A. Crossley, J. Greenfield, and D. S. McNamara. Assessing text readability using cognitively based indices. TESOL Quarterly, 42(3):475–493, 2008.
4. M. De Laat and F. R. Prinsen. Social learning analytics: Navigating the changing settings of higher education. Research & Practice in Assessment, 9, 2014.
5. S. Dietze, S. Sanchez-Alonso, H. Ebner, H. Qing Yu, D. Giordano, I. Marenzi, and B. Pereira Nunes. Interlinking educational resources and the web of data: A survey of challenges and approaches. Program, 47(1):60–91, 2013.
6. R. Ferguson. Learning analytics: drivers, developments and challenges. International Journal of Technology Enhanced Learning, 4(5-6):304–317, 2012.
7. I. Hasani-Mavriqi, F. Geigl, S. C. Pujari, E. Lex, and D. Helic. The influence of social status and network structure on consensus building in collaboration networks. Social Network Analysis and Mining, 6(1):80, 2016.
8. J. Kimmerle, J. Moskaliuk, A. Oeberst, and U. Cress.
Learning and collective knowledge construction with social media: A process-oriented perspective. Educational Psychologist, (50), 2015.
9. S. Kopeinik, E. Lex, P. Seitlinger, D. Albert, and T. Ley. Supporting collaborative learning with tag recommendations: a real-world study in an inquiry-based classroom project. In LAK, pages 409–418, 2017.
10. J. Lave and E. Wenger. Situated learning: Legitimate peripheral participation. Cambridge University Press, 1991.
11. C. D. Manning, M. Surdeanu, J. Bauer, J. R. Finkel, S. Bethard, and D. McClosky. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pages 55–60, 2014.
12. V. Marsick and K. Watkins. Lessons from informal and incidental learning. Management learning: Integrating perspectives in theory and practice, 1997.
13. C. McLoughlin and M. Lee. Personalised and self regulated learning in the web 2.0 era: International exemplars of innovative pedagogy using social software. Australasian Journal of Educational Technology, 1(26), 2010.
14. B. Mutlu, E. Veas, and C. Trattner. VizRec: Recommending personalized visualizations. ACM Trans. Interact. Intell. Syst., 6(4):31:1–31:39, Nov. 2016.
15. G. Neff and D. Nafus. Self-Tracking. MIT Press, 2016.
16. A. Oeberst, J. Kimmerle, and U. Cress. What is knowledge? Who creates it? Who possesses it? The need for novel answers to old questions. Mass collaboration and education, 2016.
17. S. Paavola, L. Lipponen, and K. Hakkarainen. Models of innovative knowledge communities and three metaphors of learning. Review of Educational Research, 4(74), 2004.
18. P. Seitlinger, D. Kowald, S. Kopeinik, I. Hasani-Mavriqi, T. Ley, and E. Lex. Attention please! A hybrid resource recommender mimicking attention-interpretation dynamics. In Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion, pages 339–345, New York, NY, USA, 2015. ACM.
19. A. Sfard.
On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 2(27), 1998.
20. S. B. Shum and R. Ferguson. Social learning analytics. Journal of Educational Technology & Society, 15(3):3, 2012.
21. M. Simon. Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 2(26), 1995.
22. D. Stanisavljevic, I. Hasani-Mavriqi, E. Lex, M. Strohmaier, and D. Helic. Semantic stability in Wikipedia. In International Workshop on Complex Networks and their Applications, pages 379–390. Springer, 2016.
23. H. Stubbé and N. Theunissen. Self-directed adult learning in a ubiquitous learning environment: A meta-review. In Proceedings of the First Workshop on Technology Support for Self-Organized Learners, 2008.
24. J. R. Whitson. Foucault's Fitbit: Governance and gamification. The Gameful World - Approaches, Issues, Applications, 2014.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "UiObT719zlL", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=UiObT719zlL", "arxiv_id": null, "doi": null }
{ "title": "Response to reviewer e37p", "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "4Z2Jx3Exw53", "year": null, "venue": null, "pdf_link": null, "forum_link": "https://openreview.net/forum?id=4Z2Jx3Exw53", "arxiv_id": null, "doi": null }
{ "title": "Response to Reviewer e1R7", "authors": [], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "VJyM0nAqMJ1", "year": null, "venue": "Bull. EATCS 2018", "pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/524/515", "forum_link": "https://openreview.net/forum?id=VJyM0nAqMJ1", "arxiv_id": null, "doi": null }
{ "title": "Alonzo Church Award 2018 - Call for Nominations", "authors": [ "Thomas Eiter", "Javier Esparza", "Catuscia Palamidessi", "Gordon D. Plotkin", "Natarajan Shankar" ], "abstract": "Alonzo Church Award 2018 - Call for Nominations", "keywords": [], "raw_extracted_content": "Alonzo Church Award 2018\nCall for Nominations\nDeadline : March 1, 2018.\nIntroduction:\nAn annual award, called the \"Alonzo Church Award for Outstanding Contri-\nbutions to Logic and Computation\" was established in 2015 by the ACM Special\nInterest Group for Logic and Computation (SIGLOG), the European Association\nfor Theoretical Computer Science (EATCS), the European Association for Com-\nputer Science Logic (EACSL), and the Kurt Gödel Society (KGS). The award\nis for an outstanding contribution represented by a paper or by a small group of\npapers published within the past 25 years. This time span allows the lasting im-\npact and depth of the contribution to have been established. The award can be\ngiven to an individual, or to a group of individuals who have collaborated on the\nresearch. For the rules governing this award, see: http: //siglog.org /awards /alonzo-\nchurch-award. The 2017 Alonzo Church Award was given jointly to Samson\nAbramsky, Radha Jagadeesan, Pasquale Malacaria, Martin Hyland, Luke Ong,\nand Hanno Nickau for providing a fully-abstract semantics for higher-order com-\nputation through the introduction of game models, see: http: //siglog.org /winners-\nof-the-2017-alonzo-church-award\nEligibility and Nominations:\nThe contribution must have appeared in a paper or papers published within the\npast 25 years. Thus, for the 2018 award, the cut-o \u000bdate is January 1, 1993. When\na paper has appeared in a conference and then in a journal, the date of the journal\npublication will determine the cut-o \u000bdate. 
In addition, the contribution must not yet have received recognition via a major award, such as the Turing Award, the Kanellakis Award, or the Gödel Prize. (The nominee(s) may have received such awards for other contributions.) While the contribution can consist of conference or journal papers, journal papers will be given a preference.

Nominations for the 2018 award are now being solicited. The nominating letter must summarise the contribution and make the case that it is fundamental and outstanding. The nominating letter can have multiple co-signers. Self-nominations are excluded. Nominations must include: a proposed citation (up to 25 words); a succinct (100-250 words) description of the contribution; and a detailed statement (not exceeding four pages) to justify the nomination. Nominations may also be accompanied by supporting letters and other evidence of worthiness.

Nominations are due by March 1, 2018, and should be submitted to [email protected]

Presentation of the Award:

The 2018 award will be presented at ICALP 2018, the International Colloquium on Automata, Languages and Programming. The award will be accompanied by an invited lecture by the award winner, or by one of the award winners. The awardee(s) will receive a certificate and a cash prize of USD 2,000. If there are multiple awardees, this amount will be shared.

Award Committee:

The 2018 Alonzo Church Award Committee consists of the following five members:

- Thomas Eiter
- Javier Esparza
- Catuscia Palamidessi (chair)
- Gordon Plotkin
- Natarajan Shankar
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rdpyfEibZb", "year": null, "venue": "Bull. EATCS 2016", "pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/419/399", "forum_link": "https://openreview.net/forum?id=rdpyfEibZb", "arxiv_id": null, "doi": null }
{ "title": "EATCS Fellows' Advice to the Young Theoretical Computer Scientist", "authors": [ "Luca Aceto", "Mariangiola Dezani-Ciancaglini", "Yuri Gurevich", "David Harel", "Monika Henzinger", "Giuseppe F. Italiano", "Scott A. Smolka", "Paul G. Spirakis", "Wolfgang Thomas" ], "abstract": null, "keywords": [], "raw_extracted_content": "EATCS F ellows ’ Advice to the Young\nTheoretical Computer Scientist\nLuca Aceto (Reykjavik University)\nwith contributions by Mariangiola Dezani-Ciancaglini,\nYuri Gurevich, David Harel, Monika Henzinger,\nGiuseppe F. Italiano, Scott Smolka,\nPaul G. Spirakis and Wolfgang Thomas\nI have always enjoyed reading articles, interviews, blog posts and books in\nwhich top-class scientists share their experience with, and provide advice to,\nyoung researchers. In fact, despite not being young any more, alas, I feel that\nI invariably learn something new by reading those pieces, which, at the very least,\nremind me of the things that I should be doing, and that perhaps I am notdoing,\nto uphold high standards in my job.\nBased on my partiality for scientific advice and stories, it is not overly surpris-\ning that I was struck by the thought that it would be interesting to ask the EATCS\nFellows for\n\u000fthe advice they would give to a student interested in theoretical computer\nscience (TCS),\n\u000fthe advice they would give to a young researcher in TCS and\n\u000fa short description of a research topic that excites them at this moment in\ntime (and possibly why).\nIn this article, whose title is inspired by the classic book Advice To A Young Scien-\ntistauthored by the Nobel Prize winner Sir Peter Brian Medawar, I collect the an-\nswers to the above-listed questions I have received from some of the EATCS Fel-\nlows. 
The real authors of this piece are Mariangiola Dezani-Ciancaglini (University of Turin), Yuri Gurevich (Microsoft Research), David Harel (Weizmann Institute of Science), Monika Henzinger (University of Vienna), Giuseppe F. Italiano (University of Rome Tor Vergata), Scott Smolka (Stony Brook University), Paul G. Spirakis (University of Liverpool, University of Patras and Computer Technology Institute & Press “Diophantus”, Patras) and Wolfgang Thomas (RWTH Aachen University), whom I thank for their willingness to share their experience and wisdom with all the members of the TCS community. In an accompanying essay, which follows this one in this issue of the Bulletin, you will find the piece I received from Michael Fellows (University of Bergen).\nThe EATCS Fellows are model citizens of the TCS community, have varied work experiences and backgrounds, and span a wide spectrum of research areas. One can learn much about our field of science and about academic life in general by reading their thoughts. In order to preserve the spontaneity of their contributions, I have chosen to present them in an essentially unedited form. I hope that the readers of this article will enjoy them as much as I have done.\nMariangiola Dezani-Ciancaglini\nThe advice I would give to a student interested in TCS is: Your studies will be satisfactory only if understanding for you is fun, not a duty.\nTo a young researcher in TCS I would say, “Do not be afraid if you do not see applications of the theory you are investigating: the history of computer science shows that elegant theories developed with passion will eventually have long-lasting success.”\nA research topic that currently excites me is the study of behavioural types. These types allow for fine-grained analysis of communication-centred computations. 
The new generation of behavioural types should allow programmers to write the certified, self-adapting and autonomic code that the market is requiring.\nYuri Gurevich\nAdvice I would give to a student interested in TCS\nAttending math seminars (mostly in my past), I noticed a discord. Experts in areas like complex analysis or PDEs (partial differential equations) typically presume that everybody knows Fourier transforms, differential forms, etc., while logicians tend to remind the audience of basic definitions (like what’s first-order logic) and theorems (e.g. the compactness theorem). Many talented mathematicians didn’t take logic in their college years, and they need those reminders. How come? Why don’t they effortlessly internalize those definitions and theorems once and for all? This is not because those definitions and theorems are particularly hard (they are not) but because they are radically different from what they know. It is easier to learn radically different things — whether it is logic or PDEs or AI — in your student years. Open your mind and use this opportunity!\nAdvice I would give a young researcher in TCS\nAs the development of physics caused a parallel development of physics-applied mathematics, so the development of computer science and engineering causes a parallel development of theoretical computer science. TCS is an applied science. Applications justify it and give it value. I would counsel to take applications seriously and honestly. Not only immediate applications, but also applications down the line. Of course, like in mathematics, there are TCS issues of intrinsic value. And there were cases when the purest mathematics eventually was proven valuable and applied. But in most cases, potential applications not only justify research but also provide guidance of sorts. Almost any subject can be developed in innumerable ways. But which of those ways are valuable? 
The application guidance is indispensable.\nI mentioned computer engineering above for a reason. Computer science is different from natural science like physics, chemistry, biology. Computers are artifacts, not “naturefacts.” Hence the importance of computer science and engineering as a natural area whose integral part is computer science.\nA short description of a research topic that excites me at this moment in time (and possibly why)\nRight now, the topics that excite me most are quantum mechanics and quantum computing. I wish I could say that this is the result of a natural development of my research. But this isn’t so. During my long career, I moved several times from one area to another. Typically it was natural; e.g. the theory of abstract state machines developed in academia brought me to industry. But the move to quanta was spontaneous. There was an opportunity (they started a new quantum group at the Microsoft Redmond campus a couple of years ago), and I jumped upon it. I always wanted to understand quantum theory but occasional reading would not help as my physics had been poor to none and I haven’t been exposed much to the mathematics of quantum theory. In a sense I am back to being a student and discovering a new world of immense beauty and mystery, except that I do not have the luxury of having time to study things systematically. But that is fine. Life is full of challenges. That makes it interesting.\nDavid Harel\nAdvice I would give to a student interested in TCS\nIf you are already enrolled in a computer science program, then unless you feel you are of absolutely stellar theoretical quality and the real world and its problems do not attract you at all, I’d recommend that you spend at least 2/3 of your course efforts on a variety of topics related to TCS but not “theory for the sake of theory”. Take lots of courses on languages, verification, AI, databases, systems, hardware, etc. 
But clearly don’t shy away from pure mathematics. Being well-versed in a variety of topics in mathematics can only do you good in the future. If you are still able to choose a study program, go for a combination: TCS combined with software and systems engineering, for example, or bioinformatics/systems biology. I feel that computer science (not just programming, but the deep underlying ideas of CS and systems) will play a role in the science of the 21st century (which will be the century of the life sciences) similar to that played by mathematics in the science of the 20th century (which was the century of the physical sciences).\nAdvice I would give a young researcher in TCS\nMuch of the above is relevant to young researchers too. Here I would add the following two things. First, if you are doing pure theory, then spend at least 1/3 of your time on problems that are simpler than the real hard one you are trying to solve. You might indeed succeed in settling the P=NP? problem, or the question of whether PTIME on general finite structures is r.e., but you might not. Nevertheless, in the latter case you’ll at least have all kinds of excellent, if less spectacular, results under your belt. Second, if you are doing research that is expected to be of some practical value, go talk to the actual people “out there”: engineers, programmers, system designers, etc. Consult for them, or just sit with them and see their problems first-hand. There is nothing better for good theoretical or conceptual research that may have practical value than dirtying your hands in the trenches.\nA short description of a research topic that excites me at this moment in time (and possibly why)\nI haven’t done any pure TCS for 25 years, although in work my group and I do on languages and software engineering there is quite a bit of theory too, as is the case in our work on biological modeling. 
However, for many years, I’ve had a small but nagging itch for trying to make progress on the problem of artificial olfaction — the ability to record and remotely produce faithful renditions of arbitrary odors. This is still a far-from-solved issue, and is the holy grail of the world of olfaction. Addressing it involves chemistry, biology, psychophysics, engineering, mathematics and algorithmics (and is a great topic for young TCS researchers!). More recently, I’ve been thinking about the question of how to test the validity of a candidate olfactory reproduction system, so that we have an agreed-upon criterion of success for when such systems are developed. It is a kind of common-sense question, but one that appears to be very interesting, and not unlike Turing’s 1950 quest for testing AI, even though such systems were nowhere in sight at the time. In the present case, trying to compare testing artificial olfaction to testing the viability of sight and sound reproduction will not work, for many reasons. After struggling with this for quite a while, I now have a proposal for such a test, which is under review.\nMonika Henzinger\n• Students interested in TCS should really like their classes in TCS and be good at mathematics.\n• I advise young researchers in TCS to try to work on important problems that have a relationship to real life.\n• Currently I am interested in understanding the exact complexity of different combinatorial problems in P (upper and lower bounds).\nGiuseppe F. Italiano\nThe advice I would give to a student interested in TCS\nThere’s a great quote by Thomas Huxley: “Try to learn something about everything and everything about something.” When working through your PhD, you might end up focusing on a narrow topic so that you will fully understand it. That’s really great! 
But one of the wonderful things about Theoretical Computer Science is that you will still have the opportunity to learn the big picture!\nThe advice I would give a young researcher in TCS\nKeep working on the problems you love, but don’t be afraid to learn things outside of your own area. One good way to learn things outside your area is to attend talks (and even conferences) outside your research interests. You should always do that!\nA short description of a research topic that excites me at this moment in time (and possibly why)\nI am really excited by recent results on conditional lower bounds, sparked by the work of Virginia Vassilevska Williams et al. It is fascinating to see how a computational complexity conjecture such as SETH (Strong Exponential Time Hypothesis) had such an impact on the hardness results for many well-known basic problems.\nScott Smolka\nAdvice I would give to a student interested in TCS\nNot surprisingly, it all starts with the basics: automata theory, formal languages, algorithms, complexity theory, programming languages and semantics.\nAdvice I would give a young researcher in TCS\nGo to conferences and establish connections with more established TCS researchers. Seek to work with them and see if you can arrange visits at their home institutions for a few months.\nA short description of a research topic that excites me at this moment in time (and possibly why)\nBird flocking and V-formation are topics I find very exciting. Previous approaches to this problem focused on models of dynamic behavior based on simple rules such as: Separation (avoid crowding neighbors), Alignment (steer towards average heading of neighbors), and Cohesion (steer towards average position of neighbors). 
My collaborators and I are instead treating this as a problem of Optimal Control, where the fitness function takes into account Velocity Matching (alignment), Upwash Benefit (birds in a flock moving into the upwash region of the bird(s) in front of them), and Clear View (birds in the flock having unobstructed views). What’s interesting about this problem is that it is inherently distributed in nature (a bird can only communicate with its nearest neighbors), and one can argue that our approach more closely mimics the neurological process birds use to achieve these formations.\nPaul G. Spirakis\nMy advice to a student interested in TCS\nPlease be sure that you really like Theory! The competition is high, you must love mathematics, and the money prospects are usually not great. The best years of life are the student years. Theory requires dedication. Are you ready for this?\nGiven the above, try to select a good advisor (with whom you can interact well and frequently). The problem you choose to work on should psyche you and your advisor!\nIt is good to obtain a spherical and broad knowledge of the various Theory subdomains. Surprisingly, one subfield affects another in unexpected ways.\nFinally, study and work hard and be up to date with respect to results and techniques!\nMy advice to a young researcher interested in TCS\nAlmost all research problems have some difficulty. But not all of them are equally important! So, please select your problems to solve carefully! Ask yourself and others: why is this a nice problem? Why is it interesting and to which community? Be strategic!\nAlso, a problem is good if it is manageable in a finite period of time. This means that if you try to solve something open for many years, be sure that you will need great ideas, and maybe lots of time! However, be ambitious! Maybe you will get the big solution! 
The issue of ambition versus reasonable progress is something that you must discuss with yourself!\nIt is always advisable to have at least two problems to work on, at any time. When you get tired from the main front, you turn your attention to the other problem.\nTry to interact and to announce results frequently, if possible in the best forums. Be visible! It is important that other good people know about you. “Speak out to survive!”\nStudy hard and read the relevant literature in depth. Try to deeply understand techniques and solution concepts and methods. Every paper you read may lead to a result of yours if you study it deeply and question every line carefully! Find quiet times to study hard. Control your time!\nA field that excites me: the discrete dynamics of probabilistic (finite) population protocols\nPopulation Protocols are a recent model of computation that captures the way in which complex behavior of systems can emerge from the underlying local interactions of agents. Agents are usually anonymous and the local interaction rules are scalable (independent of the size, n, of the population). Such protocols can model the antagonism between members of several “species” and relate to evolutionary games.\nIn the recent past I was involved in joint research studying the discrete dynamics of cases of such protocols for finite populations. Such dynamics are, usually, probabilistic in nature, either due to the protocol itself or due to the stochastic nature of scheduling local interactions. Examples are (a) the generalized Moran process (where the protocol is evolutionary because a fitness parameter is crucially involved), (b) the Discrete Lotka-Volterra Population Protocols (and associated Cyclic Games) and (c) the Majority protocols for random interactions.\nSuch protocols are usually discrete-time transient Markov Chains. 
However, the detailed description of the states of such chains is exponential in size and the state equations do not facilitate a rigorous approach. Instead, ideas related to filtering, stochastic domination and Potentials (leading to Martingales) may help in understanding the dynamics of the protocols.\nSome such dynamics can describe strategic situations (games): examples include Best-Response Dynamics, Peer-to-Peer Market dynamics, fictitious play, etc.\nSuch dynamics need rigorous approaches and new concepts and techniques. The ‘traditional’ approach with differential equations (found in e.g. evolutionary game theory books) is not enough to explain what happens when such dynamics take place (for example) in finite graphs with the players in the nodes and with interactions among neighbours. Some main questions are: How fast do such dynamics converge? What is a ‘most probable’ eventual state of the protocols (and the computation of the probability of such states)? In case of game dynamics, what is the kind of ‘equilibria’ to which they converge? Can we design ‘good’ discrete dynamics (that converge fast and go to desirable stable states)? What is the complexity of predicting most probable or eventual behaviour in such dynamics?\nSeveral aspects of such discrete dynamics are wide open and it seems that algorithmic thought can contribute to the understanding of this emerging subfield of science.\nWolfgang Thomas, “Views on work in TCS”\nAs one of the EATCS fellows I have been asked to contribute some personal words of advice for younger people and on my research interests. 
Well, I try my best.\nRegarding advice to a student and young researcher interested in TCS, I start with two short sentences:\n• Read the great masters (even when their h-index is low).\n• Don’t try to write ten times as many papers as a great master did.\nAnd then I add some words on what influenced me when I started research — you may judge whether my own experiences that go back to “historical” times would still help you.\nBy the way, advice from historical times, where blackboards and no projectors were used, posed in an entertaining but clearly wise way, is Gian-Carlo Rota’s paper “Ten Lessons I Wish I Had Been Taught” (http://www.ams.org/notices/199701/comm-rota.pdf). This is a view of a mathematician but still worth reading and delightful for EATCS members. People like me (68 years old) are also addressed — in the last lesson “Be Prepared for Old Age”...\nBack in the 1970’s when I started I wanted to do something relevant. For me this meant that there should be some deeper problems involved, and that the subject of study is of long-term interest. I was attracted by the works of Büchi and Rabin just because of this: That was demanding, and it treated structures that will be important also in a hundred years: the natural numbers with successor, and the tree of all words (over some alphabet) with successor functions that represent the attachment of letters.\nThe next point is a variation of this. It is a motto I learnt from Büchi, and it is a warning not to join too small communities where the members just cite each other. In 1977, when he had seen my dissertation work, Büchi encouraged me to continue but also said: Beware of becoming a member of an MAS, and he explained that this means “mutual admiration society”. I think that his advice was good.\nI am also asked to say something about principles for the postdoctoral phase. It takes determination and devotion to enter it. 
I can say just two things, from my own experience as a young person and from later times. First, as it happens with many postdocs, in my case it was unclear up to the very last moment whether I would get a permanent position. In the end I was lucky. But it was a strain. I already prepared for a gymnasium teacher’s career. And when at a scientific party I spoke to Saharon Shelah (one of the giants of model theory) about my worries, he said “well, there is competition”. How true. So here I just say: Don’t give away your hopes — and good luck. The other point is an observation from my time as a faculty member, and it means that good luck may be actively supported. When a position is open the people in the respective department do not just want a brilliant researcher and teacher but also a colleague. So it is an important advantage when one can prove that one has more than just one field where one can actively participate, that one can enter new topics (which anyway is necessary in a job which lasts for decades), and that one can cooperate (beyond an MAS). So for the postdoc phase this means to look for a balance between work on your own and work together with others, and if possible in different teams of cooperation.\nFinally, a comment on a research topic that excites me at this moment. I find it interesting to extend more chapters of finite automata theory to the infinite. This has been done intensively in two ways already — we know automata with infinite “state space” (e.g., pushdown automata where “states” are combined from control states and stack contents), and we know automata over infinite words (infinite sequences of symbols from a finite alphabet). Presently I am interested in words (or trees or other objects) where the alphabet is infinite, for example where a letter is a natural number, and in general where the alphabet is given by an infinite model-theoretic structure. 
Infinite words over the alphabet N have been well known in mathematics for a hundred years (they are called points of the Baire space there). In computer science, one is interested in algorithmic results which have not been the focus in classical set theory and mathematics, so much is to be done here.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "kpcz34Px6h", "year": null, "venue": "Bull. EATCS 2006", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=kpcz34Px6h", "arxiv_id": null, "doi": null }
{ "title": "Eight Open Problems in Distributed Computing", "authors": [ "James Aspnes", "Costas Busch", "Shlomi Dolev", "Panagiota Fatourou", "Chryssis Georgiou", "Alexander A. Shvartsman", "Paul G. Spirakis", "Roger Wattenhofer" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "qX0hZyJ9M9l", "year": null, "venue": "Bull. EATCS 2016", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=qX0hZyJ9M9l", "arxiv_id": null, "doi": null }
{ "title": "Viewpoints on \"Logic activities in Europe\", twenty years later", "authors": [ "Luca Aceto", "Thomas A. Henzinger", "Joost-Pieter Katoen", "Wolfgang Thomas", "Moshe Y. Vardi" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "pw1dcNPcS5zR", "year": null, "venue": "Bull. EATCS 2018", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=pw1dcNPcS5zR", "arxiv_id": null, "doi": null }
{ "title": "Report on SEA 2018", "authors": [ "Gianlorenzo D'Angelo", "Mattia D'Emidio" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "KieghpToLy", "year": null, "venue": "Bull. EATCS 2018", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=KieghpToLy", "arxiv_id": null, "doi": null }
{ "title": "Phase Transition of the 2-Choices Dynamics on Core-Periphery Networks", "authors": [ "Emilio Cruciani", "Emanuele Natale", "André Nusser", "Giacomo Scornavacca" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LAryfwb0NF", "year": null, "venue": "Bull. EATCS 2022", "pdf_link": "http://eatcs.org/beatcs/index.php/beatcs/article/download/735/777", "forum_link": "https://openreview.net/forum?id=LAryfwb0NF", "arxiv_id": null, "doi": null }
{ "title": "Interviews with the 2022 CONCUR Test-of-Time Award Recipients", "authors": [ "Luca Aceto", "Orna Kupferman", "Mickael Randour", "Davide Sangiorgi" ], "abstract": null, "keywords": [], "raw_extracted_content": "Interviews with the 2022 CONCUR Test-of-Time Award Recipients\nLuca Aceto\nICE-TCS, Department of Computer Science, Reykjavik University\nGran Sasso Science Institute, L’Aquila\[email protected], [email protected]\nOrna Kupferman\nSchool of Computer Science and Engineering\nHebrew University, Jerusalem\[email protected]\nMickael Randour\nFaculty of Science, Mathematics Department\nUniversité de Mons\[email protected]\nDavide Sangiorgi\nDepartment of Computer Science, University of Bologna\[email protected]\nIn 2020, the CONCUR conference series instituted its Test-of-Time Award, whose purpose is to recognise important achievements in Concurrency Theory that were published at the CONCUR conference and have stood the test of time. This year, the following four papers were chosen to receive the CONCUR Test-of-Time Awards for the periods 1998–2001 and 2000–2003 by a jury consisting of Ilaria Castellani (chair), Paul Gastin, Orna Kupferman, Mickael Randour and Davide Sangiorgi. (The papers are listed in chronological order.)\n• Christel Baier, Joost-Pieter Katoen and Holger Hermanns. Approximate symbolic model checking of continuous-time Markov chains. CONCUR 1999.\n• Franck Cassez and Kim Guldstrand Larsen. The Impressive Power of Stopwatches. CONCUR 2000.\n• James J. Leifer and Robin Milner. Deriving Bisimulation Congruences for Reactive Systems. CONCUR 2000.\n• Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga. The Element of Surprise in Timed Games. 
CONCUR 2003.\nThis article is devoted to interviews with the recipients of the Test-of-Time Award. More precisely,\n• Orna Kupferman interviewed Christel Baier, Joost-Pieter Katoen and Holger Hermanns;\n• Luca Aceto interviewed Franck Cassez and Kim Guldstrand Larsen;\n• Davide Sangiorgi interviewed James Leifer; and\n• Luca Aceto and Mickael Randour jointly interviewed Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga.\nWe are very grateful to the awardees for their willingness to answer our questions and hope that the readers of this article will enjoy reading the interviews as much as we did.\nInterview with C. Baier, J.-P. Katoen and H. Hermanns\nIn what follows, BHK refers to Baier, Katoen and Hermanns.\nOrna: You receive the CONCUR Test-of-Time Award 2022 for your paper “Approximate symbolic model checking of continuous-time Markov chains,” which appeared at CONCUR 1999 [1]. In that article, you combine three different challenges: symbolic algorithms, real-time systems, and probabilistic systems. Could you briefly explain to our readers what the main challenge in such a combination is?\n[1] See https://link.springer.com/content/pdf/10.1007/3-540-48320-9_12.pdf\nBHK: The main challenge is to provide a fixed-point characterization of time-bounded reachability probabilities: the probability to reach a given target state within a given deadline. Almost all works in the field up to 1999 treated discrete-time probabilistic models and focused on “just” reachability probabilities: what is the probability to eventually end up in a given target state? This can be characterized as a unique solution of a linear equation system. The question at stake was: how to incorporate a real-valued deadline d? The main insight was to split the problem in staying a certain amount of time, x say, in the current state and using the remaining d - x time to reach the target from its successor state. 
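The split just described can be written out as a recursive equation. The following display is an editor's sketch of that recursion, not a formula quoted from the paper; the notation R(s, s') for the rate of the transition from s to s', E(s) for the total exit rate of s, B for the target set, and Prob(s, d) for the probability of reaching B from state s within deadline d is assumed here for illustration. For s in B one has Prob(s, d) = 1, and otherwise

```latex
\mathrm{Prob}(s,d) \;=\; \sum_{s'} \int_{0}^{d} R(s,s')\, e^{-E(s)\,x}\; \mathrm{Prob}\bigl(s',\, d-x\bigr)\, \mathrm{d}x ,
```

where R(s, s') e^{-E(s) x} is the density of leaving s towards s' after residing exactly x time units in s.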
This yields a Volterra integral equation system; indeed time-bounded reachability probabilities are unique solutions of such equation systems. In the CONCUR 1999 paper we suggested to use symbolic data structures to do the numerical integration; later we found out that much more efficient techniques can be applied.\nOrna: Could you tell us how you started your collaboration on the award-winning paper? In particular, as the paper combines three different challenges, is it the case that each of you has brought to the research different expertise?\nBHK: Christel and Joost-Pieter were both in Birmingham, where a meeting of a collaboration project between German and British research groups on stochastic systems and process algebra took place. There the first ideas of model checking continuous-time Markov chains arose, especially for time-bounded reachability: with stochastic process algebras there were means to model CTMCs in a compositional manner, but verification was lacking. Back in Germany, Holger suggested to include a steady-state operator, the counterpart of transient properties that can be expressed using timed reachability probabilities. We then also developed the symbolic data structure to support the verification of the entire logic.\nOrna: Your contribution included a generalization of BDDs (binary decision diagrams) to MTDDs (multi-terminal decision diagrams), which allow both Boolean and real-valued variables. What do you think about the current state of symbolic algorithms, in particular the choice between SAT-based methods and methods that are based on decision diagrams?\nBHK: BDD-based techniques entered probabilistic model checking in the mid-1990s for discrete-time models such as Markov chains. Our paper was one of the first, perhaps even the first, that proposed to use BDD structures for real-time stochastic processes. 
Nowadays, SAT and in particular SMT-based techniques belong to the standard machinery in probabilistic model checking. SMT techniques are, e.g., used in bisimulation minimization at the language level, counterexample generation, and parameter synthesis. This includes both linear as well as non-linear theories. BDD techniques are still used, mostly in combination with sparse representations, but it is fair to say that SMT is becoming more and more relevant.\nOrna: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you’d like to see solved?\nBHK: This depends a bit on whom you ask! Christel’s recent work is about cause-effect reasoning and notions of responsibility in the verification context. This ties into the research interest of Holger who looks at the foundations of perspicuous software systems. This research is rooted in the observation that the explosion of opportunities for software-driven innovations comes with an implosion of human opportunities and capabilities to understand and control these innovations. Joost-Pieter focuses on pushing the borders of automation in weakest-precondition reasoning of probabilistic programs. This involves loop invariant synthesis, probabilistic termination proofs, the development of deductive verifiers, and so forth. Challenges are to come up with good techniques for synthesizing quantitative loop invariants, or even complete probabilistic programs.\nOrna: What advice would you give to a young researcher who is keen to start working on topics related to symbolic algorithms, real-time systems, and probabilistic systems?\nBHK: Try to keep it smart and simple.\nInterview with Franck Cassez and Kim Guldstrand Larsen\nLuca: You receive the CONCUR Test-of-Time Award 2022 for your paper “The Impressive Power of Stopwatches” [2], which appeared at CONCUR 2000. 
In that article, you showed that timed automata enriched with stopwatches and unobservable time delays have the same expressive power as linear hybrid automata. Could you briefly explain to our readers what timed automata with stopwatches are? Could you also tell us how you came to study the question addressed in your award-winning article? Which of the results in your paper did you find most surprising or challenging?\n[2] See https://link.springer.com/content/pdf/10.1007/3-540-44618-4_12.pdf\nKim: Well, in timed automata all clocks grow with rate 1 in all locations of the automata. Thus you can tell the amount of time that has elapsed since a particular clock was last reset, e.g., due to an external event of interest. A stopwatch is a real-valued variable similar to a regular clock. In contrast to a clock, a stopwatch will in certain locations grow with rate 1 and in other locations grow with rate 0, i.e., it is stopped. As such, a stopwatch gives you information about the accumulated time spent in certain parts of the automata.\nIn modelling schedulability problems for real-time systems, the use of stopwatches is crucial in order to adequately capture preemption. I definitely believe that it was our shared interest in schedulability that brought us to study timed automata with stopwatches. We knew from earlier results by Alur et al. that properties such as reachability were undecidable. But what could we do about this? And how much expressive power would the addition of stopwatches provide?\nIn the paper we certainly put the most emphasis on the latter question, in that we showed that stopwatch automata and linear hybrid automata accept the same class of timed languages, and this was at least for me the most surprising and challenging result. 
However, focusing on impact, I think the approximate zone-based method that we apply in the paper has been extremely important from the point of view of having our verification tool UPPAAL taken up at large by the embedded systems community. It has been really interesting to see how well the over-approximation method actually works.

Luca: In your article, you showed that linear hybrid automata and stopwatch automata accept the same class of timed languages. Would this result still hold if all delays were observable? Do the two models have the same expressive power with respect to finer notions of equivalence, such as timed bisimilarity, say? Did you, or any other colleague, study that problem, assuming that it is an interesting one?

Kim: These are definitely very interesting questions, and they should be studied. As for finer notions of equivalence, e.g., timed bisimilarity, I believe that our translation could be shown to be correct up to some timed variant of the chunk-by-chunk simulation introduced by Anders Gammelgaard in his Licentiat thesis from Aarhus University in 1991³. That could be a good starting point.

Luca: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? Which of your subsequent results on timed and hybrid automata do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising?

Kim: Looking up in DBLP, I see that I have some 28 papers containing the word “scheduling”. For sure stopwatches will have been used in one way or another in these. One thing that we never really examined thoroughly is how well the approximate zone-based technique will work when applied to the translation of linear hybrid automata into stopwatch automata. This would definitely be interesting to find out.

This was the first joint publication between me and Franck.
I fully enjoyed the collaboration on all of the next 10 joint papers. Here the most significant ones are probably the paper at CONCUR 2005, where we presented the symbolic on-the-fly algorithms for synthesis for timed games and the UPPAAL branch TIGA. And later, in the European project GASICS with Jean-François Raskin, we used TIGA in the synthesis of optimal and robust control of a hydraulic system.

³ See https://tidsskrift.dk/daimipb/article/view/6611/5733.

Franck: Using the result in our paper, we can analyse scheduling problems where tasks can be stopped and restarted, using real-time model-checking and a tool like UPPAAL.

To do so, we build a network of stopwatch automata modelling the set of tasks and a scheduling policy, and reduce schedulability to a safety verification problem: avoid reaching states where tasks do not meet their deadlines. Because we over-approximate the state space, our analysis may yield some false positives and may wrongly declare a set of tasks non-schedulable because the over-approximation is too coarse.

In the period 2003–2005, in cooperation with François Laroussinie, we tried to identify some classes of stopwatch automata for which the over-approximation does not generate false positives. We never managed to find an interesting subclass.

This may look like a serious problem in terms of applicability of our result, but in practice it does not matter too much. Most of the time, we are interested in the schedulability of a specific set of tasks (e.g., controlling a plant, a car, etc.), and for these instances we can use our result: if we have false positives, we can refine the models of the tasks and the scheduler and rule them out.
Hopefully, after a few iterations of refinement, we can prove that the set of tasks is schedulable.

The subsequent result of mine on timed and hybrid automata that I probably like best is the one we obtained on solving optimal reachability in timed automata. We had a paper at FSTTCS in 2004⁴ presenting the theoretical results, and a companion paper at GDV 2004⁵ with an implementation using HyTech, a tool for analysing hybrid automata.

I like these results because we ended up with a rather simple proof, after 3–4 years of working on this hard problem.

Luca: Could you tell us how you started your collaboration on the award-winning paper? I recall that Franck was a regular visitor to our department at Aalborg University for some time, but I can’t recall how his collaboration with the UPPAAL group started.

Kim: I am not quite sure I remember how and when I first met Franck. For some time we had already worked substantially with French researchers, in particular from LSV Cachan (François Laroussinie and Patricia Bouyer). I have the feeling that there were quite some strong links between Nantes (where Franck was) and LSV on timed systems in those days. Also, Nantes was the organizer of the PhD school MOVEP five times in the period 1994–2002, and I was lecturing there in one of those years, meeting Olivier Roux and Franck, who were the organizers. Funny enough, this year we are organizing MOVEP in Aalborg. Anyway, at some point Franck became a regular visitor to Aalborg, often for long periods of time—playing on the squash team of the city when he was not working.

⁴ See https://doi.org/10.1007/978-3-540-30538-5_13.
⁵ See https://doi.org/10.1016/j.entcs.2004.07.006.

Franck: As Kim mentioned, I was in Nantes at that time, but I was working with François Laroussinie, who was in Cachan. François had spent some time in Aalborg working with Kim and his group, and he helped organise a mini workshop with Kim in 1999, in Nantes.
That’s when Kim invited me to spend some time in Aalborg, and I visited Aalborg University for the first time from October 1999 until December 1999. This is when we worked on the stopwatch automata paper. We wanted to use UPPAAL to verify systems beyond timed automata.

I visited Kim and his group almost every year from 1999 until 2007, when I moved to Australia. There were always lots of visitors at Aalborg University, and I was very fortunate to be there and learn from the Masters.

I always felt at home at Aalborg University, and I loved all my visits there. The only downside was that I never managed to defeat Kim at badminton. I thought it was a gear issue, but Kim gave me his racket (I still have it) and the score did not change much.

Luca: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you’d like to see solved?

Kim: Currently I am spending quite some time on marrying symbolic synthesis with reinforcement learning for Timed Markov Decision Processes in order to achieve optimal as well as safe strategies for Cyber-Physical Systems.

Luca: Both Franck and you have a very strong track record in developing theoretical results and in applying them to real-life problems. In my, admittedly biased, opinion, your work exemplifies Ben Shneiderman’s Twin-Win Model⁶, which propounds the pursuit of “the dual goals of breakthrough theories in published papers and validated solutions that are ready for widespread dissemination.” Could you say a few words on your research philosophy?

Kim: I completely subscribe to this.
Several early theoretical findings, such as the paper on stopwatch automata, have been key in our sustainable transfer to industry.

Franck: Kim has been a mentor to me for a number of years now, and I certainly learned this approach/philosophy from him and his group.

⁶ See https://www.pnas.org/doi/pdf/10.1073/pnas.1802918115.

We always started from a concrete problem, e.g., scheduling tasks / checking schedulability, and, to validate the solutions, building a tool to demonstrate applicability. The next step was to improve the tool to solve larger and larger problems.

UPPAAL is a fantastic example of this philosophy: the reachability problem for timed automata is PSPACE-complete. That would deter a number of people from trying to build tools to solve this problem. But with smart abstractions, algorithms and data structures, and constant improvement over a number of years, UPPAAL can analyse very large and complex systems. It is amazing to see how UPPAAL is used in several areas, from traffic control to planning and to precisely guiding a needle for an injection.

Luca: What advice would you give to a young researcher who is keen to start working on topics related to formal methods?

Kim: Come to Aalborg, and participate in next year’s MOVEP.

Interview with James Leifer

Davide: How did the work presented in your CONCUR Test-of-Time paper come about?

James: I was introduced to Robin Milner by my undergraduate advisor Bernard Sufrin around 1994. Thanks to that meeting, I started with Robin at Cambridge in 1995 as a fresh Ph.D. student. Robin had recently moved from Edinburgh and had a wonderful research group, including, at various times, Peter Sewell, Adriana Compagnoni, Benjamin Pierce, and Philippa Gardner. There were also many colleagues working or visiting Cambridge interested in process calculi: Davide Sangiorgi, Andy Gordon, Luca Cardelli, Martín Abadi, … It was an exciting atmosphere!
I was particularly close to Peter Sewell, with whom I discussed the ideas here extensively and who was generous with his guidance.

There was a trend in the community at the time of building complex process calculi (for encryption, Ambients, etc.) where the free syntax would be quotiented by a structural congruence to “stir the soup” and allow different parts of a tree to float together; reaction rules (unlabelled transitions) then would permit those agglomerated bits to react, to transform into something new.

Robin wanted to come up with a generalised framework, which he called Action Calculi, for modelling this style of process calculi. His framework would describe graph-like “soups” of atoms linked together by arcs representing binding and sharing; moreover, the atoms could contain subgraphs inside of them for freezing activity (as in prefixing in the π-calculus), with the possibility of boundary-crossing arcs (similarly to how ν-bound names in the π-calculus can be used in deeply nested subterms).

Robin had an amazing talent for drawing beautiful graphs! He would “move” the nodes around on the chalkboard and reveal how a subgraph was in fact a redex (the left-hand side of an unlabelled transition). In the initial phases of my Ph.D. I just tried to understand these graphs: they were so natural to draw on the blackboard! And yet, they were also so uncomfortable to use when written out in linear tree- and list-like syntax, with so many distinct concrete representations for the same graph.

Putting aside the beauty of these graphs, what was the benefit of this framework? If one could manage to embed a process calculus in Action Calculi, using the graph structure and fancy binding and nesting to represent the quotiented syntax, what then?
We dreamt about a proposition along the following lines: if you represent your syntax (quotiented by your structural congruence) in Action Calculi graphs, and you represent your reaction rules as Action Calculi graph rewrites, then we will give you a congruential bisimulation for free!

Compared to CCS, for example, many of the rich new process calculi lacked labelled transition systems. In CCS, there was a clean, simple notion of labelled transitions and, moreover, bisimulation over those labelled transitions yielded a congruence: for all processes P and Q, and all process contexts C[−], if P ∼ Q, then C[P] ∼ C[Q]. This is a key quality for a bisimulation to possess, since it allows modular reasoning about pieces of a process, something that’s so much harder in a concurrent world than in a sequential one.

Returning to Action Calculi, we set out to make good on the dream that everyone gets a congruential bisimulation for free! Our idea was to find a general method to derive labelled transition systems from the unlabelled transitions and then to prove that bisimulation built from those labelled transitions would be a congruence.

The idea was often discussed at that time that there was a duality whereby a process undergoing a labelled transition could be thought of as the environment providing a complementary context inducing the process to react. In the early labelled transition system of the π-calculus, for example, I recall hearing that P undergoing the input labelled transition xy could be thought of as the environment outputting payload y on channel x to enable a τ transition with P.

So I tried to formalise this notion that labelled transitions are environmental contexts enabling reaction, i.e., defining P −C[−]→ P′ to mean C[P] → P′, provided that C[−] was somehow “minimal”, i.e., contained nothing superfluous beyond what was necessary to trigger the reaction. We wanted to get a rigorous definition of that intuitive idea.
There was a long and difficult period (about 12 months) wandering through the weeds trying to define minimal contexts for Action Calculi graphs (in terms of minimal nodes and minimal arcs), but it was hugely complex, frustrating, and ugly, and we seemed no closer to the original goal of achieving congruential bisimulation with these labelled transition systems.

Eventually I stepped back from Action Calculi and started to work on a more theoretical definition of “minimal context”, and we took inspiration from category theory. Robin had always viewed Action Calculi graphs as categorical arrows between objects (where the objects represented interfaces for plugging together arcs). At the time, there was much discussion of category theory in the air (for game theory); I certainly didn’t understand most of it but found it interesting and inspiring.

If we imagine that processes and process-contexts are just categorical arrows (where the objects are arities), then context composition is arrow composition. Now, assuming we have a reaction rule R → R′, we can define labelled transitions P −C[−]→ P′ as follows: there exists a context D such that C[P] = D[R] and P′ = D[R′]. The first equality is a commuting diagram, and Robin and I thought that we could formalise minimality by something like a categorical pushout! But that wasn’t quite right, as C and D are not the minimum pair (compared to all other candidates), but a minimal pair: there may be many incomparable minimal pairs, all of which are witnesses of legitimate labelled transitions. There was again a long period of frustration, eventually resolved when I reinvented “relative pushouts” (in place of pushouts). They are a simple notion in slice categories, but I didn’t know that until later…
Having found a reasonable definition of “minimal”, I worked excitedly on bisimulation, trying to get a proof of congruence: P ∼ Q implies E[P] ∼ E[Q]. For weeks, I was considering the labelled transitions E[P] −F[−]→ and all the ways they could arise. The most interesting case is when a part of P, a part of E, and F all “conspire” together to generate a reaction. From that I was able to derive a labelled transition of P by manipulating relative pushouts, which by hypothesis yielded a labelled transition of Q, and then, via a sort of “pushout pasting”, a labelled transition E[Q] −F[−]→. It was a wonderful moment of elation when I pasted all the diagrams together on Robin’s board and we realised that we had the congruence property for our synthesised labels!

We looked back again at Action Calculi, using the notion of relative pushouts to guide us (instead of the arbitrary approach we had considered before), and we further looked at other kinds of process calculus syntax to see how relative pushouts could work there… Returning to the original motivation to make Action Calculi a universal framework with congruential bisimulation for free, I’m not convinced of its utility. But it was the challenge that led us to the journey of the relative pushout work, which I think is beautiful.

Davide: What influence did this work have on the rest of your career? How much of your subsequent work built on it?

James: It was thanks to this work that I visited INRIA Rocquencourt to discuss process calculi with Jean-Jacques Lévy and Georges Gonthier. They kindly invited me to spend a year as a postdoc in 2001 after I finished my thesis with Robin, and I ended up staying at INRIA ever since. I didn’t work on bisimulation again as a research topic, but I stayed interested in concurrency and distribution for a long time, working with Peter Sewell et al. on distributed language design with module migration and rebinding, and with Cédric Fournet et al.
on compiler design for automatically synthesising cryptographic protocols for high-level session specifications.

Davide: Could you tell us about your interactions with Robin Milner? What was it like to work with him? What lessons did you learn from him?

James: I was tremendously inspired by Robin.

He would stand at his huge blackboard, his large hands covered in chalk, his bicycle clips glinting on his trousers, and he would stalk up and down the blackboard—thinking and moving. There was something theatrical and artistic about it: his thinking was done in physical movement, and his drawings were dynamic as the representations of his ideas evolved across the board.

I loved his drawings. They would start simple, a circle for a node, a box for a subgraph, etc., and then develop more and more detail corresponding to his intuition. (It reminded me of descriptions I had read of Richard Feynman drawing quantum interactions.)

Sometimes I recall being frustrated because I couldn’t read into his formulas everything that he wanted to convey (and we would then switch back to drawings), or I would be worried that there was an inconsistency creeping in, or I just couldn’t keep up, so the board sessions could be a roller coaster ride at times!

Robin worked tremendously hard and consistently. He would write out and rewrite his ideas, regularly circulating handwritten documents. He would refine his diagrams over and over. Behind his achievements there was an impressive consistency of effort.

He had a lot of confidence to carry on when the sledding was hard. He had such a strong intuition of what ought to be possible that he was able to sustain years of effort to get there.

He was generous with praise, with credit, with acknowledgement of others’ ideas. He was generous in sharing his own ideas and seemed delighted when others would pick them up and carry them forward.
I’ve always admired his openness and lack of jealousy in sharing ideas.

In his personal life, he seemed to have real compatibility with Lucy (his wife), who also kept him grounded. I still laugh when I remember once working with him at his dining room table and Lucy announcing, “Robin, enough of the mathematics. It’s time to mow the lawn!”

I visited Oxford for Lucy’s funeral and recall Robin putting a brave face on his future plans; I returned a few weeks later when Robin passed away himself. I miss him greatly.

Davide: What research topics are you most interested in right now? How do you see your work developing in the future?

James: I’ve been interested in a totally different area, namely healthcare, for many years. I’m fascinated by how patients, and information about them, flow through the complex human and machine interactions in hospital. When looking at how these flows work, and how they don’t, it’s possible to see where errors arise, where blockages happen, and where there are informational and visual deficits that make the job of doctors and nurses difficult. I like to think visually in terms of graphs (incrementally adding detail) and physically moving through the space where the action happens—all inspired by Robin!

Interview with Luca de Alfaro, Marco Faella, Thomas A. Henzinger, Rupak Majumdar and Mariëlle Stoelinga

In what follows, “Luca A.” refers to Luca Aceto, whereas “Luca” is Luca de Alfaro.

Luca A. and Mickael: You receive the CONCUR Test-of-Time Award 2022 for your paper “The Element of Surprise in Timed Games,” which appeared at CONCUR 2003⁷. In that article, you studied concurrent, two-player timed games.
A key contribution of your paper is the definition of an elegant timed game model, allowing both the representation of moves that can take the opponent by surprise, as they are played “faster,” and the definition of natural concepts of winning conditions for the two players—ensuring that players can win only by playing according to a physically meaningful strategy. In our opinion, this is a great example of how novel concepts and definitions can advance a research field. Could you tell us more about the origin of your model?

All authors: Mariëlle and Marco were postdocs with Luca at the University of California, Santa Cruz, in that period, Rupak was a student of Tom’s, and we were all in close touch, meeting very often to work together. We had all worked much on games, and an extension to timed games was natural for us to consider.

⁷ See https://pub.ist.ac.at/~tah/Publications/the_element_of_surprise_in_timed_games.pdf.

In untimed games, players propose a move, and the moves jointly determine the next game state. In these games there is no notion of real time. We wanted to study games in which players could decide not only the moves, but also the instant in time when to play them.

In timed automata, there is only one “player” (the automaton), which can take either a transition or a time step. The natural generalization would be a game in which players could propose either a move or a time step.

Yet we were unsatisfied with this model. It seemed to us that it was different to say “Let me wait 14 seconds and reconvene. Then, let me play my King of Spades” or “Let me play my King of Spades in 14 seconds.” In the first, by stopping after 14 seconds, the player is providing a warning that the card might be played. In the second, there is no such warning. In other words, if players propose either a move or a time step, they cannot take the adversary by surprise with a move at an unanticipated instant.
We wanted a model that could capture this element of surprise.

To capture the element of surprise, we came up with a model in which players propose both a move and the delay with which it is played. After this natural insight, the difficulty was to find the appropriate winning condition, so that a player could not win by stopping time.

Tom: Besides the infinite state space (region construction etc.), a second issue that is specific to timed systems is the divergence of time. Technically, divergence is a built-in Büchi condition (“there are infinitely many clock ticks”), so all safety and reachability questions about timed systems are really co-Büchi and Büchi questions, respectively. This observation had been part of my work on timed systems since the early 1990s, but it has particularly subtle consequences for timed games, where no player (and no collaboration of players) should have the power to prevent time from diverging. This had to be kept in mind during the exploration of the modeling space.

All authors: We came up with many possible winning conditions, and for each we identified some undesirable property, except for the one that we published. This is in fact an aspect that did not receive enough attention in the paper; we presented the chosen winning condition, but we did not discuss in full detail why several other conditions that might have seemed plausible did not work.

In the process of analyzing the winning conditions, we came up with many interesting games, which form the basis of many results, such as the result on lack of determinization, on the need for memory in reachability games (even when clock values are part of the state), and, most famously, as it gave the title to the paper, on the power of surprise.

After this fun ride came the hard work, where we had to figure out how to solve these games.
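The move-plus-delay idea can be made concrete with a toy round resolver. This is our illustration only (the `resolve_round` helper and its tie-breaking rule are made up for the example, and the paper’s actual winning conditions are considerably more subtle): each player commits to a (delay, move) pair, and the shorter delay fires first, which is what allows a surprise move at an unanticipated instant.

```python
# Toy illustration of joint moves in a timed game: each player proposes
# a (delay, move) pair; the proposal with the shorter delay is the one
# that happens, so a faster move can take the opponent by surprise.

def resolve_round(p1, p2):
    """p1, p2: (delay, move) pairs proposed by players 1 and 2.
    Returns (elapsed_time, moving_player, move); player 0 denotes a tie,
    in which case both moves fire simultaneously."""
    (d1, m1), (d2, m2) = p1, p2
    if d1 < d2:
        return d1, 1, m1
    if d2 < d1:
        return d2, 2, m2
    return d1, 0, (m1, m2)

# Player 1 plans the King of Spades in 14 seconds, but player 2 moves
# after only 3 seconds: player 1 receives no warning.
print(resolve_round((14.0, "King of Spades"), (3.0, "block")))
# (3.0, 2, 'block')
```

In the propose-a-move-or-a-time-step model criticized above, player 2 would instead have had to announce a 3-second wait first, forfeiting the surprise.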
We had worked on symbolic approaches to games before, and we followed that approach here, but there were many complex technical adaptations required. When we look at the paper at this distance in time, it has this combination of a natural game model, but also of a fairly sophisticated solution algorithm.

Luca A. and Mickael: Did any of your subsequent research build explicitly on the results and the techniques you developed in your award-winning paper? If so, which of your subsequent results on (timed) games do you like best? Is there any result obtained by other researchers that builds on your work and that you like in particular or found surprising?

Luca: Marco and I built Ticc, which was meant to be a tool for timed interface theories, based largely on the insights in this paper. The idea was to be able to check the compatibility of real-time systems, and to automatically infer the requirements that enable two system components to work well together—to be compatible in time. We thought this would be useful for hardware or embedded systems, and especially for control systems, and in fact the application is important: there is now much successful work on the compositionality of StateFlow/Simulink models.

We used MTBDDs as the symbolic engine, and Marco and I invented a language for describing the components, and we wrote by pair-programming some absolutely beautiful OCaml code that compiled real-time component models into MTBDDs (perhaps the nicest code I have ever written).
The problem was that we were too optimistic in our approach to state explosion, and we were never able to study any system of realistic size.

After this, I became interested in games more in an economic setting, and from there I veered into incentive systems, and from there to reputation systems and to a three-year period in which I applied reputation systems in practice in industry, thus losing touch somewhat with formal methods work.

Marco: I’ve kept working on games since the award-winning paper, in one way or another. The closest I’ve come to the timed game setting has been with controller synthesis games for hybrid automata. In a series of papers, we had fun designing and implementing symbolic algorithms that manipulate polyhedra to compute the winning region of a linear hybrid game. The experience gained on timed games helped me recognize the many subtleties arising in games played in real time on a continuous state space.

Mariëlle: I have been working on games for test case generation: one player represents the tester, which chooses inputs to test; the other player represents the system under test, and chooses the outputs of the system. Strategy synthesis algorithms can then compute strategies for the tester that maximize all kinds of objectives, e.g., reaching certain states, test coverage, etc.

A result that I really like is that we were able to show a very close correspondence between the existing testing frameworks and game-theoretic frameworks: specifications act as game arenas; test cases are exactly game strategies; and the conformance relation used in testing (namely ioco) coincides with game refinement (i.e., alternating refinement).

Rupak: In an interesting way, the first paper on games I read was the one by Maler, Pnueli and Sifakis (STACS 1995)⁸, which had both fixpoint algorithms and timed games (without “surprise”).
So the problem of symbolic solutions to games and their applications in synthesis followed me throughout my career. I moved to finding controllers for games with more general (non-linear) dynamics, where we worked on abstraction techniques. We also realized some new ways to look at restricted classes of adversaries. I was always fortunate to have very good collaborators who kept my interest alive with new insights. Very recently, I have gotten interested in games from a more economic perspective, where players can try to signal each other or persuade each other about private information, but it’s too early to tell where this will lead.

Luca A. and Mickael: What are the research topics that you find most interesting right now? Is there any specific problem in your current field of interest that you’d like to see solved?

Mariëlle: Throughout my academic life, I have been working on stochastic analysis; with Luca and Marco, we worked on stochastic games a lot. First only on theory, but later also on industrial applications, especially in the railroad and high-tech domain. At some point in time, I realized that my work was actually centred around analysing failure probabilities and risk. That is how I moved into risk analysis; the official title of the chair I hold is Risk Management for High Tech Systems.

The nice thing is: this sells much better than Formal Methods! Almost nobody knows what Formal Methods are, and if they know, people think “yes, those difficult people who urge us to specify everything mathematically.” For risk management, this is completely different: everybody understands that this is an important area.

Luca: I am currently working on computational ecology, on machine learning (ML) for networks, and on fairness in data and ML. In computational ecology, we are working on the role of habitat and territory for species viability.
We use ML techniques to write “differentiable algorithms,” where we can compute the effect of each input, such as the kind of vegetation in each square kilometer of territory, on the output. If all goes well, this will enable us to efficiently compute which regions should be prioritized for protection and habitat conservation.

⁸ See https://www-verimag.imag.fr/~sifakis/RECH/Synth-MalerPnueli.pdf.

In networks, we have been able to show that reinforcement learning can yield tremendous throughput gains in wireless protocols, and we are now starting to work on routing and congestion control.

And in fairness and ML, we have worked on the automatic detection of anomalous data subgroups (something that can be useful in model diagnostics), and we are now working on the spontaneous inception of discriminatory behavior in agent systems.

While these do not really constitute a coherent research effort, I can certainly say that I am having a grand tour of computer science—the kind of joy ride one can afford with tenure!

Rupak: I have veered between practical and theoretical problems. I am working on charting the decidability frontier for infinite-state model checking problems (most recently, for asynchronous programs and context-bounded reachability). I am also working on applying formal methods to the world of cyber-physical systems—mostly games and synthesis. Finally, I have become very interested in applying formal methods to large-scale industrial systems through a collaboration with Amazon Web Services. There is still a large gap between what is theoretically understood and what is practically applicable to these systems; and the problems are a mix of technical and social.

Luca A. and Mickael: You have a very strong track record in developing theoretical results and in applying them to real-life problems.
In our, admittedly biased, opinion, your work exemplifies Ben Shneiderman’s Twin-Win Model, which propounds the pursuit of “the dual goals of breakthrough theories in published papers and validated solutions that are ready for widespread dissemination.” Could you say a few words on your research philosophy? How do you see the interplay between basic and applied research?

Luca: This is very kind of you to say, and a bit funny to hear, because certainly when I was young I had a particular talent for getting lost in useless theoretical problems.

I think two things played in my favor. One is that I am curious. The other is that I have a practical streak: I still love writing code and tinkering with “things,” from IoT to biology to the web and more. This tinkering was at the basis of many of the works I did. My work on reputation systems started when I created a wiki on cooking; people were vandalizing it, and I started to think about game theory and incentives for collaboration, which led to my writing much of the code for Wikipedia analysis and, at Google, for Maps edits analysis. My work on networks started with me tinkering with simple reinforcement-learning schemes that might work, and writing the actual code. On the flip side, my curiosity too often had the better of me, so that I have been unable to pay continuous and devoted attention to a single research field. I am not a specialist in any single thing I do or have done. I am always learning the ropes of something I don’t quite know yet how to do.

My applied streak probably gave me some insight into which problems might be of more practical relevance, and my frequent field changes have allowed me to bring new perspectives to old problems.
There were not many people using reinforcement learning for wireless networks; there are not many who write ML and GPU code and also avidly read about conservation biology.

Rupak: I must say that Tom and Luca were very strong influences on me in my research, both in problem selection and in appreciating the joy of research. I remember one comment of Tom's, paraphrased as "Life is short. We should write papers that get read." I spent countless hours in Luca's office and learnt a lot of things about research, coffee, the ideal way to make pasta, and so on.

Marco: It was an absolute privilege to be part of the group that wrote that paper (my 4th overall, according to DBLP). I'd like to thank my coauthors, and Luca in particular, for guiding me during those crucially formative years.

Mariëlle: I fully agree!

Luca A. and Mickael: Several of you have high-profile leadership roles at your institutions. What advice would you give to a colleague who is about to take up the role of department chair, director of a research centre, dean or president of a university? How can one build a strong research culture, stay research-active, and live to tell the tale?

Luca: My colleagues may have better advice; my productivity certainly decreased when I was department chair, and is lower even now that I am the vice-chair. When I was young, I was ambitious enough to think that my scientific work would have the largest impact among the things I was doing. But I soon realized that some of the greatest impact was on others: on my collaborators, on the students I advised, who went on to build great careers and stayed friends, and on all the students I was teaching. This awareness serves to motivate and guide me in my administrative work.
The Computer Science department at the University of California, Santa Cruz, is one of the ten largest in the number of students we graduate, and the time I spend on improving its organization and the quality of the education it delivers is surely very impactful. My advice to colleagues is to consider their service not as an impediment to research, but as one of the most impactful things they do.

My way of staying alive is to fence off some days that I dedicate only to research (aside from some unavoidable emergency), and also to have collaborators that give me such joy in working together that they brighten and energize my whole day.

Luca A. and Mickael: Finally, what advice would you give to a young researcher who is keen to start working on topics related to concurrency theory today?

Luca: Oh, that sounds very interesting! And, may I show you this very interesting thing we are doing in JAX to model bird dispersal? We feed in this climate and vegetation data, and then we...

Just kidding. Just kidding. If I come to CONCUR I promise not to lead any of the concurrency yearlings astray. At least I will try.

My main advice would be this: work on principles that allow correct-by-design development. If you look at programming languages and software engineering, the progress in software productivity has not happened because people have become better at writing and debugging code written in machine language or C. It has happened because of the development of languages and software principles that make it easier to build large systems that are correct by construction. We need the same kind of principles, (modeling) languages, and ideas to build correct concurrent systems. Verification alone is not enough.
Work on design tools, ideas to guide design, and design languages.

Tom: In concurrency theory we define formalisms and study their properties. Most papers do the studying, not the defining: they take a formalism that was defined previously, by themselves or by someone else, and study a property of that formalism, usually to answer a question that is inspired by some practical motivation. To me, this omits the most fun part of the exercise, the defining part. The point I am trying to make is not that we need more formalisms, but that, if one wishes to study a specific question, it is best to study the question on the simplest possible formalism that exhibits exactly the features that make the question meaningful. To do this, one often has to define that formalism. In other words, the formalism should follow the question, not the other way around. This principle has served me well again and again and led to formalisms such as timed games, which try to capture the essence needed to study the power of timing in strategic games played on graphs. So my advice to a young researcher in concurrency theory is: choose your formalism wisely and don't be afraid to define it.

Rupak: Problems have different measures. Some are practically justified ("Is this practically relevant in the near future?") and some are justified by the foundations they build ("Does this avenue provide new insights and tools?"). Different communities place different values on the two. But both kinds of work are important, and one should recognize that one set of values is not universally better than the other.

Mariëlle: As Michael Jordan puts it: "Just play. Have fun. Enjoy the game."
Bulletin of the EATCS, 2020.

The Distributed Computing Column
by
Stefan Schmid
University of Vienna
Währinger Strasse 29, AT - 1090 Vienna, Austria
[email protected]

In this issue of the distributed computing column, Alex Auvolat, Davide Frey, Michel Raynal, and François Taïani revisit the basic problem of how to reliably transfer money. Interestingly, the authors show that a simple algorithm is sufficient to solve this problem, even in the presence of Byzantine processes.

I would like to point out that this issue of the EATCS Bulletin (but in a different section) further includes a summary of the PODC/DISC conference models proposed by the task force commissioned at the PODC 2020 business meeting, and presents and discusses the survey results.
I hope it will be helpful and can serve as a basis for further discussions on this topic. Note that this second article appears in a dedicated section of the Bulletin, together with related articles.

I would like to thank Alex and his co-authors as well as the PODC/DISC task force for their contribution to the EATCS Bulletin. Special thanks go to everyone who contributed to the conference model survey, also at the PODC business meeting and via Zulip.

Enjoy the new distributed computing column!

Money Transfer Made Simple: a Specification, a Generic Algorithm, and its Proof

Alex Auvolat*,†, Davide Frey†, Michel Raynal†,‡, François Taïani†

* École Normale Supérieure, Paris, France
† Univ Rennes, Inria, CNRS, IRISA, 35000 Rennes, France
‡ Department of Computing, Polytechnic University, Hong Kong

Abstract

It has recently been shown that, contrarily to a common belief, money transfer in the presence of faulty (Byzantine) processes does not require strong agreement such as consensus. This article goes one step further: namely, it first proposes a non-sequential specification of the money-transfer object, and then presents a generic algorithm based on a simple FIFO order between each pair of processes that implements it. The genericity dimension lies in the underlying reliable broadcast abstraction, which must be suited to the appropriate failure model. Interestingly, whatever the failure model, the money transfer algorithm only requires adding a single sequence number to its messages as control information.
Moreover, as a side effect of the proposed algorithm, it follows that money transfer is a weaker problem than the construction of a safe/regular/atomic read/write register in the asynchronous message-passing crash-prone model.

Keywords: Asynchronous message-passing system, Byzantine process, Distributed computing, Efficiency, Fault tolerance, FIFO message order, Modularity, Money transfer, Process crash, Reliable broadcast, Simplicity.

1 Introduction

Short historical perspective. Like field-area or interest-rate computations, money transfers have had a long history (see e.g., [21, 27]). Roughly speaking, when looking at money transfer in today's digital era, the issue consists in building a software object that associates an account with each user and provides two operations: one that allows a process to transfer money from one account to another, and one that allows a process to read the current value of an account.

The main issue of money transfer lies in the fact that the transfer of an amount of money v by a user to another user is conditioned on the current value of the former user's account being at least v. A violation of this condition can lead to the problem of double spending (i.e., the use of the same money more than once), which occurs in the presence of dishonest processes. Another important issue of money transfer resides in the privacy associated with money accounts. This means that a full solution to money transfer must address two orthogonal issues: synchronization (to guarantee the consistency of the money accounts) and confidentiality/security (usually solved with cryptography techniques). Here, like in closely related work [14], we focus on synchronization.

Fully decentralized electronic money transfer was introduced in [25] with the Bitcoin cryptocurrency, in which there is no central authority that controls the money exchanges issued by users.
From a software point of view, Bitcoin adopts a peer-to-peer approach, while from an application point of view it seems to have been motivated by the 2008 subprime crisis [32].

To attain its goal, Bitcoin introduced a specific underlying distributed software technology called blockchain, which can be seen as a specific distributed state-machine-replication technique, the aim of which is to provide its users with an object known as a concurrent ledger. Such an object is defined by two operations: one that appends a new item in such a way that, once added, the item cannot be removed, and a second operation that atomically reads the full list of items currently appended. Hence, a ledger builds a total order on the invocations of its operations. When looking at the synchronization power provided by a ledger in the presence of failures, measured with the consensus-number lens, it has been shown that the synchronization power of a ledger is +∞ [13, 30]. In a very interesting way, recent work [14] has shown that, in a context where each account has a single owner who can spend the money currently in his/her account, the consensus number of the money-transfer concurrent object is 1. An owner is represented by a process in the following.

This is an important result, as it shows that the power of blockchain technology is much stronger (and consequently more costly) than necessary to implement money transfer¹. To illustrate this discrepancy, considering a sequential specification of the money transfer object, the authors of [14] show first that, in a failure-prone shared-memory system, money transfer can be implemented on top of a snapshot object [1] (whose consensus number is 1, and consequently can be implemented on top of read/write atomic registers).

¹ As far as we know, the fact that consensus is not necessary to implement money transfer was stated for the first time in [15].
Then, they appropriately modify their shared-memory algorithm to obtain an algorithm that works in asynchronous failure-prone message-passing systems. To allow the processes to correctly validate the money transfers, the resulting algorithm demands that they capture the causality relation linking money transfers, and it requires each message to carry control information encoding the causal past of the money transfer it carries.

Content of the article. The present article goes even further. It first presents a non-sequential specification of the money transfer object², and then shows that, contrarily to what is currently accepted, the implementation of a money transfer object does not require the explicit capture of the causality relation linking individual money transfers. To this end, we present a surprisingly simple yet efficient and generic money-transfer algorithm that relies on an underlying reliable-broadcast abstraction. It is efficient as it only requires a very small amount of meta-data in its messages: in addition to money-transfer data, the only control information carried by the messages of our algorithm is reduced to a single sequence number. It is generic in the sense that it can accommodate different failure models with no modification. More precisely, our algorithm inherits the fault-tolerance properties of its underlying reliable broadcast: it tolerates crashes if used with a crash-tolerant reliable broadcast, and Byzantine faults if used with a Byzantine-tolerant reliable broadcast.

Given an n-process system where at most t processes can be faulty, the proposed algorithm works for t < n in the crash failure model, and t < n/3 in the Byzantine failure model. This has an interesting side effect on the distributed computability side.
Namely, in the crash failure model, money transfer constitutes a weaker problem than the construction of a safe/regular/atomic read/write register (where "weaker" means that—unlike a read/write register—it does not require the "majority of non-faulty processes" assumption).

Roadmap. The article consists of 7 sections. First, Section 2 introduces the distributed failure-prone computing models in which we are interested, and Section 3 provides a definition of money transfer suited to these computing models. Then, Section 4 presents a very simple generic money-transfer algorithm. Its instantiations and the associated proofs are presented in Section 5 for the crash failure model and in Section 6 for the Byzantine failure model. Finally, Section 7 concludes the article.³

² To our knowledge, this is the first non-sequential specification of the money transfer object proposed so far.
³ Let us note that similar ideas have been developed concomitantly and independently in [10], which presents a money transfer system and its experimental evaluation.

2 Distributed Computing Models

2.1 Process failure model

Process model. The system comprises a set of n sequential asynchronous processes, denoted p_1, ..., p_n⁴. Sequential means that a process invokes one operation at a time, and asynchronous means that each process proceeds at its own speed, which can vary arbitrarily and always remains unknown to the other processes.

Two process failure models are considered. The model parameter t denotes an upper bound on the number of processes that can be faulty in the considered model. Given an execution r (run), a process that commits failures in r is said to be faulty in r; otherwise, it is non-faulty (or correct) in r.

Crash failure model. In this model, processes may crash. A crash is a premature definitive halt. This means that, in the crash failure model, a process behaves correctly (i.e., executes its algorithm) until it possibly crashes.
This model is denoted CAMP_{n,t}[∅] (Crash Asynchronous Message Passing). When t is restricted not to bypass a bound f(n), the corresponding restricted failure model is denoted CAMP_{n,t}[t ≤ f(n)].

Byzantine failure model. In this model, processes can commit Byzantine failures [23, 28], and those that do so are said to be Byzantine. A Byzantine failure occurs when a process does not follow its algorithm. Hence a Byzantine process can stop prematurely, send erroneous messages, send different messages to distinct processes when it is assumed to send the same message, etc. Let us also observe that, while a Byzantine process can invoke an operation which generates application messages⁵, it can also "simulate" this operation by sending fake implementation messages that give their receivers the illusion that they have been generated by a correct sender. However, we assume that there is no Sybil attack, like most previous work on Byzantine fault tolerance, including [14].⁶

As previously, the notations BAMP_{n,t}[∅] and BAMP_{n,t}[t ≤ f(n)] (Byzantine Asynchronous Message Passing) are used to refer to the corresponding Byzantine failure models.

⁴ Hence the system we consider is static (according to the distributed computing community parlance) or permissioned (according to the blockchain community parlance).
⁵ An application message is a message sent at the application level, while an implementation message is a low-level message used to ensure the correct delivery of an application message.
⁶ As an example, a Byzantine process can neither spawn new identities nor assume the identity of existing processes.

2.2 Underlying complete point-to-point network

The set of processes communicates through an underlying message-passing point-to-point network in which there exists a bidirectional channel between any pair of processes. Hence, when a process receives a message, it knows which process sent this message.
For simplicity, in writing the algorithms, we assume that a process can send messages to itself.

Each channel is reliable and asynchronous. Reliable means that a channel does not lose, duplicate, or corrupt messages. Asynchronous means that the transit delay of each message is finite but arbitrary. Moreover, in the case of the Byzantine failure model, a Byzantine process can read the content of the messages exchanged through the channels, but cannot modify their content.

To make our algorithm as generic and simple as possible, Section 4 does not present it in terms of low-level send/receive operations⁷ but in terms of a high-level communication abstraction, called reliable broadcast (e.g., [7, 9, 16, 19, 30]). The definition of this communication abstraction appears in Section 5 for the crash failure model and in Section 6 for the Byzantine failure model. It is important to note that the previously cited reliable broadcast algorithms do not use sequence numbers. They only use different types of implementation messages, which can be encoded with two bits.

3 Money Transfer: a Formal Definition

Money transfer: operations. From an abstract point of view, a money-transfer object can be seen as an abstract array ACCOUNT[1..n], where ACCOUNT[i] represents the current value of p_i's account. This object provides the processes with two operations, denoted balance() and transfer(), whose semantics are defined below. The transfer by a process of the amount of money v to a process p_j is represented by the pair ⟨j, v⟩. Without loss of generality, we assume that a process does not transfer money to itself. It is assumed that each ACCOUNT[i] is initialized to a non-negative value denoted init[i].
It is assumed that the array init[1..n] is initially known by all the processes.⁸

Informally, when p_i invokes balance(j), it obtains a value (as defined below) of ACCOUNT[j], and when it invokes the transfer ⟨j, v⟩, the amount of money v is moved from ACCOUNT[i] to ACCOUNT[j]. If the transfer succeeds, the operation returns commit; if it fails, it returns abort.

⁷ Actually, the send and receive operations can be seen as "machine-level" instructions provided by the network.
⁸ It is possible to initialize some accounts to negative values. In this case, we must assume pos > neg, where pos (resp., neg) is the sum of all the positive (resp., negative) initial values.

Histories. The following notations and definitions are inspired from [2].

• A local execution history (or local history) of a process p_i, denoted L_i, is a sequence of operations balance() and transfer() issued by p_i. If an operation op1 precedes an operation op2 in L_i, we say that "op1 precedes op2 in process order", which is denoted op1 →_i op2.
• An execution history (or history) H is a set of n local histories, one per process, H = (L_1, ..., L_n).
• A serialization S of a history H is a sequence that contains all the operations of H and respects the process order →_i of each process p_i.
• Given a history H and a process p_i, let A_{i,T}(H) denote the history (L′_1, ..., L′_n) such that
  – L′_i = L_i, and
  – for any j ≠ i, L′_j contains only the transfer operations of p_j.

Notations.

• An operation transfer(j, v) invoked by p_i is denoted trf_i(j, v).
• An invocation of balance(j) that returns the value v is denoted blc(j) = v.
• Let H be a set of operations.
  – plus(j, H) = Σ_{trf_k(j,v) ∈ H} v (total of the money given to p_j in H).
  – minus(j, H) = Σ_{trf_j(k,v) ∈ H} v (total of the money given by p_j in H).
  – acc(j, H) = init[j] + plus(j, H) − minus(j, H) (value of ACCOUNT[j] according to H).
• Given a history H and a process p_i, let S_i be a serialization of A_{i,T}(H) (hence, S_i respects the n process orders
defined by H). Let →_{S_i} denote the total order defined by S_i.

Money-transfer-compliant serialization. A serialization S_i of A_{i,T}(H) is money-transfer compliant (MT-compliant) if:

• for any operation trf_j(k, v) ∈ S_i, we have v ≤ acc(j, {op ∈ S_i | op →_{S_i} trf_j(k, v)}), and
• for any operation blc(j) = v ∈ S_i, we have v = acc(j, {op ∈ S_i | op →_{S_i} blc(j) = v}).

MT-compliance is the key concept at the basis of the definition of a money-transfer object. It states that it is possible to associate each process p_i with a total order S_i in which (a) each of its invocations of balance(j) returns a value v equal to the current value of p_j's account according to S_i, and (b) processes transfer only money that they have.

Let us observe that the common point among the serializations S_1, ..., S_n lies in the fact that each process sees all the transfer operations of any other process p_j in the order they have been produced (as defined by L_j), and sees its own transfer and balance operations in the order it produced them (as defined by L_i).

Money transfer in CAMP_{n,t}[∅]. Considering the CAMP_{n,t}[∅] model, a money-transfer object is an object that provides the processes with balance() and transfer() operations and is such that, for each of its executions, represented by the corresponding history H, we have:

• all the operations invoked by correct processes terminate;
• for any correct process p_i, there is an MT-compliant serialization S_i of A_{i,T}(H); and
• for any faulty process p_i, there is a history H′ = (L′_1, ..., L′_n) where (a) L′_j is a prefix of L_j for any j ≠ i, and (b) L′_i = L_i, and there is an MT-compliant serialization of A_{i,T}(H′).

An algorithm implementing a money transfer object is correct in CAMP_{n,t}[∅] if it produces only executions as defined above.
We then say that the algorithm is MT-compliant.

Money transfer in BAMP_{n,t}[∅]. The main difference between money transfer in CAMP_{n,t}[∅] and BAMP_{n,t}[∅] lies in the fact that a faulty process can try to transfer money it does not have, and try to present different behaviors with respect to different correct processes. This means that, while the notion of a local history L_i is still meaningful for a non-Byzantine process, it is not for a Byzantine process. For a Byzantine process, we therefore define a mock local history for a process p_i as any sequence of transfer operations from p_i's account⁹. In this definition, the mock local history L_i associated with a Byzantine process p_i is not necessarily the local history it produced; it is only a history that it could have produced from the point of view of the correct processes. The correct processes implement a money-transfer object if they all behave in a manner consistent with the same set of mock local histories for the Byzantine processes. More precisely, we define a mock history associated with an execution of a money transfer object in BAMP_{n,t}[∅] as ˜H = (˜L_1, ..., ˜L_n) where:

  ˜L_j = L_j if p_j is correct, and ˜L_j is a mock local history if p_j is Byzantine.

⁹ Let us remind that the operations balance() issued by a Byzantine process can return any value. So they are not considered in the mock histories associated with Byzantine processes.
More precisely and\ndifferently from previous specifications of the money transfer object, it does not\nconsider it as a sequential object for which processes must agree on the very\nsame total order on the operations they issue [17]. As a simple example, let us\nconsider two processes piandpjthat independently issue the transfers trfi(k;v)\nandtrfj(k;v0)respectively. The proposed specification allows these transfers (and\nmany others) to be seen in different order by different processes. As far as we\nknow, this is the first specification of money transfer as a non-sequential object.\n4 A Simple Generic Money Transfer Algorithm\nThis section presents a generic algorithm implementing a money transfer object.\nAs already said, its generic dimension lies in the underlying reliable broadcast\nabstraction used to disseminate money transfers to the processes, which depends\non the failure model.\n4.1 Reliable broadcast\nReliable broadcast provides two operations denoted r_broadcast ()andr_deliver ().\nBecause a process is assumed to invoke reliable broadcast each time it issues a\nmoney transfer, we use a multi-shot reliable broadcast, that relies on explicit se-\nquence numbers to distinguish between its different instances (more on this be-\nlow). Following the parlance of [16] we use the following terminology: when a\nprocess invokes r_broadcast (sn;m), we say it “r-broadcasts the message mwith\nsequence number sn”, and when its invocation of r_deliver ()returns it a pair\n(sn;m), we say it “r-delivers mwith sequence number sn”. While definitions of re-\nliable broadcast suited to the crash failure model and the Byzantine failure model\nwill be given in Section 5 and Section 6, respectively, we state their common\nproperties below.\n\u000fValidity. This property states that there is no message creation. To this end,\nit relates the outputs (r-deliveries) to the inputs (r-broadcasts). 
Excluding malicious behaviors, a message that is r-delivered has been r-broadcast.
• Integrity. This property states that there is no message duplication.
• Termination-1. This property states that correct processes r-deliver what they broadcast.
• Termination-2. This property relates the sets of messages r-delivered by different processes.

The Termination properties ensure that all the correct processes r-deliver the same set of messages, and that this set includes at least all the messages that they r-broadcast.

As mentioned above, sequence numbers are used to identify different instances of the reliable broadcast. Instead of using an underlying FIFO-reliable broadcast in which sequence numbers would be hidden, we expose them in the input/output parameters of the r_broadcast() and r_deliver() operations, and handle their updates explicitly in our generic algorithm. This reification¹⁰ allows us to capture explicitly the complete control related to message r-deliveries required by our algorithm. As we will see, it follows that the instantiations of the previous Integrity property (crash and Byzantine models) will explicitly refer to "upper layer" sequence numbers.

We insist on the fact that the reliable broadcast abstraction that the proposed algorithm depends on does not itself provide the FIFO ordering guarantee. It only uses sequence numbers to identify the different messages sent by a process.
As explained in the next section, the proposed generic algorithm itself implements the required FIFO ordering property.

4.2 Generic money transfer algorithm: local data structures

As said in the previous section, init[1..n] is an array of constants, known by all the processes, such that init[k] is the initial value of p_k's account, and a transfer of the quantity v from a process p_i to a process p_k is represented by the pair ⟨k, v⟩. Each process p_i manages the following local variables:

• sn_i: integer variable, initialized to 0, used to generate the sequence numbers associated with the transfers issued by p_i (it is important to notice that the point-to-point FIFO order realized with the sequence numbers is the only "causality-related" control information used in the algorithm).
• del_i[1..n]: array initialized to [0, ..., 0] such that del_i[j] is the sequence number of the last transfer issued by p_j and locally processed by p_i.
• account_i[1..n]: array, initialized to init[1..n], that is a local approximate representation of the abstract array ACCOUNT[1..n], i.e., account_i[j] is the value of p_j's account, as known by p_i.

¹⁰ Reification is the process by which implicit, hidden, or internal information is explicitly exposed to a programmer.

While other local variables containing bookkeeping information can be added according to the application's needs, it is important to insist on the fact that the proposed algorithm needs only the three previous local variables (i.e., 2n + 1 local registers) to solve the synchronization issues that arise in fault-tolerant money transfer.

4.3 Generic money transfer algorithm: behavior of a process p_i

Algorithm 1 describes the behavior of a process p_i.
When it invokes balance(j), p_i returns the current value of account_i[j] (line 1).

init: account_i[1..n] ← init[1..n]; sn_i ← 0; del_i[1..n] ← [0, ..., 0].

operation balance(j) is
(1) return(account_i[j]).

operation transfer(j, v) is
(2) if (v ≤ account_i[i])
(3)   then sn_i ← sn_i + 1; r_broadcast(sn_i, TRANSFER⟨j, v⟩);
(4)        wait(del_i[i] = sn_i); return(commit)
(5)   else return(abort)
(6) end if.

when (sn, TRANSFER⟨k, v⟩) is r_delivered from p_j do
(7) wait((sn = del_i[j] + 1) ∧ (account_i[j] ≥ v));
(8) account_i[j] ← account_i[j] − v; account_i[k] ← account_i[k] + v;
(9) del_i[j] ← sn.

Algorithm 1: Generic broadcast-based money transfer algorithm (code for p_i)

When it invokes transfer(j, v), p_i first checks if it has enough money in its account (line 2) and returns abort if it does not (line 5). If process p_i has enough money, it computes the next sequence number sn_i and r-broadcasts the pair (sn_i, TRANSFER⟨j, v⟩) (line 3). Then p_i waits until it has locally processed this transfer (lines 7-9), and finally returns commit. Let us notice that the predicate at line 7 is always satisfied when p_i r-delivers a transfer message it has r-broadcast.

When p_i r-delivers a pair (sn, TRANSFER⟨k, v⟩) from a process p_j, it does not process it immediately. Instead, p_i waits until (i) this is the next message it has to process from p_j (to implement FIFO ordering) and (ii) its local view of the money transfers to and from p_j (namely, the current value of account_i[j]) allows this money transfer to occur (line 7). When this happens, p_i locally registers the transfer by moving the quantity v from account_i[j] to account_i[k] (line 8) and increases del_i[j] (line 9).

5 Crash Failure Model: Instantiation and Proof

This section first presents the crash-tolerant reliable broadcast abstraction whose operations instantiate the r_broadcast() and r_deliver() operations used in the generic algorithm.
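Before detailing the instantiations, the bookkeeping of lines 1-9 of Algorithm 1 can be simulated in a few lines of Python. The sketch below is our own illustration, not code from the paper: processes are 0-indexed, the broadcast is a trivially reliable loop that synchronously delivers each TRANSFER message to every process, and a `pending` buffer plays the role of the wait statement of line 7.

```python
# Hypothetical simulation of Algorithm 1 (0-indexed processes, synchronous
# delivery); the real algorithm is asynchronous and broadcast-based.

class Process:
    def __init__(self, i, init):
        self.i = i
        self.account = list(init)         # account_i[0..n-1]
        self.sn = 0                       # sn_i
        self.delivered = [0] * len(init)  # del_i[0..n-1]
        self.pending = []                 # r-delivered, not yet processed

    def on_deliver(self, j, sn, k, v):
        """r-delivery of (sn, TRANSFER<k, v>) from p_j (lines 7-9)."""
        self.pending.append((j, sn, k, v))
        progress = True
        while progress:                   # re-evaluate the predicate of line 7
            progress = False
            for msg in list(self.pending):
                j, sn, k, v = msg
                if sn == self.delivered[j] + 1 and self.account[j] >= v:
                    self.account[j] -= v          # line 8
                    self.account[k] += v
                    self.delivered[j] = sn        # line 9
                    self.pending.remove(msg)
                    progress = True

def transfer(processes, i, j, v):
    """p_i's transfer(j, v): fund check (line 2), then broadcast (line 3)."""
    sender = processes[i]
    if v > sender.account[i]:
        return "abort"                    # line 5
    sender.sn += 1
    for p in processes:                   # trivially reliable broadcast
        p.on_deliver(i, sender.sn, j, v)
    return "commit"                       # line 4 (delivery was synchronous)

procs = [Process(i, [10, 0, 0]) for i in range(3)]
assert transfer(procs, 0, 1, 6) == "commit"
assert transfer(procs, 0, 2, 7) == "abort"       # only 4 units left on p_0
assert all(p.account == [4, 6, 0] for p in procs)

# FIFO buffering: a message with sn = 2 from p_1 that arrives before sn = 1
# stays in `pending` until the wait predicate of line 7 becomes true.
p = Process(0, [0, 10, 0])
p.on_deliver(1, 2, 0, 3)                          # out of order: buffered
assert p.account == [0, 10, 0]
p.on_deliver(1, 1, 2, 4)                          # sn = 1 arrives: both apply
assert p.account == [3, 3, 4]
```

Because delivery is synchronous here, the wait of line 4 is trivially satisfied; in the real asynchronous setting, it is the termination properties of the underlying reliable broadcast that guarantee the sender eventually r-delivers its own message.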
Then, using the MT-compliance notion, it proves that Algorithm 1 combined with a crash-tolerant reliable broadcast implements a money transfer object in CAMP_{n,t}[∅]. It also shows that, in this model, money transfer is weaker than the construction of an atomic read/write register. Finally, it presents a simple weakening of the FIFO requirement that works in the CAMP_{n,t}[∅] model.

5.1 Multi-shot reliable broadcast abstraction in CAMP_{n,t}[∅]

This communication abstraction, named CR-Broadcast, is defined by the two operations cr_broadcast() and cr_deliver(). Hence, we use the terminology "to cr-broadcast a message" and "to cr-deliver a message".

• CRB-Validity. If a process p_i cr-delivers a message with sequence number sn from a process p_j, then p_j cr-broadcast it with sequence number sn.

• CRB-Integrity. For each sequence number sn and sender p_j, a process p_i cr-delivers at most one message with sequence number sn from p_j.

• CRB-Termination-1. If a correct process cr-broadcasts a message, it cr-delivers it.

• CRB-Termination-2. If a process cr-delivers a message from a (correct or faulty) process p_j, then all correct processes cr-deliver it.

CRB-Termination-1 and CRB-Termination-2 capture the "strong" reliability property of CR-Broadcast, namely: all the correct processes cr-deliver the same set S of messages, and this set includes at least the messages they cr-broadcast. Moreover, a faulty process cr-delivers a subset of S. Algorithms implementing the CR-Broadcast abstraction in CAMP_{n,t}[∅] are described in [16, 30].

5.2 Proof of the algorithm in CAMP_{n,t}[∅]

Lemma 1.
Any invocation of balance() or transfer() issued by a correct process terminates.

Proof. The fact that any invocation of balance() terminates follows immediately from the code of the operation.

When a process p_i invokes transfer(j,v), it r-broadcasts a message and, due to the CRB-Termination properties, p_i receives its own transfer message and the predicate (line 7) is necessarily satisfied. This is because (i) only p_i can transfer its own money, (ii) the wait statement of line 4 ensures the current invocation of transfer(j,v) does not return until the corresponding TRANSFER message is processed at lines 8-9, and (iii) the fact that account_i[i] cannot decrease between the execution of line 3 and the one of line 7. It follows that p_i terminates its invocation of transfer(j,v). □ (Lemma 1)

The safety proof is more involved. It consists in showing that any execution satisfies MT-compliance as defined in Section 3.

Notation and definition

• Let trf_j^sn(k,v) denote the operation trf(k,v) issued by p_j with sequence number sn.

• We say a process p_i processes the transfer trf_j^sn(k,v) if, after it cr-delivered the associated message TRANSFER⟨k,v⟩ with sequence number sn, p_i exits the wait statement at line 7 and executes the associated statements at lines 8-9. The moment at which these lines are executed is referred to as the moment when the transfer is processed by p_i. (These notions are related to the progress of processes.)

• If the message TRANSFER cr-broadcast by a process is cr-delivered by a correct process, we say that the transfer is successful. (Let us notice that a message cr-broadcast by a correct process is always successful.)

Lemma 2. If a process p_i processes trf_ℓ^sn(k,v), then any correct process processes it.

Proof. Let m_1, m_2, ... be the sequence of transfers processed by p_i and let p_j be a correct process. We show by induction on z that, for all z, p_j processes all the messages m_1, m_2, ..., m_z.

Base case z = 0.
As the sequence of transfers is empty, the proposition is trivially satisfied.

Induction. Taking z ≥ 0, suppose p_j processed all the transfers m_1, m_2, ..., m_z. We have to show that p_j processes m_{z+1}. Note that m_1, m_2, ..., m_z do not typically originate from the same sender, and are therefore normally processed by p_j in a different order than p_i, possibly mixed with other messages. This also applies to m_{z+1}. If m_{z+1} was processed by p_j before m_z, we are done. Otherwise there is a time τ at which p_j processed all the transfers m_1, m_2, ..., m_z (case assumption), cr-delivered m_{z+1} (CRB-Termination-2 property), but has not yet processed m_{z+1}.

Let m_{z+1} = trf_ℓ^sn(k,v). At time τ, we have the following.

• On one side, del_j[ℓ] ≤ sn−1 since messages are processed in FIFO order and m_{z+1} has not yet been processed. On the other side, del_j[ℓ] ≥ sn−1 because either sn = 1 or trf_ℓ^{sn−1}(−,−) ∈ {m_1, ..., m_z}, where trf_ℓ^{sn−1}(−,−) is the transfer issued by p_ℓ just before m_{z+1} = trf_ℓ^sn(k,v) (otherwise p_i would not have processed m_{z+1} just after m_1, ..., m_z). Thus del_j[ℓ] = sn−1.

• Let us now show that, at time τ, account_j[ℓ] ≥ v. To this end let plus_i^{z+1}(ℓ) denote the money transferred to p_ℓ as seen by p_i just before p_i processes m_{z+1}, and minus_i^{z+1}(ℓ) denote the money transferred from p_ℓ as seen by p_i just before p_i processes m_{z+1}. Similarly, let plus_j^{z+1}(ℓ) denote the money transferred to p_ℓ as seen by p_j at time τ, and minus_j^{z+1}(ℓ) denote the money transferred from p_ℓ as seen by p_j at time τ. Let us consider the following sums:

– On the side of the money transferred to p_ℓ as seen by p_j.
Due to induction, all the transfers to p_ℓ included in m_1, m_2, ..., m_z (and possibly more transfers to p_ℓ) have been processed by p_j, thus plus_j^{z+1}(ℓ) ≥ Σ_{trf_{k′}(ℓ,w) ∈ {m_1,m_2,...,m_z}} w and, as p_i processed the messages in the order m_1, ..., m_z, m_{z+1} (assumption), we have plus_i^{z+1}(ℓ) = Σ_{trf_{k′}(ℓ,w) ∈ {m_1,m_2,...,m_z}} w. Hence, plus_j^{z+1}(ℓ) ≥ plus_i^{z+1}(ℓ).

– On the side of the money transferred from p_ℓ as seen by p_j. Let us observe that p_j has processed all the transfers from p_ℓ with a sequence number smaller than sn and no transfer from p_ℓ with a sequence number greater than or equal to sn, thus we have minus_j^{z+1}(ℓ) = Σ_{trf_ℓ(k′,w) ∈ {m_1,m_2,...,m_z}} w = minus_i^{z+1}(ℓ).

Let account_i^{z+1}[ℓ] be the value of account_i[ℓ] just before p_i processes m_{z+1}, and account_j^{z+1}[ℓ] be the value of account_j[ℓ] at time τ. As account_j^{z+1}[ℓ] = init[ℓ] + plus_j^{z+1}(ℓ) − minus_j^{z+1}(ℓ) and account_i^{z+1}[ℓ] = init[ℓ] + plus_i^{z+1}(ℓ) − minus_i^{z+1}(ℓ), it follows that account_j[ℓ] is greater than or equal to the value of account_i[ℓ] just before p_i processes m_{z+1}, which was itself greater than or equal to v (otherwise p_i would not have processed m_{z+1} at that time). It follows that account_j[ℓ] ≥ v.

The two predicates of line 7 are therefore satisfied, and will remain so until m_{z+1} is processed (due to the FIFO order on transfers issued by p_ℓ), thus ensuring that process p_j processes the transfer m_{z+1}. □ (Lemma 2)

Lemma 3. If a process p_i issues a successful money transfer trf_i^sn(k,v) (i.e., it cr-broadcasts it at line 3), any correct process eventually cr-delivers and processes it.

Proof. When process p_i cr-broadcast the money transfer trf_i^sn(k,v), the local predicate (sn = del_i[i]+1) ∧ (account_i[i] ≥ v) was true at p_i.
When p_i cr-delivers its own transfer message, the predicate is still true at line 7 and p_i processes its transfer (if p_i crashes after having cr-broadcast the transfer and before processing it, we extend its execution, without loss of correctness, by assuming it crashed just after processing the transfer). It follows from Lemma 2 that any correct process processes trf_i^sn(k,v). □ (Lemma 3)

Theorem 1. Algorithm 1 instantiated with CR-Broadcast implements a money transfer object in the CAMP_{n,t}[∅] system model, and ensures that all operations by correct processes terminate.

Proof. Lemma 1 proved that the invocations of the operations balance() and transfer() by the correct processes terminate. Let us now consider MT-compliance. Considering any execution of the algorithm, captured as a history H = (L_1, ..., L_n), let us first consider a correct process p_i. Let S_i be the sequence of the following events happening at p_i (these events are "instantaneous" in the sense that p_i is not interrupted when it produces each of them):

• the event blc(j)=v occurs when p_i invokes balance(j) and obtains v (line 1),

• and the event trf_j^sn(k,v) occurs when p_i processes the corresponding transfer (lines 8-9 executed without interruption).

We show that S_i is an MT-compliant serialization of A_{i,T}(H). When considering the construction of S_i, we have the following:

• For all trf_j^sn(k,v) ∈ L_j we have that p_j cr-broadcast this transfer and that (sn, TRANSFER⟨k,v⟩) was received by p_j and was therefore successful: it follows from Lemma 3 that p_i processes this money transfer, and consequently we have trf_j^sn(k,v) ∈ S_i.

• For all op1 = trf_j^sn(k,v) and op2 = trf_j^{sn′}(k′,v′) in S_i (two transfers issued by p_j) such that op1 →_j op2, we have sn < sn′. Consequently p_i processes op1 before op2, and we have op1 →_{S_i} op2.

• For all pairs op1 and op2 belonging to L_i, their serialization order is the same in L_i and S_i.

It follows that S_i is a serialization of A_{i,T}(H).
Let us now show that S_i is MT-compliant.

• Case where the event in S_i is trf_j^sn(k,v). In this case we have v ≤ acc(j, {op ∈ S_i | op →_{S_i} trf_j(k,v)}), because this condition is directly encoded at p_i in the waiting predicate that precedes the processing of op.

• Case where the event in S_i is blc(j)=v. In this case we have v = acc(j, {op ∈ S_i | op →_{S_i} blc(j)=v}), because this is exactly the way the returned value v is computed in the algorithm.

This terminates the proof for the correct processes.

For a process p_i that crashes, the sequence of money transfers from a process p_j that is processed by p_i is a prefix of the sequence of money transfers issued by p_j (this follows from the FIFO processing order, line 7). Hence, for each process p_i that crashes there is a history H′ = (L′_1, ..., L′_n), where L′_j is a prefix of L_j for each j ≠ i and L′_i = L_i, such that, following the same reasoning, the construction S_i given above is an MT-compliant serialization of A_{i,T}(H′), which concludes the proof of the theorem. □ (Theorem 1)

5.3 Money transfer vs read/write registers in CAMP_{n,t}[∅]

It is shown in [5] that it is impossible to implement an atomic read/write register in the distributed system model CAMP_{n,t}[∅], i.e., when, in addition to asynchrony, any number of processes may crash. On the positive side, several algorithms implementing such a register in CAMP_{n,t}[t < n/2] have been proposed, each with its own features (see for example [4, 5, 24] to cite a few). An atomic read/write register can be built from safe or regular registers (Footnote 11) [22, 29, 33]. Hence, like atomic registers, safe and regular registers cannot be built in CAMP_{n,t}[∅] (although they can in CAMP_{n,t}[t < n/2]).
As CAMP_{n,t}[t < n/2] is a more constrained model than CAMP_{n,t}[∅], it follows that, from a CAMP_{n,t} computability point of view, the construction of a safe/regular/atomic read/write register is a stronger problem than money transfer.

5.4 Replacing FIFO by a weaker ordering in CAMP_{n,t}[∅]

An interesting question is the following one: is FIFO ordering necessary to implement money transfer in the CAMP_{n,t}[∅] model? While we conjecture it is, it appears that a small change in the specification of money transfer allows us to use a weakened FIFO order, as shown below.

Weakened money transfer specification. The change in the specification presented in Section 3 concerns the definition of the serialization S_i associated with each process p_i. In this modified version the serialization S_i associated with each process p_i is no longer required to respect the process order on the operations issued by p_j, j ≠ i. This means that two different processes p_i and p_k may observe the transfer() operations issued by a process p_j in different orders (which captures the fact that some transfer operations by a process p_j are commutative with respect to its current account).

Footnote 11: Safe and regular registers were introduced in [22]. They have weaker specifications than atomic registers.

Modification of the algorithm. Let k be a constant integer ≥ 1. Let sn_i(j) be the highest sequence number such that all the transfer messages from p_j whose sequence numbers belong to {1, ···, sn_i(j)} have been cr-delivered and processed by a given process p_i (i.e., lines 8-9 have been executed for these messages). Initially we have sn_i(j) = 0.

Let sn be the sequence number of a message cr-delivered by p_i from p_j. At line 7 the predicate sn = del_i[j]+1 can be replaced by the predicate sn ∈ {sn_i(j)+1, ···, sn_i(j)+k}. Let us notice that this predicate boils down to sn = del_i[j]+1 when k = 1.
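The weakened predicate can be sketched as a small admissibility test. The Python below is an illustrative rendering (the function names are ours): window_base computes sn_i(j) as the length of the contiguous prefix of already-processed sequence numbers, and in_window checks sn ∈ {sn_i(j)+1, ···, sn_i(j)+k}.

```python
# Weakened FIFO check (sketch): a message with sequence number sn from p_j may
# be processed as soon as it falls inside a sliding window of width k above
# sn_i(j), the largest sn such that ALL of p_j's messages 1..sn_i(j) have
# already been processed. Taking k = 1 gives back the strict rule of line 7.

def window_base(processed_sns):
    """sn_i(j): length of the contiguous prefix 1, 2, ... already processed."""
    base = 0
    while base + 1 in processed_sns:
        base += 1
    return base

def in_window(sn, processed_sns, k):
    """Admissibility predicate sn in {sn_i(j)+1, ..., sn_i(j)+k}."""
    base = window_base(processed_sns)
    return base + 1 <= sn <= base + k
```

With k = 2 and messages 1 and 3 already processed from p_j (so sn_i(j) = 1), message 2 is admissible and message 4 is not: the window {2, 3} keeps message 2 from being overtaken forever, which is precisely what fails when k = +∞.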
More generally, the set of sequence numbers {sn_i(j)+1, ···, sn_i(j)+k} defines a sliding window of sequence numbers which allows the corresponding messages to be processed.

The important point here is the fact that messages can be processed in an order that does not respect their sending order as long as all the messages are processed, which is not guaranteed when k = +∞. Assuming p_j issues an infinite number of transfers, if k = +∞ it is possible that, while all these messages are cr-delivered by p_i, some of them are never processed at lines 8-9 (their processing being always delayed by other messages that arrive after them). The finiteness of the value k prevents this unfair message-processing order from occurring.

The proof of Section 5.2 must be appropriately adapted to show that this modification implements the weakened money-transfer specification.

6 Byzantine Failure Model: Instantiation and Proof

This section presents first the reliable broadcast abstraction whose operations instantiate the r_broadcast() and r_deliver() operations used in the generic algorithm. Then, it proves that the resulting algorithm correctly implements a money transfer object in BAMP_{n,t}[t < n/3].

6.1 Reliable broadcast abstraction in BAMP_{n,t}[t < n/3]

The communication abstraction, denoted BR-Broadcast, was introduced in [7]. It is defined by two operations denoted br_broadcast() and br_deliver() (hence we use the terminology "br-broadcast a message" and "br-deliver a message"). The difference between this communication abstraction and CR-Broadcast lies in the nature of failures. Namely, as a Byzantine process can behave arbitrarily, CRB-Validity, CRB-Integrity, and CRB-Termination-2 cannot be ensured. As an example, it is not possible to ensure that if a Byzantine process br-delivers a message, all correct processes br-deliver it. BR-Broadcast is consequently defined by the following properties.
Termination-1 is the same in both communication abstractions, while Integrity, Validity and Termination-2 consider only correct processes (the difference lies in the added constraints on correct processes).

• BRB-Validity. If a correct process p_i br-delivers a message from a correct process p_j with sequence number sn, then p_j br-broadcast it with sequence number sn.

• BRB-Integrity. For each sequence number sn and sender p_j, a correct process p_i br-delivers at most one message with sequence number sn from sender p_j.

• BRB-Termination-1. If a correct process br-broadcasts a message, it br-delivers it.

• BRB-Termination-2. If a correct process br-delivers a message from a (correct or faulty) process p_j, then all correct processes br-deliver it.

It is shown in [8, 30] that t < n/3 is a necessary requirement to implement BR-Broadcast. Several algorithms implementing this abstraction have been proposed. Among them, the one presented in [7] is the most famous. It works in the BAMP_{n,t}[t < n/3] model, and requires three consecutive communication steps. The one presented in [19] works in the more constrained BAMP_{n,t}[t < n/5] model, but needs only two consecutive communication steps. These algorithms show a trade-off between optimal t-resilience and time-efficiency.

6.2 Proof of the algorithm in BAMP_{n,t}[t < n/3]

The proof has the same structure, and is nearly the same, as the one for the process-crash model presented in Section 5.2.

Notation and high-level intuition. trf_j^sn(k,v) now denotes a money transfer (or the associated processing event at a process) that correct processes br-deliver from p_j with sequence number sn. If p_j is a correct process, this definition is the same as the one used in the model CAMP_{n,t}[∅].
If p_j is Byzantine, TRANSFER messages from p_j do not necessarily correspond to actual transfer() invocations by p_j, but the BRB-Termination-2 property guarantees that all correct processes br-deliver the same set of TRANSFER messages (with the same sequence numbers), and therefore agree on how p_j's behavior should be interpreted. The reliable broadcast thus ensures a form of weak agreement among correct processes in spite of Byzantine failures. This weak agreement is what allows us to move almost seamlessly from a crash-failure model to a Byzantine model, with no change to the algorithm, and only a limited adaptation of its proof.

More concretely, Lemma 2 (for crash failures) becomes the next lemma, whose proof is the same as for Lemma 2, in which the reference to the CRB-Termination-2 property is replaced by a reference to its BRB counterpart.

Lemma 4. If a correct process p_i processes trf_j^sn(k,v), then any correct process processes it.

Similarly, Lemma 3 turns into its Byzantine counterpart, Lemma 5.

Lemma 5. If a correct process p_i br-broadcasts a money transfer trf_i^sn(k,v) (line 3), any correct process eventually br-delivers and processes it.

Proof. When a correct process p_i br-broadcasts a money transfer trf_i^sn(k,v), we have (sn = del_i[i]+1) ∧ (account_i[i] ≥ v), thus when it br-delivers it the predicate of line 7 is satisfied. By Lemma 4, all the correct processes process this money transfer. □ (Lemma 5)

Theorem 2. Algorithm 1 instantiated with BR-Broadcast implements a money transfer object in the BAMP_{n,t}[t < n/3] system model, and ensures that all operations by correct processes terminate.

The model constraint t < n/3 is due only to the fact that Algorithm 1 uses BR-Broadcast (for which t < n/3 is both necessary and sufficient).
As the invocations of balance() by Byzantine processes may return arbitrary values and do not impact the correct processes, they are not required to appear in their local histories.

Proof. The proof that the operations issued by the correct processes terminate is the same as in Lemma 1, where the CRB-Termination properties are replaced by their BRB-Termination counterparts.

To prove MT-compliance, let us first construct mock local histories for the Byzantine processes: the mock local history L_j associated with a Byzantine process p_j is the sequence of money transfers from p_j that the correct processes br-deliver from p_j and that they process. (By Lemma 4 all correct processes process the same set of money transfers from p_j.)

Let p_i be a correct process and S_i be the sequence of operations occurring at p_i, defined in the same way as in the crash failure model. In this construction, the following properties are respected:

• For all trf_j^sn(k,v) ∈ L_j:

– if p_j is correct, it br-broadcast this money transfer and, due to Lemma 5, p_i processes it, hence trf_j^sn(k,v) ∈ S_i;

– if p_j is Byzantine, due to the definition of L_j (sequence of money transfers that correct processes br-deliver from p_j and process), we have trf_j^sn(k,v) ∈ S_i.

• For all op1 = trf_j^sn(k,v) and op2 = trf_j^{sn′}(k′,v′) (two transfers in L_j ⊆ S_i) such that op1 →_j op2, we have sn < sn′; consequently p_i processes op1 before op2, and we have op1 →_{S_i} op2.

• For all pairs op1 and op2 belonging to L_i, their serialization order is the same in L_i as in S_i (same as for the crash case).

It follows that S_i is a serialization of A_{i,T}(H̃) where H̃ = (L_1, ..., L_n), L_i being the sequence of its operations if p_i is correct, and a mock sequence of money transfers if it is Byzantine. The same arguments that were used in the crash failure model can be used here to prove that S_i is MT-compliant.
Since all correct processes observe the same mock sequence of operations L_j for any given Byzantine process p_j, it follows that the algorithm implements an MT-compliant money transfer object in BAMP_{n,t}[t < n/3]. □ (Theorem 2)

6.3 Extending to incomplete Byzantine networks

An algorithm is described in [31] which simulates a fully connected (point-to-point) network on top of an asynchronous Byzantine message-passing system in which, while the underlying communication network is incomplete (not all the pairs of processes are connected by a channel), it is (2t+1)-connected (i.e., any pair of processes is connected by (2t+1) disjoint paths (Footnote 12)). Moreover, it is shown that this connectivity requirement is both necessary and sufficient (Footnote 13).

Hence, denoting such a system model BAMP_{n,t}[t < n/3, (2t+1)-connected], this algorithm builds BAMP_{n,t}[t < n/3] on top of BAMP_{n,t}[t < n/3, (2t+1)-connected] (both models have the same computability power). It follows that the previous money-transfer algorithm works in incomplete (2t+1)-connected asynchronous Byzantine systems where t < n/3.

7 Conclusion

The article has revisited the synchronization side of the money-transfer problem in failure-prone asynchronous message-passing systems. It has presented a generic algorithm that solves money transfer in asynchronous message-passing systems where processes may experience failures. This algorithm uses an underlying reliable broadcast communication abstraction, which differs according to the type of failures (process crashes or Byzantine behaviors) that processes can experience.

Footnote 12: "Disjoint" means that, given any pair of processes p and q, any two paths connecting p and q share no process other than p and q.
Actually, the (2t+1)-connectivity is required only for any pair of correct processes (which are not known in advance).

Footnote 13: This algorithm is a simple extension to asynchronous systems of a result first established in [11] in the context of synchronous Byzantine systems.

In addition to its genericity (and modularity), the proposed algorithm is surprisingly simple (Footnote 14) and particularly efficient (in addition to money-transfer data, each message generated by the algorithm only carries one sequence number). As a side effect, this algorithm has shown that, in the crash failure model, money transfer is a weaker problem than the construction of a read/write register. As far as the Byzantine failure model is concerned, we conjecture that t < n/3 is a necessary requirement for money transfer (as it is for the construction of a read/write register [18]).

Finally, it is worth noticing that this article adds one more member to the family of algorithms that strive to "unify" the crash failure model and the Byzantine failure model, as studied in [6, 12, 20, 26].

Acknowledgments

This work was partially supported by the French ANR projects 16-CE40-0023-03 DESCARTES, devoted to layered and modular structures in distributed computing, and ANR-16-CE25-0005-03 O'Browser, devoted to decentralized applications on browsers.

References

[1] Afek Y., Attiya H., Dolev D., Gafni E., Merritt M., and Shavit N., Atomic snapshots of shared memory. Journal of the ACM, 40(4):873-890 (1993)

[2] Ahamad M., Neiger G., Burns J.E., Hutto P.W., and Kohli P., Causal memory: definitions, implementation and programming. Distributed Computing, 9:37-49 (1995)

[3] Aigner M. and Ziegler G., Proofs from THE BOOK (4th edition). Springer, 274 pages, ISBN 978-3-642-00856-6 (2010)

[4] Attiya H., Efficient and robust sharing of memory in message-passing systems.
Journal of Algorithms, 34(1):109-127 (2000)

[5] Attiya H., Bar-Noy A., and Dolev D., Sharing memory robustly in message-passing systems. Journal of the ACM, 42(1):121-132 (1995)

[6] Bazzi R. and Neiger G., Optimally simulating crash failures in a Byzantine environment. Proc. 6th Workshop on Distributed Algorithms (WDAG'91), Springer LNCS 579, pp. 108-128 (1991)

[7] Bracha G., Asynchronous Byzantine agreement protocols. Information & Computation, 75(2):130-143 (1987)

Footnote 14: Let us recall that, in science, simplicity is a first-class property [3]. As stated by A. Perlis, recipient of the first Turing Award, "Simplicity does not precede complexity, but follows it."

[8] Bracha G. and Toueg S., Asynchronous consensus and broadcast protocols. Journal of the ACM, 32(4):824-840 (1985)

[9] Cachin Ch., Guerraoui R., and Rodrigues L., Reliable and secure distributed programming. Springer, 367 pages, ISBN 978-3-642-15259-7 (2011)

[10] Collins D., Guerraoui R., Komatovic J., Monti M., Xygkis A., Pavlovic M., Kuznetsov P., Pignolet Y.-A., Seredinschi D.A., and Tonlikh A., Online payments by merely broadcasting messages. Proc. 50th IEEE/IFIP Int'l Conference on Dependable Systems and Networks (DSN'20), 10 pages (2020)

[11] Dolev D., The Byzantine generals strike again. Journal of Algorithms, 3:14-30 (1982)

[12] Dolev D. and Gafni E., Some garbage in - some garbage out: asynchronous t-Byzantine as asynchronous benign t-resilient system with fixed t-Trojan horse inputs. Tech Report, arXiv:1607.01210, 14 pages (2016)

[13] Fernández Anta A., Konwar M.K., Georgiou Ch., and Nicolaou N.C., Formalizing and implementing distributed ledger objects. SIGACT News, 49(2):58-76 (2018)

[14] Guerraoui R., Kuznetsov P., Monti M., Pavlovic M., and Seredinschi D.A., The consensus number of a cryptocurrency. Proc. 38th ACM Symposium on Principles of Distributed Computing (PODC'19), ACM Press, pp.
307-316 (2019)

[15] Gupta S., A non-consensus based decentralized financial transaction processing model with support for efficient auditing. Master Thesis, Arizona State University, 83 pages (2016)

[16] Hadzilacos V. and Toueg S., A modular approach to fault-tolerant broadcasts and related problems. Tech Report 94-1425, 83 pages, Cornell University (1994)

[17] Herlihy M.P. and Wing J.M., Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems, 12(3):463-492 (1990)

[18] Imbs D., Rajsbaum S., Raynal M., and Stainer J., Read/write shared memory abstraction on top of an asynchronous Byzantine message-passing system. Journal of Parallel and Distributed Computing, 93-94:1-9 (2016)

[19] Imbs D. and Raynal M., Trading t-resilience for efficiency in asynchronous Byzantine reliable broadcast. Parallel Processing Letters, 26(4), 8 pages (2016)

[20] Imbs D., Raynal M., and Stainer J., Are Byzantine failures really different from crash failures? Proc. 30th Symposium on Distributed Computing (DISC'16), Springer LNCS 9888, pp. 215-229 (2016)

[21] Knuth D.E., Ancient Babylonian algorithms. Communications of the ACM, 15(7):671-677 (1972)

[22] Lamport L., On interprocess communication, Part I: basic formalism; Part II: algorithms. Distributed Computing, 1(2):77-101 (1986)

[23] Lamport L., Shostak R., and Pease M., The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3):382-401 (1982)

[24] Mostéfaoui A. and Raynal M., Two-bit messages are sufficient to implement atomic read/write registers in crash-prone systems. Proc. 35th ACM Symposium on Principles of Distributed Computing (PODC'16), ACM Press, pp. 381-390 (2016)

[25] Nakamoto S., Bitcoin: a peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf (2008) [last accessed March 31, 2020]

[26] Neiger G.
and Toueg S., Automatically increasing the fault-tolerance of distributed algorithms. Journal of Algorithms, 11(3):374-419 (1990)

[27] Neugebauer O., The exact sciences in antiquity. Brown University Press, 240 pages (1957)

[28] Pease M., Shostak R., and Lamport L., Reaching agreement in the presence of faults. Journal of the ACM, 27:228-234 (1980)

[29] Raynal M., Concurrent programming: algorithms, principles and foundations. Springer, 515 pages, ISBN 978-3-642-32026-2 (2013)

[30] Raynal M., Fault-tolerant message-passing distributed systems: an algorithmic approach. Springer, 550 pages, ISBN 978-3-319-94140-0 (2018)

[31] Raynal M., From incomplete to complete networks in asynchronous Byzantine systems. Tech report, 10 pages (2020)

[32] Riesen A., Satoshi Nakamoto and the financial crisis of 2008. https://andrewriesen.me/2017/12/18/2017-12-18-satoshi-nakamoto-and-the-financial-crisis-of-2008/ [last accessed April 22, 2020]

[33] Taubenfeld G., Synchronization algorithms and concurrent programming. Pearson Education/Prentice Hall, 423 pages, ISBN 0-131-97259-6 (2006).