SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. 
We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> Over the last decade, the availability of public image repositories and recognition benchmarks has enabled rapid progress in visual object category and instance detection. Today we are witnessing the birth of a new generation of sensing technologies capable of providing high quality synchronized videos of both color and depth, the RGB-D (Kinect-style) camera. With its advanced sensing capabilities and the potential for mass adoption, this technology represents an opportunity to dramatically increase robotic object recognition, manipulation, navigation, and interaction capabilities. In this paper, we introduce a large-scale, hierarchical multi-view object dataset collected using an RGB-D camera. The dataset contains 300 objects organized into 51 categories and has been made publicly available to the research community so as to enable rapid progress based on this promising technology. This paper describes the dataset collection procedure and introduces techniques for RGB-D based object recognition and detection, demonstrating that combining color and depth information substantially improves quality of results. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtraction and the negative evidence, i.e., a visual word that is mutually missing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality reduction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net).
We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. 
<s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Datasets for Fine-Tuning <s> Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. This work explores the suitability for instance retrieval of image- and region-wise representations pooled from an object detection CNN such as Faster R-CNN. We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results. <s> BIB011
The nature of the datasets used for fine-tuning is key to learning discriminative CNN features. ImageNet BIB003 only provides images with class labels, so a pre-trained CNN model is competent at discriminating images of different object/scene classes, but may be less effective at telling apart images that fall in the same class (e.g., architecture) yet depict different instances (e.g., "Eiffel Tower" and "Notre-Dame"). It is therefore important to fine-tune the CNN model on task-oriented datasets. The datasets used for fine-tuning in recent years are listed in Table 3; buildings and common objects are the focus. The milestone work on fine-tuning is BIB006. It collects the Landmarks dataset through a semi-automated approach: automatically searching for popular landmarks in the Yandex search engine, followed by a manual estimation of the proportion of relevant images among the top ranks. This dataset contains 672 classes of various architectures, and the fine-tuned network produces superior features on landmark-related datasets such as Oxford5k BIB001 and Holidays BIB002, but has decreased performance on Ukbench, where common objects are present. Babenko et al. BIB006 also fine-tune CNNs on the Multi-view RGB-D dataset BIB004, which contains turntable views of 300 household objects, in order to improve performance on Ukbench. The Landmarks dataset is later used by Gordo et al. BIB008 for fine-tuning, after an automatic cleaning step based on SIFT matching. In BIB009, Radenović et al. employ retrieval and Structure-from-Motion methods to build 3D landmark models so that images depicting the same architecture can be grouped. Using this labeled dataset, the linear discriminative projections (denoted as L_w in Table 5) outperform the previous whitening technique BIB005. Another dataset, called Tokyo Time Machine, is collected using Google Street View Time Machine, which provides images depicting the same places over time BIB010. While most of the above datasets focus on landmarks, Bell et al. BIB007 build a Product dataset consisting of furniture by developing a crowdsourced pipeline to draw connections between in-situ objects and the corresponding products. It is also feasible to fine-tune on the query sets suggested in BIB011, but this method may not adapt well to new query types.
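To make the classification-style fine-tuning recipe concrete, the minimal sketch below re-heads an ImageNet pre-trained network for a 672-class landmark dataset and trains it with a classification loss, in the spirit of BIB006; the AlexNet backbone, directory layout, and hyper-parameters are illustrative assumptions rather than the exact setup of the cited work.

```python
# Minimal sketch of classification-based fine-tuning on a landmark dataset
# (in the spirit of BIB006). Paths, hyper-parameters and the AlexNet backbone
# are illustrative assumptions, not the exact configuration of the cited work.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_LANDMARK_CLASSES = 672  # number of classes reported for the Landmarks dataset

# Start from an ImageNet pre-trained backbone and replace the last FC layer.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_LANDMARK_CLASSES)

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Hypothetical directory layout: one sub-folder per landmark class.
train_set = datasets.ImageFolder("landmarks/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# After fine-tuning, an intermediate FC activation (e.g., FC6/FC7) is used
# as the image descriptor for Euclidean-distance retrieval.
```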
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> Popular sites like Houzz, Pinterest, and LikeThatDecor, have communities of users helping each other answer questions about products in images. In this paper we learn an embedding for visual search in interior design. Our embedding contains two different domains of product images: products cropped from internet scenes, and products in their iconic form. With such a multi-domain embedding, we demonstrate several applications of visual search including identifying products in scenes and finding stylistically similar products. To obtain the embedding, we train a convolutional neural network on pairs of images. We explore several training architectures including re-purposing object classifiers, using siamese networks, and using multitask learning. We evaluate our search quantitatively and qualitatively and demonstrate high quality results for search across multiple visual domains, enabling new applications in interior design. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. 
In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Networks in Fine-Tuning <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB006
The CNN architectures used in fine-tuning mainly fall into two types: classification-based networks and verification-based networks. A classification-based network is trained to classify architectures into pre-defined categories. Since there is usually no class overlap between the training set and the query images, a learned embedding, e.g., FC6 or FC7 in AlexNet, is used for Euclidean-distance-based retrieval. This train/test strategy is employed in BIB001, in which the last FC layer is modified to have 672 nodes, corresponding to the number of classes in the Landmarks dataset. A verification network may use either a siamese architecture with a pairwise loss or a triplet loss, and has been more widely employed for fine-tuning. A standard siamese network based on AlexNet and the contrastive loss is employed in BIB002. In BIB004, Radenović et al. propose to replace the FC layers with a MAC layer BIB003. Moreover, with the 3D architecture models built in BIB004, training pairs can be mined: positive image pairs are selected based on the number of co-observed 3D points (matched SIFT features), while hard negatives are defined as images with small distances between their CNN descriptors. These image pairs are fed into the siamese network, and the contrastive loss is computed on the ℓ2-normalized MAC features. In a work concurrent with BIB004, Gordo et al. BIB005 fine-tune a triplet-loss network and a region proposal network on the Landmarks dataset BIB001. The advantage of BIB005 lies in its localization ability, which excludes the background from feature learning and extraction. In both works, the fine-tuned models exhibit state-of-the-art accuracy on landmark retrieval datasets including Oxford5k, Paris6k and Holidays, as well as good generalization on Ukbench (Table 5). In BIB006, a VLAD-like layer, amenable to training via back-propagation, is plugged into the network after the last convolutional layer. Meanwhile, a new triplet loss is designed to make use of the weakly supervised Google Street View Time Machine data.
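The verification-based setup can be illustrated with a short sketch: a shared (siamese) branch that computes ℓ2-normalized MAC descriptors from the last convolutional feature map, trained with a contrastive loss over mined pairs, roughly in the spirit of BIB004. The VGG backbone and the margin value are assumptions made for the example, not the exact choices of the cited work.

```python
# Sketch of a siamese branch with a MAC layer and contrastive loss on
# l2-normalized descriptors (in the spirit of BIB004). The VGG backbone and
# margin value are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class MACBranch(nn.Module):
    """Backbone truncated at the last conv block, followed by MAC pooling."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features  # convolutional layers only, FC layers removed

    def forward(self, x):
        fmap = self.features(x)                          # B x C x H x W
        mac = F.adaptive_max_pool2d(fmap, 1).flatten(1)  # global max over H, W
        return F.normalize(mac, p=2, dim=1)              # l2-normalized MAC descriptor

def contrastive_loss(desc_a, desc_b, is_match, margin=0.7):
    """is_match = 1 for positive pairs (e.g., many co-observed 3D points),
    0 for hard negatives mined by small descriptor distance."""
    d = torch.norm(desc_a - desc_b, dim=1)
    pos = is_match * d.pow(2)
    neg = (1 - is_match) * F.relu(margin - d).pow(2)
    return 0.5 * (pos + neg).mean()

# Both branches share weights (siamese): reuse the same module for both inputs.
branch = MACBranch()
img_a, img_b = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
labels = torch.tensor([1., 0., 1., 0.])
loss = contrastive_loss(branch(img_a), branch(img_b), labels)
loss.backward()
```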
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. 
In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. 
The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> In the well-known Bag-of-Words model, local features, such as the SIFT descriptor, are extracted and quantized into visual words. Then, an index is created to reduce computational burden. However, local clues serve as low-level representations that can not represent high-level semantic concepts. Recently, the success of deep features extracted from convolutional neural networks (CNN) has shown promising results toward bridging the semantic gap. Inspired by this, we attempt to introduce deep features into inverted index based image retrieval and thus propose the DeepIndex framework. Moreover, considering the compensation of different deep features, we incorporate multiple deep features from different fully connected layers, resulting in the multiple DeepIndex. We find the optimal integration of one midlevel deep feature and one high-level deep feature, from two different CNN architectures separately. This can be treated as an attempt to further reduce the semantic gap. Extensive experiments on three benchmark datasets demonstrate that, the proposed DeepIndex method is competitive with the state-of-the-art on Holidays (85.65% mAP), Paris (81.24% mAP), and UKB (3.76 score). In addition, our method is efficient in terms of both memory and time cost. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> Fisher Vectors (FV) and Convolutional Neural Networks (CNN) are two image classification pipelines with different strengths. While CNNs have shown superior accuracy on a number of classification tasks, FV classifiers are typically less costly to train and evaluate. We propose a hybrid architecture that combines their strengths: the first unsupervised layers rely on the FV while the subsequent fully-connected supervised layers are trained with back-propagation. We show experimentally that this hybrid architecture significantly outperforms standard FV systems without incurring the high cost that comes with CNNs. We also derive competitive mid-level features from our architecture that are readily applicable to other class sets and even to new tasks.
<s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> This work proposes a simple instance retrieval pipeline based on encoding the convolutional features of CNN using the bag of words aggregation scheme (BoW). Assigning each local array of activations in a convolutional layer to a visual word produces an assignment map, a compact representation that relates regions of an image with a visual word. We use the assignment map for fast spatial reranking, obtaining object localizations that are used for query expansion. We demonstrate the suitability of the BoW representation based on local CNN features for instance retrieval, achieving competitive performance on the Oxford and Paris buildings benchmarks. We show that our proposed system for CNN feature aggregation with BoW outperforms state-of-the-art techniques using sum pooling at a subset of the challenging TRECVid INS benchmark. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Feature Encoding and Indexing <s> The Convolutional Neural Networks (CNNs) have achieved breakthroughs on several image retrieval benchmarks. Most previous works re-formulate CNNs as global feature extractors used for linear scan. This paper proposes a Multi-layer Orderless Fusion (MOF) approach to integrate the activations of CNN in the Bag-of-Words (BoW) framework. Specifically, through only one forward pass in the network, we extract multi-layer CNN activations of local patches. Activations from each layer are aggregated in one BoW model, and several BoW models are combined with late fusion. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method. <s> BIB011
The encoding/indexing procedure of hybrid methods resembles SIFT-based retrieval: VLAD/FV encoding under a small codebook, or an inverted index under a large codebook. The VLAD/FV encoding schemes, such as BIB003, BIB006, follow the standard practice established for SIFT features BIB001, BIB002, so we do not detail them here. On the other hand, several works exploit the inverted index on patch-based CNN features BIB010, BIB007, BIB011; again, standard techniques from SIFT-based methods, such as HE, are employed BIB011. Apart from the above strategies, we note that several works BIB004, BIB005, BIB008 extract multiple region descriptors per image and perform many-to-many matching, called "spatial search" BIB004. This method improves the translation and scale invariance of the retrieval system, but may encounter efficiency problems. A reverse strategy to applying encoding on top of CNN activations is to build a CNN structure (mainly consisting of FC layers) on top of SIFT-based representations such as FV; by training a classification model on natural images, an intermediate FC layer can be used for retrieval BIB009.
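For concreteness, the sketch below applies VLAD encoding with a small codebook to local CNN activations, treating each spatial column of a convolutional feature map as a local descriptor, following the standard SIFT-style practice referenced above; the codebook size, descriptor dimension, and the use of scikit-learn k-means are illustrative assumptions.

```python
# Sketch of VLAD aggregation over local CNN activations, following the
# standard SIFT-style practice described above. Codebook size and the use of
# scikit-learn k-means are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def vlad_encode(local_descriptors, codebook):
    """local_descriptors: N x D array (e.g., columns of a conv feature map).
    codebook: K x D array of visual-word centroids (small codebook)."""
    k, d = codebook.shape
    assignments = np.argmin(
        ((local_descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
    vlad = np.zeros((k, d), dtype=np.float64)
    for word in range(k):
        members = local_descriptors[assignments == word]
        if len(members):
            vlad[word] = (members - codebook[word]).sum(axis=0)  # residual aggregation
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))  # signed square-root (power) normalization
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad

# Toy usage: 49 local descriptors of dimension 512 (a 7x7x512 conv map),
# encoded with a small codebook of 16 visual words.
rng = np.random.default_rng(0)
train_desc = rng.normal(size=(1000, 512))
codebook = KMeans(n_clusters=16, n_init=4, random_state=0).fit(train_desc).cluster_centers_
image_desc = vlad_encode(rng.normal(size=(49, 512)), codebook)
print(image_desc.shape)  # (16 * 512,) fixed-length representation
```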
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. 
To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. 
Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Relationship between SIFT-and CNN-Based Methods <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB009
In this survey, we categorize the current literature into six fine-grained classes. The differences among the six categories and some representative works are summarized in Tables 1 and 5. Our observations are as follows. First, the hybrid method can be viewed as a transition zone from SIFT- to CNN-based methods. It resembles the SIFT-based methods in all aspects except that CNN features are extracted as the local descriptor. Since the network is accessed multiple times during patch feature extraction, the efficiency of the feature extraction step may be compromised. Second, the single-pass CNN methods tend to merge the individual steps of the SIFT-based and hybrid methods. In Table 5, the "pre-trained single-pass" category integrates the feature detection and description steps; in the "fine-tuned single-pass" methods, the image-level descriptor is usually extracted in an end-to-end manner, so no separate encoding process is needed. In BIB009, a "PCA" layer is integrated for discriminative dimension reduction, taking a further step towards end-to-end feature learning. Third, fixed-length representations are gaining popularity due to efficiency considerations. They can be obtained by aggregating local descriptors (SIFT or CNN) BIB006, BIB001, BIB003, BIB004, by direct pooling BIB007, BIB008, or by end-to-end feature computation BIB005, BIB009. Usually, dimension reduction methods such as PCA can be employed on top of the fixed-length representations, and ANN search methods such as PQ BIB001 or hashing BIB002 can be used for fast retrieval.
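A minimal sketch of the last point, under toy assumptions about dimensions: fixed-length descriptors are reduced by whitened PCA and ranked by inner product; in a large-scale system the brute-force ranking would be replaced by a PQ- or hashing-based ANN index.

```python
# Sketch: PCA(-whitening) reduction of fixed-length image descriptors followed
# by nearest-neighbor search. Dimensions are illustrative; in a large-scale
# system the brute-force search would be replaced by PQ or hashing (ANN).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
database = rng.normal(size=(10_000, 4096))   # e.g., pooled CNN descriptors
queries = rng.normal(size=(5, 4096))

# Learn a 128-D whitened PCA projection (ideally on held-out descriptors).
pca = PCA(n_components=128, whiten=True).fit(database)

def embed(x):
    z = pca.transform(x)
    return z / np.linalg.norm(z, axis=1, keepdims=True)  # re-l2-normalize after projection

db_embed, q_embed = embed(database), embed(queries)

# Exact search by inner product (equivalent to Euclidean ranking on unit vectors).
scores = q_embed @ db_embed.T
ranked = np.argsort(-scores, axis=1)[:, :10]  # top-10 ranked list per query
print(ranked.shape)
```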
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R^d, the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> Semantic hashing [1] seeks compact binary codes of data-points so that the Hamming distance between codewords correlates with semantic similarity. In this paper, we show that the problem of finding a best code for a given dataset is closely related to the problem of graph partitioning and can be shown to be NP hard. By relaxing the original problem, we obtain a spectral method whose solutions are simply a subset of thresholded eigenvectors of the graph Laplacian. By utilizing recent results on convergence of graph Laplacian eigenvectors to the Laplace-Beltrami eigenfunctions of manifolds, we show how to efficiently calculate the code of a novel data-point. Taken together, both learning the code and applying it to a novel point are extremely simple. Our experiments show that our codes outperform the state-of-the-art. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> SIFT-like local feature descriptors are ubiquitously employed in computer vision applications such as content-based retrieval, video analysis, copy detection, object recognition, photo tourism, and 3D reconstruction. Feature descriptors can be designed to be invariant to certain classes of photometric and geometric transformations, in particular, affine and intensity scale transformations. However, real transformations that an image can undergo can only be approximately modeled in this way, and thus most descriptors are only approximately invariant in practice. Second, descriptors are usually high dimensional (e.g., SIFT is represented as a 128-dimensional vector). In large-scale retrieval and matching problems, this can pose challenges in storing and retrieving descriptor data. We map the descriptor vectors into the Hamming space in which the Hamming metric is used to compare the resulting representations. This way, we reduce the size of the descriptors by representing them as short binary strings and learn descriptor invariance from examples. We show extensive experimental validation, demonstrating the advantage of the proposed approach. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> Approximate nearest neighbor search is an efficient strategy for large-scale image retrieval. Encouraged by the recent advances in convolutional neural networks (CNNs), we propose an effective deep learning framework to generate binary hash codes for fast image retrieval. Our idea is that when the data labels are available, binary codes can be learned by employing a hidden layer for representing the latent concepts that dominate the class labels. The utilization of the CNN also allows for learning image representations.
Unlike other supervised methods that require pair-wised inputs for binary code learning, our method learns hash codes and image representations in a point-wised manner, making it suitable for large-scale datasets. Experimental results show that our method outperforms several state-of-the-art hashing algorithms on the CIFAR-10 and MNIST datasets. We further demonstrate its scalability and efficacy on a large-scale dataset of 1 million clothing images. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> With the rapid growth of web images, hashing has received increasing interests in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multi-level semantic structure of images associated with multiple labels have not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in term of ranking evaluation metrics when tested on multi-label image datasets. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Hashing and Instance Retrieval <s> Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics. <s> BIB006
Hashing is a major solution to the approximate nearest neighbor problem. It can be categorized into locality sensitive hashing (LSH) BIB001 and learning to hash. LSH is data-independent and is usually outperformed by learning to hash, a data-dependent approach. A recent survey BIB006 categorizes learning to hash into quantization and pairwise similarity preserving methods. The quantization methods are briefly discussed in Section 3.3.2. Among the pairwise similarity preserving methods, popular hand-crafted examples include spectral hashing BIB002 and LDA hashing BIB003. Recently, hashing has seen a major shift from hand-crafted to supervised hashing with deep neural networks. These methods take the original image as input and produce a learned feature before binarization BIB004, BIB005. Most of these methods, however, focus on class-level image retrieval, a task different from the instance retrieval discussed in this survey. For instance retrieval, when adequate training data can be collected, such as for architecture and pedestrians, deep hashing methods may be of critical importance.
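As a concrete baseline on the data-independent side, the sketch below builds signed-random-projection LSH codes for fixed-length descriptors and ranks the database by Hamming distance; the code length and descriptor dimension are illustrative assumptions, and learning-to-hash methods would replace the random projection with a learned (often deep) mapping before binarization.

```python
# Sketch of locality-sensitive hashing via signed random projections, the
# data-independent baseline mentioned above. Code length and descriptor
# dimension are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM, BITS = 512, 64
projection = rng.normal(size=(DIM, BITS))  # random hyperplanes

def hash_codes(descriptors):
    """descriptors: N x DIM real-valued vectors -> N x BITS binary codes."""
    return (descriptors @ projection > 0).astype(np.uint8)

def hamming_distance(query_code, db_codes):
    # Count differing bits between the query code and every database code.
    return (query_code[None, :] != db_codes).sum(axis=1)

database = rng.normal(size=(100_000, DIM))
query = rng.normal(size=(1, DIM))

db_codes = hash_codes(database)
q_code = hash_codes(query)[0]
top10 = np.argsort(hamming_distance(q_code, db_codes))[:10]  # shortlist by Hamming rank
print(top10)
```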
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to "visual words" selected from a discrete vocabulary. This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment.
We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. ::: ::: We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets. At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> We propose a simple and straightforward way of creating powerful image representations via cross-dimensional weighting and aggregation of deep convolutional neural network layer outputs. We first present a generalized framework that encompasses a broad family of approaches and includes cross-dimensional pooling and weighting steps. 
We then propose specific non-parametric schemes for both spatial- and channel-wise weighting that boost the effect of highly active spatial responses and at the same time regulate burstiness effects. We experiment on different public datasets for image search and show that our approach outperforms the current state-of-the-art for approaches based on pre-trained networks. We also provide an easy-to-use, open source implementation that reproduces our results. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It has also been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregation approaches developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptors. In this paper we investigate possible ways to aggregate local deep features to produce compact global descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides arguably the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Image Retrieval Datasets <s> Image representations derived from pre-trained Convolutional Neural Networks (CNNs) have become the new state of the art in computer vision tasks such as instance retrieval. 
This work explores the suitability for instance retrieval of image- and region-wise representations pooled from an object detection CNN such as Faster R-CNN. We take advantage of the object proposals learned by a Region Proposal Network (RPN) and their associated CNN features to build an instance search pipeline composed of a first filtering stage followed by a spatial reranking. We further investigate the suitability of Faster R-CNN features when the network is fine-tuned for the same objects one wants to retrieve. We assess the performance of our proposed system with the Oxford Buildings 5k, Paris Buildings 6k and a subset of TRECVid Instance Search 2013, achieving competitive results. <s> BIB010
Five popular instance retrieval datasets are used in this survey. Statistics of these datasets are summarized in Table 4. Holidays BIB002 is collected by Jégou et al. from personal holiday albums, so most of the images depict various scene types. The database has 1,491 images composed of 500 groups of similar images. Each image group has one query, totaling 500 query images. Most SIFT-based methods employ the original images, except BIB004 , BIB005 , which manually rotate the images into an upright orientation. Many recent CNN-based methods BIB007 , BIB009 , BIB008 also use the rotated version of Holidays. In Table 5, results on both versions of Holidays are shown (separated by "/"). Rotating the images usually brings a 2-3 percent mAP improvement. Ukbench consists of 10,200 images of various content, such as objects, scenes, and CD covers. All the images are divided into 2,550 groups. Each group has four images depicting the same object/scene under various angles, illuminations, translations, etc. Each image in this dataset is taken as the query in turn, so there are 10,200 queries. Oxford5k BIB001 is collected by crawling images from Flickr using the names of 11 different landmarks in Oxford. A total of 5,062 images form the image database. The dataset defines five queries for each landmark by hand-drawn bounding boxes, so 55 query regions of interest (ROIs) exist in total. Each database image is assigned one of four labels: good, OK, junk, or bad. The first two labels denote true matches to the query ROIs, while "bad" denotes the distractors. In junk images, less than 25 percent of the object is visible, or the object undergoes severe occlusion or distortion, so these images have zero impact on retrieval accuracy. Flickr100k BIB003 contains 99,782 high-resolution images crawled from Flickr's 145 most popular tags. In the literature, this dataset is typically added to Oxford5k to test the scalability of retrieval algorithms. Paris6k BIB003 consists of 6,412 images crawled from Flickr with queries for 11 specific Paris landmarks. Each landmark has five queries, so there are also 55 queries with bounding boxes. The database images are annotated with the same four types of labels as Oxford5k. Two major evaluation protocols exist for Oxford5k and Paris6k. For SIFT-based methods, the cropped regions are usually used as queries. For CNN-based methods, some employ the full-sized query images BIB006 , BIB009 ; others follow the standard cropping protocol, either by cropping the ROI and feeding it into the CNN BIB007 or by extracting CNN features from the full image and selecting those falling in the ROI BIB010 . Using the full image may lead to an mAP improvement. These protocols are indicated in Table 5.
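To make the Oxford-style protocol concrete, the sketch below computes average precision for a single query ROI, treating good/OK images as positives and skipping junk images so that they neither help nor hurt the score; mAP is then the mean over the 55 queries. The function and variable names are illustrative and do not correspond to any official evaluation script.

```python
def average_precision(ranked_ids, positives, junk):
    """AP for one query: 'positives' are the good/OK images, 'junk' images are ignored."""
    positives, junk = set(positives), set(junk)
    hits, precision_sum, rank = 0, 0.0, 0
    for img_id in ranked_ids:           # database ids sorted by decreasing similarity
        if img_id in junk:              # junk has zero impact on retrieval accuracy
            continue
        rank += 1
        if img_id in positives:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(len(positives), 1)

# mAP = mean of average_precision over all query ROIs (55 for Oxford5k and Paris6k).
```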
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> The state of the art in visual object retrieval from large databases is achieved by systems that are inspired by text retrieval. A key component of these approaches is that local regions of images are characterized using high-dimensional descriptors which are then mapped to ldquovisual wordsrdquo selected from a discrete vocabulary.This paper explores techniques to map each visual region to a weighted set of words, allowing the inclusion of features which were lost in the quantization stage of previous systems. The set of visual words is obtained by selecting words based on proximity in descriptor space. We describe how this representation may be incorporated into a standard tf-idf architecture, and how spatial verification is modified in the case of this soft-assignment. 
We evaluate our method on the standard Oxford Buildings dataset, and introduce a new dataset for evaluation. Our results exceed the current state of the art retrieval performance on these datasets, particularly on queries with poor initial recall where techniques like query expansion suffer. Overall we show that soft-assignment is always beneficial for retrieval with large vocabularies, at a cost of increased storage requirements for the index. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> This article improves recent methods for large scale image search. We first analyze the bag-of-features approach in the framework of approximate nearest neighbor search. This leads us to derive a more precise representation based on Hamming embedding (HE) and weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within an inverted file and are efficiently exploited for all images in the dataset. We then introduce a graph-structured quantizer which significantly speeds up the assignment of the descriptors to visual words. A comparison with the state of the art shows the interest of our approach when high accuracy is needed. ::: ::: Experiments performed on three reference datasets and a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short-list of images, is shown to be complementary to our weak geometric consistency constraints. Our approach is shown to outperform the state-of-the-art on the three datasets. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> Burstiness, a phenomenon initially observed in text retrieval, is the property that a given visual element appears more times in an image than a statistically independent model would predict. In the context of image search, burstiness corrupts the visual similarity measure, i.e., the scores used to rank the images. In this paper, we propose a strategy to handle visual bursts for bag-of-features based image search systems. Experimental results on three reference datasets show that our method significantly and consistently outperforms the state of the art. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> A novel similarity measure for bag-of-words type large scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method and is more discriminative than both L2-based soft assignment and Hamming embedding. ::: ::: We show experimentally that the novel similarity function achieves mean average precision that is superior to any result published in the literature on a number of standard datasets. At the same time, retrieval with the proposed similarity function is faster than the reference method. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. 
In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9% to 58.3%. Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> In this paper we address the problem of image retrieval from millions of database images. We improve the vocabulary tree based approach by introducing contextual weighting of local features in both descriptor and spatial domains. Specifically, we propose to incorporate efficient statistics of neighbor descriptors both on the vocabulary tree and in the image spatial domain into the retrieval. These contextual cues substantially enhance the discriminative power of individual local features with very small computational overhead. We have conducted extensive experiments on benchmark datasets, i.e., the UKbench, Holidays, and our new Mobile dataset, which show that our method reaches state-of-the-art performance with much less computation. Furthermore, the proposed method demonstrates excellent scalability in terms of both retrieval accuracy and efficiency on large-scale experiments using 1.26 million images from the ImageNet database as distractors. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> This paper proposes an asymmetric Hamming Embedding scheme for large scale image search based on local descriptors. The comparison of two descriptors relies on an vector-to-binary code comparison, which limits the quantization error associated with the query compared with the original Hamming Embedding method. The approach is used in combination with an inverted file structure that offers high efficiency, comparable to that of a regular bag-of-features retrieval system. The comparison is performed on two popular datasets. Our method consistently improves the search quality over the symmetric version. The trade-off between memory usage and precision is evaluated, showing that the method is especially useful for short binary signatures. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. 
The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> One fundamental problem in object retrieval with the bag-of-visual words (BoW) model is its lack of spatial information. Although various approaches are proposed to incorporate spatial constraints into the BoW model, most of them are either too strict or too loose so that they are only effective in limited cases. We propose a new spatially-constrained similarity measure (SCSM) to handle object rotation, scaling, view point change and appearance deformation. The similarity measure can be efficiently calculated by a voting-based method using inverted files. Object retrieval and localization are then simultaneously achieved without post-processing. Furthermore, we introduce a novel and robust re-ranking method with the k-nearest neighbors of the query for automatically refining the initial search results. Extensive performance evaluations on six public datasets show that SCSM significantly outperforms other spatial models, while k-NN re-ranking outperforms most state-of-the-art approaches using query expansion. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> Most of the current image indexing systems for retrieval view a database as a set of individual images. 
It limits the flexibility of the retrieval framework to conduct sophisticated cross-image analysis, resulting in higher memory consumption and sub-optimal retrieval accuracy. To conquer this issue, we propose cross indexing with grouplets, where the core idea is to view the database images as a set of grouplets, each of which is defined as a group of highly relevant images. Because a grouplet groups similar images together, the number of grouplets is smaller than the number of images, thus naturally leading to less memory cost. Moreover, the definition of a grouplet could be based on customized relations, allowing for seamless integration of advanced image features and data mining techniques like the deep convolutional neural network (DCNN) in off-line indexing . To validate the proposed framework, we construct three different types of grouplets , which are respectively based on local similarity , regional relation, and global semantic modeling. Extensive experiments on public benchmark datasets demonstrate the efficiency and superior performance of our approach. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> This paper considers the task of image search using the Bag-of-Words (BoW) model. In this model, the precision of visual matching plays a critical role. Conventionally, local cues of a keypoint, e.g., SIFT, are employed. However, such strategy does not consider the contextual evidences of a keypoint, a problem which would lead to the prevalence of false matches. To address this problem and enable accurate visual matching, this paper proposes to integrate discriminative cues from multiple contextual levels, i.e., local, regional, and global, via probabilistic analysis. "True match" is defined as a pair of keypoints corresponding to the same scene location on all three levels (Fig. 1). Specifically, the Convolutional Neural Network (CNN) is employed to extract features from regional and global patches. We show that CNN feature is complementary to SIFT due to its semantic awareness and compares favorably to several other descriptors such as GIST, HSV, etc. To reduce memory usage, we propose to index CNN features outside the inverted file, communicated by memory-efficient pointers. Experiments on three benchmark datasets demonstrate that our method greatly promotes the search accuracy when CNN feature is integrated. We show that our method is efficient in terms of time cost compared with the BoW baseline, and yields competitive accuracy with the state-of-the-arts. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. 
The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> Most image instance retrieval pipelines are based on comparison of vectors known as global image descriptors between a query image and the database images. Due to their success in large scale image classification, representations extracted from Convolutional Neural Networks (CNN) are quickly gaining ground on Fisher Vectors (FVs) as state-of-the-art global descriptors for image instance retrieval. While CNN-based descriptors are generally remarked for good retrieval performance at lower bitrates, they nevertheless present a number of drawbacks including the lack of robustness to common object transformations such as rotations compared with their interest point based FV counterparts. ::: In this paper, we propose a method for computing invariant global descriptors from CNNs. Our method implements a recently proposed mathematical theory for invariance in a sensory cortex modeled as a feedforward neural network. The resulting global descriptors can be made invariant to multiple arbitrary transformation groups while retaining good discriminativeness. ::: Based on a thorough empirical evaluation using several publicly available datasets, we show that our method is able to significantly and consistently improve retrieval results every time a new type of invariance is incorporated. We also show that our method which has few parameters is not prone to overfitting: improvements generalize well across datasets with different properties with regard to invariances. Finally, we show that our descriptors are able to compare favourably to other state-of-the-art compact descriptors in similar bitranges, exceeding the highest retrieval results reported in the literature on some datasets. A dedicated dimensionality reduction step --quantization or hashing-- may be able to further improve the competitiveness of the descriptors. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Performance Improvement Over the Years <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB017
We present the improvement in retrieval accuracy over the past ten years in Fig. 6 and the numbers of some representative methods in Table 5. The results are computed using codebooks trained on independent datasets BIB002 . We can clearly observe that the field of instance retrieval has been improving constantly. The baseline approach (HKM) proposed over ten years ago yields only an mAP of 59.7 percent on Holidays, an N-S score of 2.85 on Ukbench, and mAPs of 44.3, 26.6, and 46.5 percent on Oxford5k, Oxford5k+Flickr100k, and Paris6k, respectively. Starting from the baseline approaches BIB001 , methods using large codebooks improve steadily as more discriminative codebooks BIB006 , spatial constraints BIB008 , and complementary descriptors BIB013 are introduced. For medium-sized codebooks, the most significant accuracy advance was witnessed in the years 2008-2010 with the introduction of Hamming Embedding BIB002 , BIB004 and its improvements BIB005 , BIB004 , BIB009 . Since then, major improvements have come from feature fusion BIB012 , BIB013 , BIB014 with color and CNN features, especially on the Holidays and Ukbench datasets. On the other hand, CNN-based retrieval models have quickly demonstrated their strength in instance retrieval. In 2012, when AlexNet BIB010 was introduced, the performance of off-the-shelf FC features was still far from satisfactory compared with the SIFT models of the same period. For example, the FC descriptor of AlexNet pre-trained on ImageNet yields an mAP of 64.2 percent on Holidays, an N-S score of 3.42 on Ukbench, and an mAP of 43.3 percent on Oxford5k; these numbers are lower than BIB008 by 13.85 percent on Holidays and 0.14 on Ukbench, and lower than BIB011 by 31.9 percent on Oxford5k. However, with advances in CNN architectures and fine-tuning strategies, the performance of CNN-based methods has been improving fast: they are competitive on the Holidays and Ukbench datasets BIB015 , BIB016 , and slightly lower on Oxford5k but with a much smaller memory cost BIB017 .

[Recovered table/caption fragments. Table 4 (partial): Ukbench, 10,200 images, 10,200 queries, common objects; Paris6k BIB003 , 6,412 images, 55 queries, buildings; Oxford5k BIB001 , 5,062 images, 55 queries, buildings; Flickr100k BIB003 , 99,782 images, from Flickr's popular tags. Fig. 6 caption: for each year, the best accuracy of each category is reported; for compact representations, results of 128-bit vectors are preferentially selected; the purple star denotes the results produced by 2,048-dim vectors BIB015 , the best performance among fine-tuned CNN methods; a pink asterisk denotes the use of rotated images on Holidays, full-sized queries on Oxford5k, or spatial verification and QE on Oxford5k (see Table 5). Table 5 notation: "+100k" denotes adding Flickr100k to Oxford5k; "pw." denotes power-law normalization BIB007 ; "MP"/"SP" denote max/sum pooling; "*" and parentheses denote results obtained with post-processing such as spatial verification or QE; "x" marks numbers estimated from curves and "y" numbers reported by our implementation; for Holidays, results using the rotated images are presented after "/", and for Oxford5k (+100k) and Paris6k, results using the full-sized queries are shown after "/"; "\" means the full query image is fed into the network but only the features whose centers fall into the query ROI are aggregated. Note that in many fixed-length representations, ANN algorithms such as PQ are not used to report the results, but ANN can readily be applied after PCA during indexing.]
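To make the two metrics in this comparison concrete: mAP is the mean of per-query average precision (see the sketch after the dataset descriptions above), while the Ukbench N-S score counts how many of the top-4 ranked images belong to the query's group, averaged over all 10,200 queries (4.0 is perfect). Below is a minimal sketch, assuming the ranked lists include the query image itself and that group membership is known; the data structures and names are illustrative.

```python
def ns_score(ranked_lists, group_of):
    """Ukbench N-S score: average number of same-group images among each query's top-4 results.

    ranked_lists: dict mapping query image id -> list of database image ids, best first
                  (the query itself is part of the database and normally appears at rank 1).
    group_of:     dict mapping image id -> group id (each Ukbench group has 4 images).
    """
    total = 0
    for query_id, ranking in ranked_lists.items():
        total += sum(group_of[img] == group_of[query_id] for img in ranking[:4])
    return total / len(ranked_lists)  # between 0 and 4; 4.0 means perfect retrieval
```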
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. 
Assembling these methods, we have obtained an mAP of 85.8% and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> We consider the design of a single vector representation for an image that embeds and aggregates a set of local patch descriptors such as SIFT. More specifically we aim to construct a dense representation, like the Fisher Vector or VLAD, though of small or intermediate size. We make two contributions, both aimed at regularizing the individual contributions of the local descriptors in the final representation. The first is a novel embedding method that avoids the dependency on absolute distances by encoding directions. The second contribution is a "democratization" strategy that further limits the interaction of unrelated descriptors in the aggregation stage. These methods are complementary and give a substantial performance boost over the state of the art in image search with short or mid-size vectors, as demonstrated by our experiments on standard public image retrieval benchmarks. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. 
The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> Recent works show that image comparison based on local descriptors is corrupted by visual bursts, which tend to dominate the image similarity. The existing strategies, like power-law normalization, improve the results by discounting the contribution of visual bursts to the image similarity. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. 
<s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Accuracy Comparisons <s> Visual search and image retrieval underpin numerous applications, however the task is still challenging predominantly due to the variability of object appearance and ever increasing size of the databases, often exceeding billions of images. Prior art methods rely on aggregation of local scale-invariant descriptors, such as SIFT, via mechanisms including Bag of Visual Words (BoW), Vector of Locally Aggregated Descriptors (VLAD) and Fisher Vectors (FV). However, their performance is still short of what is required. This paper presents a novel method for deriving a compact and distinctive representation of image content called Robust Visual Descriptor with Whitening (RVD-W). It significantly advances the state of the art and delivers world-class performance. In our approach local descriptors are rank-assigned to multiple clusters. Residual vectors are then computed in each cluster, normalized using a direction-preserving normalization function and aggregated based on the neighborhood rank. Importantly, the residual vectors are de-correlated and whitened in each cluster before aggregation, leading to a balanced energy distribution in each dimension and significantly improved performance. We also propose a new post-PCA normalization approach which improves separability between the matching and non-matching global descriptors. This new normalization benefits not only our RVD-W descriptor but also improves existing approaches based on FV and VLAD aggregation. Furthermore, we show that the aggregation framework developed using hand-crafted SIFT features also performs exceptionally well with Convolutional Neural Network (CNN) based features. The RVD-W pipeline outperforms state-of-the-art global descriptors on both the Holidays and Oxford datasets. On the large scale datasets, Holidays1M and Oxford1M, SIFT-based RVD-W representation obtains a mAP of 45.1 and 35.1 percent, while CNN-based RVD-W achieve a mAP of 63.5 and 44.8 percent, all yielding superior performance to the state-of-the-art. <s> BIB011
The retrieval accuracy of the different categories on the different datasets can be viewed in Fig. 6 and Tables 5 and 6. From these results, we arrive at three observations. First, among the SIFT-based methods, those with medium-sized codebooks BIB001 , BIB003 , BIB007 usually lead to superior (or at least competitive) performance, while those based on small codebooks (compact representations) BIB002 , BIB004 , BIB011 exhibit inferior accuracy. On the one hand, the visual words in medium-sized codebooks yield relatively high matching recall due to the large Voronoi cells, and the further integration of HE methods largely improves their discriminative ability, achieving a desirable trade-off between matching recall and precision. On the other hand, although the visual words in small codebooks have the highest matching recall, their discriminative ability is not significantly improved because of the aggregation procedure and the small dimensionality, so their performance can be compromised. Second, among the CNN-based categories, the fine-tuned category BIB005 , BIB009 , BIB010 is advantageous in specific tasks (such as landmark/scene retrieval) whose data distribution is similar to that of the training set. While this observation is expected, we find it interesting that the fine-tuned model proposed in BIB009 yields very competitive performance on generic retrieval (such as Ukbench), whose data distribution is distinct from that of the training set. In fact, Babenko et al. BIB005 show that CNN features fine-tuned on Landmarks compromise the accuracy on Ukbench. The generalization ability of BIB009 could be attributed to the effective training of the region proposal network. In comparison, methods using pre-trained models may exhibit high accuracy on Ukbench but only moderate performance on landmarks. Similarly, the hybrid methods have fair performance on all the tasks, though they may still encounter efficiency problems BIB006 , BIB008 . Third, comparing all six categories, the "CNN fine-tuned" and "SIFT mid voc." categories have the best overall accuracy, while the "SIFT small voc." category has relatively low accuracy.
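To illustrate the recall/precision trade-off behind the first observation, the sketch below mimics Hamming-Embedding-style matching: two local descriptors count as a match only if they are quantized to the same coarse visual word and their binary signatures lie within a Hamming threshold. The signature length, threshold value, and hard pass/fail test are simplifying assumptions; actual HE variants typically weight matches by a decreasing function of the Hamming distance.

```python
import numpy as np

def he_match(q_word, q_sig, db_word, db_sig, tau=24):
    """Hamming-Embedding-style test: same coarse visual word AND signature distance <= tau."""
    if q_word != db_word:                          # coarse quantization (medium-sized codebook)
        return False
    dist = int(np.count_nonzero(q_sig != db_sig))  # refine with, e.g., 64-bit binary signatures
    return dist <= tau                             # larger tau -> higher recall, lower precision

# Image similarity is then accumulated over all descriptor pairs that pass this test,
# usually with IDF and burstiness weighting in practice.
```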
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Given a query image of an object, our objective is to retrieve all instances of that object in a large (1M+) image database. We adopt the bag-of-visual-words architecture which has proven successful in achieving high precision at low recall. Unfortunately, feature detection and quantization are noisy processes and this can result in variation in the particular visual words that appear in different images of the same object, leading to missed results. In the text retrieval literature a standard method for improving performance is query expansion. A number of the highly ranked documents from the original query are reissued as a new query. In this way, additional relevant terms can be added to the query. This is a form of blind rele- vance feedback and it can fail if 'outlier' (false positive) documents are included in the reissued query. In this paper we bring query expansion into the visual domain via two novel contributions. Firstly, strong spatial constraints between the query image and each result allow us to accurately verify each return, suppressing the false positives which typically ruin text-based query expansion. Secondly, the verified images can be used to learn a latent feature model to enable the controlled construction of expanded queries. We illustrate these ideas on the 5000 annotated image Oxford building database together with more than 1M Flickr images. We show that the precision is substantially boosted, achieving total recall in many cases. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. 
The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Most effective particular object and image retrieval approaches are based on the bag-of-words (BoW) model. All state-of-the-art retrieval results have been achieved by methods that include a query expansion that brings a significant boost in performance. We introduce three extensions to automatic query expansion: (i) a method capable of preventing tf-idf failure caused by the presence of sets of correlated features (confusers), (ii) an improved spatial verification and re-ranking step that incrementally builds a statistical model of the query object and (iii) we learn relevant spatial context to boost retrieval performance. The three improvements of query expansion were evaluated on standard Paris and Oxford datasets according to a standard protocol, and state-of-the-art results were achieved. <s> BIB005 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> The paper addresses large scale image retrieval with short vector representations. We study dimensionality reduction by Principal Component Analysis (PCA) and propose improvements to its different phases. We show and explicitly exploit relations between i) mean subtrac- tion and the negative evidence, i.e., a visual word that is mutually miss- ing in two descriptions being compared, and ii) the axis de-correlation and the co-occurrences phenomenon. Finally, we propose an effective way to alleviate the quantization artifacts through a joint dimensionality re- duction of multiple vocabularies. The proposed techniques are simple, yet significantly and consistently improve over the state of the art on compact image representations. Complementary experiments in image classification show that the methods are generally applicable. <s> BIB006 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. 
We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets. <s> BIB007 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Exploiting local feature shape has made geometry indexing possible, but at a high cost of index space, while a sequential spatial verification and re-ranking stage is still indispensable for large scale image retrieval. In this work we investigate an accelerated approach for the latter problem. We develop a simple spatial matching model inspired by Hough voting in the transformation space, where votes arise from single feature correspondences. Using a histogram pyramid, we effectively compute pair-wise affinities of correspondences without ever enumerating all pairs. Our Hough pyramid matching algorithm is linear in the number of correspondences and allows for multiple matching surfaces or non-rigid objects under one-to-one mapping. We achieve re-ranking one order of magnitude more images at the same query time with superior performance compared to state of the art methods, while requiring the same index space. We show that soft assignment is compatible with this matching scheme, preserving one-to-one mapping and further increasing performance. <s> BIB008 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the OverFeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the OverFeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or L2 distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. 
The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks. <s> BIB009 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> This paper provides an extensive study on the availability of image representations based on convolutional networks (ConvNets) for the task of visual instance retrieval. Besides the choice of convolutional layers, we present an efficient pipeline exploiting multi-scale schemes to extract local features, in particular, by taking geometric invariance into explicit account, i.e. positions, scales and spatial consistency. In our experiments using five standard image retrieval datasets, we demonstrate that generic ConvNet image representations can outperform other state-of-the-art methods if they are extracted appropriately. <s> BIB010 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB011 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets. <s> BIB012 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> In this paper, we demonstrate that the essentials of image classification and retrieval are the same, since both tasks could be tackled by measuring the similarity between images. To this end, we propose ONE (Online Nearest-neighbor Estimation), a unified algorithm for both image classification and retrieval. ONE is surprisingly simple, which only involves manual object definition, regional description and nearest-neighbor search. 
We take advantage of PCA and PQ approximation and GPU parallelization to scale our algorithm up to large-scale image search. Experimental results verify that ONE achieves state-of-the-art accuracy in a wide range of image classification and retrieval benchmarks. <s> BIB013 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations. Yet such models are not compatible with geometry-aware re-ranking methods and still outperformed, on some particular object retrieval benchmarks, by traditional image search systems relying on precise descriptor matching, geometric re-ranking, or query expansion. This work revisits both retrieval stages, namely initial search and re-ranking, by employing the same primitive information derived from the CNN. We build compact feature vectors that encode several image regions without the need to feed multiple inputs to the network. Furthermore, we extend integral images to handle max-pooling on convolutional layer activations, allowing us to efficiently localize matching objects. The resulting bounding box is finally used for image re-ranking. As a result, this paper significantly improves existing CNN-based recognition pipeline: We report for the first time results competing with traditional methods on the challenging Oxford5k and Paris6k datasets. <s> BIB014 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Deep convolutional neural networks have been successfully applied to image classification tasks. When these same networks have been applied to image retrieval, the assumption has been made that the last layers would give the best performance, as they do in classification. We show that for instance-level image retrieval, lower layers often perform better than the last layers in convolutional neural networks. We present an approach for extracting convolutional features from different layers of the networks, and adopt VLAD encoding to encode features into a single vector for each image. We investigate the effect of different layers and scales of input images on the performance of convolutional features using the recent deep networks OxfordNet and GoogLeNet. Experiments demonstrate that intermediate layers or higher layers with finer scales produce better results for image retrieval, compared to the last layer. When using compressed 128-D VLAD descriptors, our method obtains state-of-the-art results and outperforms other VLAD and CNN based approaches on two out of three test datasets. Our work provides guidance for transferring deep networks trained on image classification to image retrieval tasks. <s> BIB015 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. 
This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB016 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Hough voting in a geometric transformation space allows us to realize spatial verification, but remains sensitive to feature detection errors because of the inflexible quantization of single feature correspondences. To handle this problem, we propose a new method, called adaptive dither voting, for robust spatial verification. For each correspondence, instead of hard-mapping it to a single transformation, the method augments its description by using multiple dithered transformations that are deterministically generated by the other correspondences. The method reduces the probability of losing correspondences during transformation quantization, and provides high robustness as regards mismatches by imposing three geometric constraints on the dithering process. We also propose exploiting the non-uniformity of a Hough histogram as the spatial similarity to handle multiple matching surfaces. Extensive experiments conducted on four datasets show the superiority of our method. The method outperforms its state-of-the-art counterparts in both accuracy and scalability, especially when it comes to the retrieval of small, rotated objects. <s> BIB017 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB018 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. 
In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB019 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> The objective of this paper is the effective transfer of the Convolutional Neural Network (CNN) feature in image search and classification. Systematically, we study three facts in CNN transfer. 1) We demonstrate the advantage of using images with a properly large size as input to CNN instead of the conventionally resized one. 2) We benchmark the performance of different CNN layers improved by average/max pooling on the feature maps. Our observation suggests that the Conv5 feature yields very competitive accuracy under such pooling step. 3) We find that the simple combination of pooled features extracted across various CNN layers is effective in collecting evidences from both low and high level descriptors. Following these good practices, we are capable of improving the state of the art on a number of benchmarks to a large margin. <s> BIB020 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Efficiency Comparisons <s> Spatial verification is a crucial part of every image retrieval system, as it accounts for the fact that geometric feature configurations are typically ignored by the Bag-of-Words representation. Since spatial verification quickly becomes the bottleneck of the retrieval process, runtime efficiency is extremely important. At the same time, spatial verification should be able to reliably distinguish between related and unrelated images. While methods based on RANSAC’s hypothesize-and-verify framework achieve high accuracy, they are not particularly efficient. Conversely, verification approaches based on Hough voting are extremely efficient but not as accurate. In this paper, we develop a novel spatial verification approach that uses an efficient voting scheme to identify promising transformation hypotheses that are subsequently verified and refined. Through comprehensive experiments, we show that our method is able to achieve a verification accuracy similar to state-of-the-art hypothesize-and-verify approaches while providing faster runtimes than state-of-the-art voting-based methods. <s> BIB021
Feature Computation Time. For the SIFT-based methods, the dominating step is local feature extraction. It usually takes 1-2 s on a CPU to extract Hessian-Affine region based SIFT descriptors from a 640×480 image, depending on the complexity (texture) of the image. For the CNN-based methods, a single forward pass of a 224×224 and a 1,024×768 image through VGG16 takes 0.082 s and 0.347 s on a TitanX card, respectively. It is reported in BIB018 that four images (with the largest side of 724 pixels) can be processed in one second. The encoding (VLAD or FV) of the pre-trained column features is very fast. For the CNN hybrid methods, extracting CNN features from tens of regions may take seconds. Overall, the CNN pre-trained and fine-tuned models are efficient in feature computation when GPUs are used; it should be noted, though, that high efficiency can also be achieved when GPUs are used for SIFT extraction. Retrieval Time. The efficiency of nearest neighbor search is high for "SIFT large voc.", "SIFT small voc.", "CNN pretrained" and "CNN fine-tuned", because the inverted lists are short for a properly trained large codebook, and because the latter three produce compact representations that can be accelerated by ANN search methods like PQ BIB004 . Efficiency for the medium-sized codebook is low because its inverted lists contain more postings than those of a large codebook, and the filtering effect of HE methods can only alleviate this problem to some extent. The retrieval complexity of the hybrid methods, as mentioned in Section 4.3, may suffer from the expensive many-to-many matching strategy BIB009 , BIB010 , BIB013 . Training Time. Training a large or medium-sized codebook usually takes several hours with AKM or HKM. Using small codebooks reduces the codebook training time. For the fine-tuned models, Gordo et al. BIB018 report using five days on a K40 GPU for the triplet-loss model. It may take less time for the siamese BIB019 or the classification models BIB011 , but it should still take much longer than SIFT codebook generation. Therefore, in terms of training, the methods using direct pooling BIB014 , BIB020 or small codebooks BIB003 , BIB015 are more time-efficient. Memory Cost. Table 5 and Fig. 8 show that the SIFT methods with large codebooks and the compact representations are both memory-efficient. Moreover, the compact representations can be compressed into compact codes BIB006 using PQ or other competing quantization/hashing methods, so their memory consumption can be further reduced. In comparison, the methods using medium-sized codebooks are the most memory-consuming because binary signatures have to be stored in the inverted index. The hybrid methods have mixed memory cost: the many-to-many strategy requires storing a number of region descriptors per image BIB009 , BIB013 , while some others employ efficient encoding methods BIB012 , BIB016 . Spatial Verification and Query Expansion. Spatial verification, which provides refined rank lists, is often used in conjunction with QE. The RANSAC verification proposed in BIB001 has a complexity of O(z^2), where z is the number of matched features, so it is computationally expensive. The ADV approach BIB017 is less expensive, with O(z log z) complexity, due to its ability to avoid unrelated Hough votes. The most efficient methods are BIB008 , BIB021 , which have a complexity of O(z); BIB021 further outputs the transformation and inliers for QE.
From the perspective of query expansion, since new queries are issued, search efficiency is compromised. For example, AQE BIB002 almost doubles the search time due to the second-round search. In BIB007 , BIB005 , the proposed improvements only add marginal cost compared with performing another search, so their complexity is similar to that of the basic QE methods.
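To make the compact-code discussion above more concrete, the following is a minimal sketch (not the implementation of any cited system) of product quantization in the spirit of BIB004: a global descriptor is split into subvectors, each subvector is quantized with its own small codebook, and asymmetric distances between a raw query and the stored codes are computed from per-subspace look-up tables. All sizes here (128-D vectors, 8 sub-quantizers, 256 centroids, i.e., 8 bytes per image) are illustrative assumptions.

```python
import numpy as np

def train_pq(X, n_sub=8, k=256, n_iter=15, seed=0):
    """Train one small k-means codebook per subspace (toy Lloyd iterations)."""
    rng = np.random.default_rng(seed)
    d_sub = X.shape[1] // n_sub
    codebooks = []
    for s in range(n_sub):
        sub = X[:, s * d_sub:(s + 1) * d_sub]
        centers = sub[rng.choice(len(sub), k, replace=False)]
        for _ in range(n_iter):
            assign = np.argmin(((sub[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            for c in range(k):
                members = sub[assign == c]
                if len(members):
                    centers[c] = members.mean(axis=0)
        codebooks.append(centers)
    return codebooks

def pq_encode(X, codebooks):
    """Compress each vector into n_sub one-byte centroid indices."""
    d_sub = X.shape[1] // len(codebooks)
    codes = np.empty((len(X), len(codebooks)), dtype=np.uint8)
    for s, centers in enumerate(codebooks):
        sub = X[:, s * d_sub:(s + 1) * d_sub]
        codes[:, s] = np.argmin(((sub[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return codes

def pq_search(query, codes, codebooks, topk=10):
    """Asymmetric distance computation: the query stays uncompressed."""
    d_sub = query.shape[0] // len(codebooks)
    dist = np.zeros(len(codes))
    for s, centers in enumerate(codebooks):
        table = ((centers - query[s * d_sub:(s + 1) * d_sub]) ** 2).sum(axis=1)
        dist += table[codes[:, s]]          # one table look-up per subspace
    return np.argsort(dist)[:topk]

# Toy usage: 5,000 database vectors of 128-D stored with 8 bytes each.
X = np.random.randn(5000, 128).astype(np.float32)
codebooks = train_pq(X[:2000])              # learn the codebooks on a subset
codes = pq_encode(X, codebooks)             # 5000 x 8 uint8 codes
neighbors = pq_search(X[0], codes, codebooks)
```

With 8 bytes per image, a 10-million-image database occupies roughly 80 MB of codes (plus the index structure), which is the kind of memory footprint the compact-code methods above target.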
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Important Parameters <s> In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, "web-scale " image corpora. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Important Parameters <s> This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Important Parameters <s> Convolutional Neural Network (CNN) features have been successfully employed in recent works as an image descriptor for various vision tasks. But the inability of the deep CNN features to exhibit invariance to geometric transformations and object compositions poses a great challenge for image search. In this work, we demonstrate the effectiveness of the objectness prior over the deep CNN features of image regions for obtaining an invariant image representation. The proposed approach represents the image as a vector of pooled CNN features describing the underlying objects. This representation provides robustness to spatial layout of the objects in the scene and achieves invariance to general geometric transformations, such as translation, rotation and scaling. 
The proposed approach also leads to a compact representation of the scene, making each image occupy a smaller memory footprint. Experiments show that the proposed representation achieves state of the art retrieval results on a set of challenging benchmark image datasets, while maintaining a compact representation. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Important Parameters <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB004
We summarize the impact of codebook size on SIFT methods using large/medium-sized codebooks, and the impact of dimensionality on compact representations, including SIFT methods with small codebooks and CNN-based methods. Codebook Size. The mAP results on Oxford5k are shown in Fig. 9 , where methods using large/medium-sized codebooks are compared. Two observations can be made. First, mAP usually increases with the codebook size but may reach saturation once the codebook is large enough. This is because a larger codebook improves matching precision; however, if it is too large, matching recall drops, leading to saturated or even compromised performance BIB001 . Second, methods using medium-sized codebooks have more stable performance when the codebook size changes. This can be attributed to HE BIB002 , which contributes more for a smaller codebook, compensating for the lower baseline performance. Dimensionality. The impact of dimensionality on compact vectors is presented in Fig. 7 . Our first finding is that retrieval accuracy usually remains stable at higher dimensions and drops quickly when the dimensionality falls below 256 or 128. Our second finding favors the methods based on region proposals BIB004 , BIB003 : these methods demonstrate very competitive performance under various feature lengths, probably due to their superior ability in object localization.
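As a concrete illustration of the dimensionality axis discussed above, here is a minimal sketch of PCA whitening followed by re-normalization, the kind of post-processing commonly applied to shorten compact representations before retrieval. The sizes (512-D pooled descriptors reduced to 128-D, a held-out training set) are illustrative assumptions, not the exact procedure of any cited method.

```python
import numpy as np

def learn_pca_whitening(X_train, dim=128, eps=1e-6):
    """Learn mean, rotation and per-axis scaling on held-out descriptors."""
    mean = X_train.mean(axis=0)
    Xc = X_train - mean
    cov = Xc.T @ Xc / len(Xc)
    eigval, eigvec = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:dim]          # keep the top-`dim` axes
    P = eigvec[:, order] / np.sqrt(eigval[order] + eps)
    return mean, P

def project(X, mean, P):
    """Reduce, whiten and re-normalize descriptors to unit L2 norm."""
    Y = (X - mean) @ P
    return Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)

# Toy usage: 512-D pooled descriptors shortened to 128-D before retrieval.
X_train = np.random.randn(2000, 512).astype(np.float32)   # held-out set
X_db = np.random.randn(500, 512).astype(np.float32)       # database images
mean, P = learn_pca_whitening(X_train, dim=128)
db_short = project(X_db, mean, P)     # 500 x 128, compared with dot products
```

Sweeping `dim` over, e.g., {512, 256, 128, 64, 32} on such short vectors is the type of experiment behind curves like Fig. 7: accuracy is typically flat at the high end and degrades once the dimensionality drops below 128-256.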
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Generic Instance Retrieval <s> It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Generic Instance Retrieval <s> Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Generic Instance Retrieval <s> We propose a novel approach for instance-level image retrieval. It produces a global and compact fixed-length representation for each image by aggregating many region-wise descriptors. In contrast to previous works employing pre-trained deep networks as a black box to produce features, our method leverages a deep architecture trained for the specific task of image retrieval. Our contribution is twofold: (i) we leverage a ranking framework to learn convolution and projection weights that are used to build the region features; and (ii) we employ a region proposal network to learn which regions should be pooled to form the final global descriptor. We show that using clean training data is key to the success of our approach. To that aim, we use a large scale but noisy landmark dataset and develop an automatic cleaning approach. The proposed architecture produces a global image representation in a single forward pass. Our approach significantly outperforms previous approaches based on global descriptors on standard datasets. It even surpasses most prior works based on costly local descriptor indexing and spatial verification. Additional material is available at www.xrce.xerox.com/Deep-Image-Retrieval. <s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Generic Instance Retrieval <s> Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. 
We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. <s> BIB004
A critical direction is to make the search engine applicable to generic search purposes. Towards this goal, two important issues should be addressed. First, large-scale instance-level datasets need to be introduced. While several instance datasets have been released, as shown in Table 3 , they usually contain a particular type of instance, such as landmarks or indoor objects. Although the RPN structure used by Gordo et al. BIB003 has proven competitive on Ukbench in addition to the building datasets, it remains unknown whether training CNNs on more generic datasets will bring further improvement. Therefore, the community is in great need of large-scale instance-level datasets, or of efficient methods for generating such datasets in either a supervised or unsupervised manner. Second, designing new CNN architectures and learning methods is important for fully exploiting the training data. Previous works employ standard classification BIB001 , pairwise-loss BIB004 , or triplet-loss BIB003 , BIB002 CNN models for fine-tuning. The introduction of Faster R-CNN to instance retrieval is a promising starting point towards more accurate object localization BIB003 . Moreover, transfer learning methods are also important when adopting a fine-tuned model in another retrieval task.
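As a small illustration of the ranking-based fine-tuning objectives mentioned above, the following is a minimal numpy sketch of a triplet ranking loss computed on L2-normalized embeddings. The margin, embedding size, and the way positives and negatives are formed are illustrative assumptions rather than the settings of the cited works; in practice the loss is backpropagated through the CNN and hard negatives are mined.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Unit-normalize embeddings so Euclidean distance relates to cosine similarity."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Hinge on the gap between positive and negative squared distances.

    anchor, positive, negative: (batch, dim) embeddings of the query image,
    a matching image of the same instance, and a non-matching image.
    """
    a, p, n = map(l2_normalize, (anchor, positive, negative))
    d_pos = ((a - p) ** 2).sum(axis=1)
    d_neg = ((a - n) ** 2).sum(axis=1)
    return np.maximum(0.0, margin + d_pos - d_neg).mean()

# Toy usage with random stand-ins for CNN embeddings.
rng = np.random.default_rng(0)
anchor = rng.normal(size=(32, 512))
positive = anchor + 0.05 * rng.normal(size=(32, 512))   # same instance, mild noise
negative = rng.normal(size=(32, 512))                   # ideally a mined hard negative
print(triplet_loss(anchor, positive, negative))
```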
SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> Detecting logos in photos is challenging. A reason is that logos locally resemble patterns frequently seen in random images. We propose to learn a statistical model for the distribution of incorrect detections output by an image matching algorithm. It results in a novel scoring criterion in which the weight of correlated keypoint matches is reduced, penalizing irrelevant logo detections. In experiments on two very different logo retrieval benchmarks, our approach largely improves over the standard matching criterion as well as other state-of-the-art approaches. <s> BIB001 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> This paper contributes a new high quality dataset for person re-identification, named "Market-1501". Generally, current datasets: 1) are limited in scale, 2) consist of hand-drawn bboxes, which are unavailable under realistic settings, 3) have only one ground truth and one query image for each identity (close environment). To tackle these problems, the proposed Market-1501 dataset is featured in three aspects. First, it contains over 32,000 annotated bboxes, plus a distractor set of over 500K images, making it the largest person re-id dataset to date. Second, images in Market-1501 dataset are produced using the Deformable Part Model (DPM) as pedestrian detector. Third, our dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, this paper proposes an unsupervised Bag-of-Words descriptor. We view person re-identification as a special task of image search. In experiment, we show that the proposed descriptor yields competitive accuracy on VIPeR, CUHK03, and Market-1501 datasets, and is scalable on the large-scale 500k dataset. <s> BIB002 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> Vehicle, as a significant object class in urban surveillance, attracts massive focuses in computer vision field, such as detection, tracking, and classification. Among them, vehicle re-identification (Re-Id) is an important yet frontier topic, which not only faces the challenges of enormous intra-class and subtle inter-class differences of vehicles in multicameras, but also suffers from the complicated environments in urban surveillance scenarios. Besides, the existing vehicle related datasets all neglect the requirements of vehicle Re-Id: 1) massive vehicles captured in real-world traffic environment; and 2) applicable recurrence rate to give cross-camera vehicle search for vehicle Re-Id. To facilitate vehicle Re-Id research, we propose a large-scale benchmark dataset for vehicle Re-Id in the real-world urban surveillance scenario, named “VeRi”. It contains over 40,000 bounding boxes of 619 vehicles captured by 20 cameras in unconstrained traffic scene. Moreover, each vehicle is captured by 2∼18 cameras in different viewpoints, illuminations, and resolutions to provide high recurrence rate for vehicle Re-Id. Finally, we evaluate six competitive vehicle Re-Id methods on VeRi and propose a baseline which combines the color, texture, and highlevel semantic information extracted by deep neural network. 
<s> BIB003 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the “Vector of Locally Aggregated Descriptors” image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks. <s> BIB004 </s> SIFT Meets CNN: A Decade Survey of Instance Retrieval <s> Towards Specialized Instance Retrieval <s> We address the problem of large-scale visual place recognition for situations where the scene undergoes a major change in appearance, for example, due to illumination (day/night), change of seasons, aging, or structural modifications over time such as buildings being built or destroyed. Such situations represent a major challenge for current large-scale place recognition methods. This work has the following three principal contributions. First, we demonstrate that matching across large changes in the scene appearance becomes much easier when both the query image and the database image depict the scene from approximately the same viewpoint. Second, based on this observation, we develop a new place recognition approach that combines (i) an efficient synthesis of novel views with (ii) a compact indexable image representation. Third, we introduce a new challenging dataset of 1,125 camera-phone query images of Tokyo that contain major changes in illumination (day, sunset, night) as well as structural changes in the scene. We demonstrate that the proposed approach significantly outperforms other large-scale place recognition techniques on this challenging data. <s> BIB005
At the other end, there is also increasing interest in specialized instance retrieval. Examples include place retrieval BIB005 , pedestrian retrieval BIB002 , vehicle retrieval BIB003 , logo retrieval BIB001 , etc. Images in these tasks carry task-specific prior knowledge that can be exploited. For example, in pedestrian retrieval, a recurrent neural network (RNN) can be employed to pool body-part or patch descriptors. In vehicle retrieval, view information can be inferred during feature learning, and the license plate can also provide critical information when captured at a short distance. Meanwhile, the process of training data collection can be further explored. For example, training images of different places can be collected via Google Street View BIB004 , while vehicle images can be obtained from surveillance videos or internet images. Exploring new learning strategies on these specialized datasets and studying the transfer effect would be interesting. Finally, compact vectors or short codes will also be important in realistic retrieval settings.
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> We report that human walks performed in outdoor settings of tens of kilometers resemble a truncated form of Levy walks commonly observed in animals such as monkeys, birds and jackals. Our study is based on about one thousand hours of GPS traces involving 44 volunteers in various outdoor settings including two different college campuses, a metropolitan area, a theme park and a state fair. This paper shows that many statistical features of human walks follow truncated power-law, showing evidence of scale-freedom and do not conform to the central limit theorem. These traits are similar to those of Levy walks. It is conjectured that the truncation, which makes the mobility deviate from pure Levy walks, comes from geographical constraints including walk boundary, physical obstructions and traffic. None of commonly used mobility models for mobile networks captures these properties. Based on these findings, we construct a simple Levy walk mobility model which is versatile enough in emulating diverse statistical patterns of human walks observed in our traces. The model is also used to recreate similar power-law inter-contact time distributions observed in previous human mobility studies. Our network simulation indicates that the Levy walk features are important in characterizing the performance of mobile network routing performance. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of user-driven footprints (i.e., "checkins"). Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. We find that: (i) LSS users follow the “Levy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. 
As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years, resulting in large amounts of geographical and social data. Such LBSN data provide an unprecedented opportunity to study the human movement from their socio-spatial behavior, in order to improve location-based applications like location recommendation. As users can check-in at new places, traditional work on location prediction that relies on mining a user's historical moving trajectories fails as it is not designed for the cold-start problem of recommending new check-ins. While previous work on LBSNs attempting to utilize a user's social connections for location recommendation observed limited help from social network information. 
In this work, we propose to address the cold-start location recommendation problem by capturing the correlations between social networks and geographical distance on LBSNs with a geo-social correlation model. The experimental results on a real-world LBSN dataset demonstrate that our approach properly models the geo-social correlations of a user's cold-start check-ins and significantly improves the location recommendation performance. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Location recommendation plays an essential role in helping people find places they are likely to enjoy. Though some recent research has studied how to recommend locations with the presence of social network and geographical information, few of them addressed the cold-start problem, specifically, recommending locations for new users. Because the visits to locations are often shared on social networks, rich semantics (e.g., tweets) that reveal a person's interests can be leveraged to tackle this challenge. A typical way is to feed them into traditional explicit-feedback content-aware recommendation methods (e.g., LibFM). As a user's negative preferences are not explicitly observable in most human mobility data, these methods need draw negative samples for better learning performance. However, prior studies have empirically shown that sampling-based methods don't perform as well as a method that considers all unvisited locations as negative but assigns them a lower confidence. To this end, we propose an Implicit-feedback based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and steer clear of negative sampling. For efficient parameter learning, we develop a scalable optimization algorithm, scaling linearly with the data size and the feature size. Furthermore, we offer a good explanation to ICCF, such that the semantic content is actually used to refine user similarity based on mobility. Finally, we evaluate ICCF with a large-scale LBSN dataset where users have profiles and text content. The results show that ICCF outperforms LibFM of the best configuration, and that user profiles and text content are not only effective at improving recommendation but also helpful for coping with the cold-start problem. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> While on the go, people are using their phones as a personal concierge discovering what is around and deciding what to do. Mobile phone has become a recommendation terminal customized for individuals—capable of recommending activities and simplifying the accomplishment of related tasks. In this article, we conduct usage mining on the check-in data, with summarized statistics identifying the local recommendation challenges of huge solution space, sparse available data, and complicated user intent, and discovered observations to motivate the hierarchical, contextual, and sequential solution. We present a point-of-interest (POI) category-transition--based approach, with a goal of estimating the visiting probability of a series of successive POIs conditioned on current user context and sensor context. A mobile local recommendation demo application is deployed. The objective and subjective evaluations validate the effectiveness in providing mobile users both accurate recommendation and favorable user experience. 
<s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> As location-based social networks (LBSNs) rapidly grow, it is a timely topic to study how to recommend users with interesting locations, known as points-of-interest (POIs). Most existing POI recommendation techniques only employ the check-in data of users in LBSNs to learn their preferences on POIs by assuming a user's check-in frequency to a POI explicitly reflects the level of her preference on the POI. However, in reality users usually visit POIs only once, so the users' check-ins may not be sufficient to derive their preferences using their check-in frequencies only. Actually, the preferences of users are exactly implied in their opinions in text-based tips commenting on POIs. In this paper, we propose an opinion-based POI recommendation framework called ORec to take full advantage of the user opinions on POIs expressed as tips. In ORec, there are two main challenges: (i) detecting the polarities of tips (positive, neutral or negative), and (ii) integrating them with check-in data including social links between users and geographical information of POIs. To address these two challenges, (1) we develop a supervised aspect-dependent approach to detect the polarity of a tip, and (2) we devise a method to fuse tip polarities with social links and geographical information into a unified POI recommendation framework. Finally, we conduct a comprehensive performance evaluation for ORec using two large-scale real data sets collected from Foursquare and Yelp. Experimental results show that ORec achieves significantly superior polarity detection and POI recommendation accuracy compared to other state-of-the-art polarity detection and POI recommendation techniques. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Given the abundance of online information available to mobile users, particularly tourists and weekend travelers, recommender systems that effectively filter this information and suggest interesting participatory opportunities will become increasingly important. Previous work has explored recommending interesting locations; however, users would also benefit from recommendations for activities in which to participate at those locations along with suitable times and days. Thus, systems that provide collaborative recommendations involving multiple dimensions such as location, activities and time would enhance the overall experience of users.The relationship among these dimensions can be modeled by higher-order matrices called tensors which are then solved by tensor factorization. However, these tensors can be extremely sparse. In this paper, we present a system and an approach for performing multi-dimensional collaborative recommendations for Who (User), What (Activity), When (Time) and Where (Location), using tensor factorization on sparse user-generated data. We formulate an objective function which simultaneously factorizes coupled tensors and matrices constructed from heterogeneous data sources. We evaluate our system and approach on large-scale real world data sets consisting of 588,000 Flickr photos collected from three major metro regions in USA. We compare our approach with several state-of-the-art baselines and demonstrate that it outperforms all of them. 
<s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> With the recent surge of location based social networks (LBSNs), activity data of millions of users has become attainable. This data contains not only spatial and temporal stamps of user activity, but also its semantic information. LBSNs can help to understand mobile users’ spatial temporal activity preference (STAP), which can enable a wide range of ubiquitous applications, such as personalized context-aware location recommendation and group-oriented advertisement. However, modeling such user-specific STAP needs to tackle high-dimensional data, i.e., user-location-time-activity quadruples, which is complicated and usually suffers from a data sparsity problem. In order to address this problem, we propose a STAP model. It first models the spatial and temporal activity preference separately, and then uses a principle way to combine them for preference inference. In order to characterize the impact of spatial features on user activity preference, we propose the notion of personal functional region and related parameters to model and infer user spatial activity preference. In order to model the user temporal activity preference with sparse user activity data in LBSNs, we propose to exploit the temporal activity similarity among different users and apply nonnegative tensor factorization to collaboratively infer temporal activity preference. Finally, we put forward a context-aware fusion framework to combine the spatial and temporal activity preference models for preference inference. We evaluate our proposed approach on three real-world datasets collected from New York and Tokyo, and show that our STAP model consistently outperforms the baseline approaches in various settings. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important way of helping users discover interesting locations to increase their engagement with location-based services. Although human movement exhibits sequential patterns in LBSNs, most current studies on spatial item recommendations do not consider the sequential influence of locations. Leveraging sequential patterns in spatial item recommendation is, however, very challenging, considering 1) users' check-in data in LBSNs has a low sampling rate in both space and time, which renders existing prediction techniques on GPS trajectories ineffective; 2) the prediction space is extremely large, with millions of distinct locations as the next prediction target, which impedes the application of classical Markov chain models; and 3) there is no existing framework that unifies users' personal interests and the sequential influence in a principled manner. In light of the above challenges, we propose a sequential personalized spatial item recommendation framework (SPORE) which introduces a novel latent variable topic-region to model and fuse sequential influence with personal interests in the latent and exponential space. The advantages of modeling the sequential effect at the topic-region level include a significantly reduced prediction space, an effective alleviation of data sparsity and a direct expression of the semantic meaning of users' spatial activities. 
Furthermore, we design an asymmetric Locality Sensitive Hashing (ALSH) technique to speed up the online top-k recommendation process by extending the traditional LSH. We evaluate the performance of SPORE on two real datasets and one large-scale synthetic dataset. The results demonstrate a significant improvement in SPORE's ability to recommend spatial items, in terms of both effectiveness and efficiency, compared with the state-of-the-art methods. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Point-of-Interest (POI) recommendation has become an important means to help people discover attractive and interesting places, especially when users travel out of town. However, the extreme sparsity of a user-POI matrix creates a severe challenge. To cope with this challenge, we propose a unified probabilistic generative model, the Topic-Region Model (TRM), to simultaneously discover the semantic, temporal, and spatial patterns of users’ check-in activities, and to model their joint effect on users’ decision making for selection of POIs to visit. To demonstrate the applicability and flexibility of TRM, we investigate how it supports two recommendation scenarios in a unified way, that is, hometown recommendation and out-of-town recommendation. TRM effectively overcomes data sparsity by the complementarity and mutual enhancement of the diverse information associated with users’ check-in activities (e.g., check-in content, time, and location) in the processes of discovering heterogeneous patterns and producing recommendations. To support real-time POI recommendations, we further extend the TRM model to an online learning model, TRM-Online, to track changing user interests and speed up the model training. In addition, based on the learned model, we propose a clustering-based branch and bound algorithm (CBB) to prune the POI search space and facilitate fast retrieval of the top-k recommendations. ::: We conduct extensive experiments to evaluate the performance of our proposals on two real-world datasets, including recommendation effectiveness, overcoming the cold-start problem, recommendation efficiency, and model-training efficiency. The experimental results demonstrate the superiority of our TRM models, especially TRM-Online, compared with state-of-the-art competitive methods, by making more effective and efficient mobile recommendations. In addition, we study the importance of each type of pattern in the two recommendation scenarios, respectively, and find that exploiting temporal patterns is most important for the hometown recommendation scenario, while the semantic patterns play a dominant role in improving the recommendation effectiveness for out-of-town users. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. 
To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation model about 20% in Precision@5 and Recall@5. <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Introduction <s> Social community detection is a growing field of interest in the area of social network applications, and many approaches have been developed, including graph partitioning, latent space model, block model and spectral clustering. Most existing work purely focuses on network structure information which is, however, often sparse, noisy and lack of interpretability. To improve the accuracy and interpretability of community discovery, we propose to infer users' social communities by incorporating their spatiotemporal data and semantic information. Technically, we propose a unified probabilistic generative model, User-Community-Geo-Topic (UCGT), to simulate the generative process of communities as a result of network proximities, spatiotemporal co-occurrences and semantic similarity. With a well-designed multi-component model structure and a parallel inference implementation to leverage the power of multicores and clusters, our UCGT model is expressive while remaining efficient and scalable to growing large-scale geo-social networking data. We deploy UCGT to two application scenarios of user behavior predictions: check-in prediction and social interaction prediction. Extensive experiments on two large-scale geo-social networking datasets show that UCGT achieves better performance than existing state-of-the-art comparison methods. <s> BIB014
Location-based social networks (LBSNs) such as Foursquare, Facebook Places, and Yelp have become popular owing to the explosive increase of smartphones, and the rapid adoption of smartphones has in turn fueled the prosperity of online LBSNs. As of June 2016, Foursquare had collected more than 8 billion check-ins and more than 65 million place shapes mapping businesses around the world, and over 55 million people worldwide use the Foursquare service each month BIB003 . LBSNs collect users' check-in information, including the geographical information (latitude and longitude) of visited locations and users' tips at those locations. LBSNs also allow users to make friends and share information. Figure 1 demonstrates a typical LBSN, exhibiting the interactions (e.g., check-in activity) between users and POIs and the interactions (friendship) among users. In order to improve user experience in LBSNs, point-of-interest (POI) recommendation has been proposed, which suggests new places for users to visit by mining their check-in records and social relationships. POI recommendation is one of the most important tasks in LBSNs, helping users discover new and interesting locations. It typically mines users' check-in records, venue information such as categories, and users' social relationships to recommend a list of POIs that users are most likely to check in at in the future. POI recommendation not only improves user stickiness for LBSN service providers, but also gives advertising agencies an effective way of delivering advertisements to potential consumers. For example, users can explore nearby restaurants and downtown shopping malls in Foursquare, while merchants can make themselves easy for users to find through POI recommendation. Owing to the convenience for users and the business opportunities for merchants, POI recommendation has attracted intensive attention, and a number of POI recommendation systems have been proposed recently BIB006 BIB007 BIB011 BIB012 BIB008 BIB013 . POI recommendation is a branch of recommender systems, so it is natural to borrow ideas from conventional recommendation tasks such as movie recommendation and to employ conventional techniques, e.g., collaborative filtering methods. However, the fact that location bridges the physical world and online networking services poses new challenges to traditional recommendation techniques. We summarize the main challenges as follows. 1. Physical constraints: Check-in activity is limited by physical constraints, in contrast to shopping online at Amazon or watching movies on Netflix. For one thing, users in LBSNs check in within geographically constrained areas; for another, shops provide services only during limited hours. Such physical constraints make check-in activity in LBSNs exhibit significant spatial and temporal properties BIB009 BIB002 BIB004 BIB005 BIB001 BIB010 BIB014 .
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Internet-based recommender systems have traditionally employed collaborative filtering techniques to deliver relevant "digital" results to users. In the mobile Internet however, recommendations typically involve "physical" entities (e.g., restaurants), requiring additional user effort for fulfillment. Thus, in addition to the inherent requirements of high scalability and low latency, we must also take into account a "convenience" metric in making recommendations. In this paper, we propose an enhanced collaborative filtering solution that uses location as a key criterion for generating recommendations. We frame the discussion in the context of our "restaurant recommender" system, and describe preliminary results that indicate the utility of such an approach. We conclude with a look at open issues in this space, and motivate a future discussion on the business impact and implications of mining the data in such systems. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The advance of location-acquisition technologies enables people to record their location histories with spatio-temporal datasets, which imply the correlation between geographical regions. This correlation indicates the relationship between locations in the space of human behavior, and can enable many valuable services, such as sales promotion and location recommendation. In this paper, by taking into account a user's travel experience and the sequentiality locations have been visited, we propose an approach to mine the correlation between locations from a large number of users' location histories. We conducted a personalized location recommendation system using the location correlation, and evaluated this system with a large-scale real-world GPS dataset. As a result, our method outperforms the related work using the Pearson correlation. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Online Social Networks (OSNs) are increasingly becoming one of the key media of communication over the Internet. The potential of these services as the basis to gather statistics and exploit information about user behavior is appealing and, as a consequence, the number of applications developed for these purposes has been soaring. At the same time, users are now willing to share information about their location, allowing for the study of the role of geographic distance in social ties. ::: ::: In this paper we present a graph analysis based approach to study social networks with geographic information and new metrics to characterize how geographic distance affects social structure. We apply our analysis to four large-scale OSN datasets: our results show that there is a vast portion of users with short-distance links and that clusters of friends are often geographically close. In addition, we demonstrate that different social networking services exhibit different geo-social properties: OSNs based mainly on location-advertising largely foster local ties and clusters, while services used mainly for news and content sharing present more connections and clusters on longer distances. 
The results of this work can be exploited to improve many classes of systems and a potential vast number of applications, as we illustrate by means of some practical examples. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Link prediction systems have been largely adopted to recommend new friends in online social networks using data about social interactions. With the soaring adoption of location-based social services it becomes possible to take advantage of an additional source of information: the places people visit. In this paper we study the problem of designing a link prediction system for online location-based social networks. We have gathered extensive data about one of these services, Gowalla, with periodic snapshots to capture its temporal evolution. We study the link prediction space, finding that about 30% of new links are added among "place-friends", i.e., among users who visit the same places. We show how this prediction space can be made 15 times smaller, while still 66% of future connections can be discovered. Thus, we define new prediction features based on the properties of the places visited by users which are able to discriminate potential future links among them. Building on these findings, we describe a supervised learning framework which exploits these prediction features to predict new links among friends-of-friends and place-friends. Our evaluation shows how the inclusion of information about places and related user activity offers high link prediction performance. These results open new directions for real-world link recommendation systems on location-based social networks. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The advance of GPS-enabled devices allows people to record their location histories with GPS traces, which imply human behaviors and preferences related to travel. 
In this article, we perform two types of travel recommendations by mining multiple users' GPS traces. The first is a generic one that recommends a user with top interesting locations and travel sequences in a given geospatial region. The second is a personalized recommendation that provides an individual with locations matching her travel preferences. To achieve the first recommendation, we model multiple users' location histories with a tree-based hierarchical graph (TBHG). Based on the TBHG, we propose a HITS (Hypertext Induced Topic Search)-based model to infer the interest level of a location and a user's travel experience (knowledge). In the personalized recommendation, we first understand the correlation between locations, and then incorporate this correlation into a collaborative filtering (CF)-based model, which predicts a user's interests in an unvisited location based on her locations histories and that of others. We evaluated our system based on a real-world GPS trace dataset collected by 107 users over a period of one year. As a result, our HITS-based inference model outperformed baseline approaches like rank-by-count and rank-by-frequency. Meanwhile, we achieved a better performance in recommending travel sequences beyond baselines like rank-by-count. Regarding the personalized recommendation, our approach is more effective than the weighted Slope One algorithm with a slightly additional computation, and is more efficient than the Pearson correlation-based CF model with the similar effectiveness. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The development of a city gradually fosters different functional regions, such as educational areas and business districts. In this paper, we propose a framework (titled DRoF) that Discovers Regions of different Functions in a city using both human mobility among regions and points of interests (POIs) located in a region. Specifically, we segment a city into disjointed regions according to major roads, such as highways and urban express ways. We infer the functions of each region using a topic-based inference model, which regards a region as a document, a function as a topic, categories of POIs (e.g., restaurants and shopping malls) as metadata (like authors, affiliations, and key words), and human mobility patterns (when people reach/leave a region and where people come from and leave for) as words. As a result, a region is represented by a distribution of functions, and a function is featured by a distribution of mobility patterns. We further identify the intensity of each function in different locations. The results generated by our framework can benefit a variety of applications, including urban planning, location choosing for a business, and social recommendations. We evaluated our method using large-scale and real-world datasets, consisting of two POI datasets of Beijing (in 2010 and 2011) and two 3-month GPS trajectory datasets (representing human mobility) generated by over 12,000 taxicabs in Beijing in 2010 and 2011 respectively. The results justify the advantages of our approach over baseline methods solely using POIs or human mobility. 
<s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user’s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user’s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user’s check-in behavior. In particular, our model captures the property of user’s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user’s check-in behavior. The experimental results on a real world LBSN demonstrate that our approach properly models user’s checkins and shows how social and historical ties can help location prediction. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> In this paper, we propose a new model to integrate additional data, which is obtained from geospatial resources other than original data set in order to improve Location/Activity recommendations. The data set that is used in this work is a GPS trajectory of some users, which is gathered over 2 years. In order to have more accurate predictions and recommendations, we present a model that injects additional information to the main data set and we aim to apply a mathematical method on the merged data. On the merged data set, singular value decomposition technique is applied to extract latent relations. Several tests have been conducted, and the results of our proposed method are compared with a similar work for the same data set. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Mobile location-based services are thriving, providing an unprecedented opportunity to collect fine grained spatio-temporal data about the places users visit. This multi-dimensional source of data offers new possibilities to tackle established research problems on human mobility, but it also opens avenues for the development of novel mobile applications and services. In this work we study the problem of predicting the next venue a mobile user will visit, by exploring the predictive power offered by different facets of user behavior. We first analyze about 35 million check-ins made by about 1 million Foursquare users in over 5 million venues across the globe, spanning a period of five months. We then propose a set of features that aim to capture the factors that may drive users' movements. Our features exploit information on transitions between types of places, mobility flows between venues, and spatio-temporal characteristics of user check-in patterns. 
We further extend our study combining all individual features in two supervised learning models, based on linear regression and M5 model trees, resulting in a higher overall prediction accuracy. We find that the supervised methodology based on the combination of multiple features offers the highest levels of prediction accuracy: M5 model trees are able to rank in the top fifty venues one in two user check-ins, amongst thousands of candidate items in the prediction list. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The wide spread use of location based social networks (LBSNs) has enabled the opportunities for better location based services through Point-of-Interest (POI) recommendation. Indeed, the problem of POI recommendation is to provide personalized recommendations of places of interest. Unlike traditional recommendation tasks, POI recommendation is personalized, locationaware, and context depended. In light of this difference, this paper proposes a topic and location aware POI recommender system by exploiting associated textual and context information. Specifically, we first exploit an aggregated latent Dirichlet allocation (LDA) model to learn the interest topics of users and to infer the interest POIs by mining textual information associated with POIs. Then, a Topic and Location-aware probabilistic matrix factorization (TL-PMF) method is proposed for POI recommendation. A unique perspective of TL-PMF is to consider both the extent to which a user interest matches the POI in terms of topic distribution and the word-of-mouth opinions of the POIs. Finally, experiments on real-world LBSNs data show that the proposed recommendation method outperforms state-of-the-art probabilistic latent factor models with a significant margin. Also, we have studied the impact of personalized interest topics and word-of-mouth opinions on POI recommendations. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> ocation-based social networks (LBSNs) are one kind of online social networks offering geographic services and have been attracting much attention in recent years. LBSNs usually have complex structures, involving heterogeneous nodes and links. Many recommendation services in LBSNs (e.g., friend and location recommendation) can be cast as link prediction problems (e.g., social link and location link prediction). Traditional link prediction researches on LBSNs mostly focus on predicting either social links or location links, assuming the prediction tasks of different types of links to be independent. However, in many real-world LBSNs, the prediction tasks for social links and location links are strongly correlated and mutually influential. Another key challenge in link prediction on LBSNs is the data sparsity problem (i.e., "new network" problem), which can be encountered when LBSNs branch into new geographic areas or social groups. Actually, nowadays, many users are involved in multiple networks simultaneously and users who just join one LBSN may have been using other LBSNs for a long time. In this paper, we study the problem of predicting multiple types of links simultaneously for a new LBSN across partially aligned LBSNs and propose a novel method TRAIL (TRAnsfer heterogeneous lInks across LBSNs). 
TRAIL can accumulate information for locations from online posts and extract heterogeneous features for both social links and location links. TRAIL can predict multiple types of links simultaneously. In addition, TRAIL can transfer information from other aligned networks to the new network to solve the problem of lacking information. Extensive experiments conducted on two real-world aligned LBSNs show that TRAIL can achieve very good performance and substantially outperform the baseline methods. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach. <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The availability of user check-in data in large volume from the rapid growing location-based social networks (LBSNs) enables a number of important location-aware services. Point-of-interest (POI) recommendation is one of such services, which is to recommend POIs that users have not visited before. It has been observed that: (i) users tend to visit nearby places, and (ii) users tend to visit different places in different time slots, and in the same time slot, users tend to periodically visit the same places. For example, users usually visit a restaurant during lunch hours, and visit a pub at night. In this paper, we focus on the problem of time-aware POI recommendation, which aims at recommending a list of POIs for a user to visit at a given time. To exploit both geographical and temporal influences in time aware POI recommendation, we propose the Geographical-Temporal influences Aware Graph (GTAG) to model check-in records, geographical influence and temporal influence. For effective and efficient recommendation based on GTAG, we develop a preference propagation algorithm named Breadth first Preference Propagation (BPP). 
The algorithm follows a relaxed breadth-first search strategy, and returns recommendation results within at most 6 propagation steps. Our experimental results on two real-world datasets show that the proposed graph-based approach outperforms state-of-the-art POI recommendation methods substantially. <s> BIB014 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Social media provides valuable resources to analyze user behaviors and capture user preferences. This article focuses on analyzing user behaviors in social media systems and designing a latent class statistical mixture model, named temporal context-aware mixture model (TCAM), to account for the intentions and preferences behind user behaviors. Based on the observation that the behaviors of a user in social media systems are generally influenced by intrinsic interest as well as the temporal context (e.g., the public's attention at that time), TCAM simultaneously models the topics related to users' intrinsic interests and the topics related to temporal context and then combines the influences from the two factors to model user behaviors in a unified way. Considering that users' interests are not always stable and may change over time, we extend TCAM to a dynamic temporal context-aware mixture model (DTCAM) to capture users' changing interests. To alleviate the problem of data sparsity, we exploit the social and temporal correlation information by integrating a social-temporal regularization framework into the DTCAM model. To further improve the performance of our proposed models (TCAM and DTCAM), an item-weighting scheme is proposed to enable them to favor items that better represent topics related to user interests and topics related to temporal context, respectively. Based on our proposed models, we design a temporal context-aware recommender system (TCARS). To speed up the process of producing the top-k recommendations from large-scale social media data, we develop an efficient query-processing technique to support TCARS. Extensive experiments have been conducted to evaluate the performance of our models on four real-world datasets crawled from different social media sites. The experimental results demonstrate the superiority of our models, compared with the state-of-the-art competitor methods, by modeling user behaviors more precisely and making more effective and efficient recommendations. <s> BIB015 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> A point of interest (POI) is a specific location that people may find useful or interesting. Examples include restaurants, stores, attractions, and hotels. With recent proliferation of location-based social networks (LBSNs), numerous users are gathered to share information on various POIs and to interact with each other. POI recommendation is then a crucial issue because it not only helps users to explore potential places but also gives LBSN providers a chance to post POI advertisements. As we utilize a heterogeneous information network to represent a LBSN in this work, POI recommendation is remodeled as a link prediction problem, which is significant in the field of social network analysis. Moreover, we propose to utilize the meta-path-based approach to extract implicit (but potentially useful) relationships between a user and a POI.
Then, the extracted topological features are used to construct a prediction model with appropriate data classification techniques. In our experimental studies, the Yelp dataset is utilized as our testbed for performance evaluation purposes. Results of the experiments show that our prediction model is of good prediction quality in practical applications. <s> BIB016 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> Mobility prediction enables appealing proactive experiences for location-aware services and offers essential intelligence to business and governments. Recent studies suggest that human mobility is highly regular and predictable. Additionally, social conformity theory indicates that people's movements are influenced by others. However, existing approaches for location prediction fail to organically combine both the regularity and conformity of human mobility in a unified model, and lack the capacity to incorporate heterogeneous mobility datasets to boost prediction performance. To address these challenges, in this paper we propose a hybrid predictive model integrating both the regularity and conformity of human mobility as well as their mutual reinforcement. In addition, we further elevate the predictive power of our model by learning location profiles from heterogeneous mobility datasets based on a gravity model. We evaluate the proposed model using several city-scale mobility datasets including location check-ins, GPS trajectories of taxis, and public transit data. The experimental results validate that our model significantly outperforms state-of-the-art approaches for mobility prediction in terms of multiple metrics such as accuracy and percentile rank. The results also suggest that the predictability of human mobility is time-varying, e.g., the overall predictability is higher on workdays than holidays while predicting users' unvisited locations is more challenging for workdays than holidays. <s> BIB017 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> The problem of point of interest (POI) recommendation is to provide personalized recommendations of places, such as restaurants and movie theaters. The increasing prevalence of mobile devices and of location based social networks (LBSNs) poses significant new opportunities as well as challenges, which we address. The decision process for a user to choose a POI is complex and can be influenced by numerous factors, such as personal preferences, geographical considerations, and user mobility behaviors. This is further complicated by the connection LBSNs and mobile devices. While there are some studies on POI recommendations, they lack an integrated analysis of the joint effect of multiple factors. Meanwhile, although latent factor models have been proved effective and are thus widely used for recommendations, adopting them to POI recommendations requires delicate consideration of the unique characteristics of LBSNs. To this end, in this paper, we propose a general geographical probabilistic factor model ( $\sf{Geo}$ -PFM) framework which strategically takes various factors into consideration. Specifically, this framework allows to capture the geographical influences on a user’s check-in behavior. Also, user mobility behaviors can be effectively leveraged in the recommendation model. 
Moreover, based on our $\sf{Geo}$-PFM framework, we further develop a Poisson $\sf{Geo}$-PFM which provides a more rigorous probabilistic generative process for the entire model and is effective in modeling the skewed user check-in count data as implicit feedback for better POI recommendations. Finally, extensive experimental results on three real-world LBSN datasets (which differ in terms of user mobility, POI geographical distribution, implicit response data skewness, and user-POI observation sparsity), show that the proposed recommendation methods outperform state-of-the-art latent factor models by a significant margin. <s> BIB018 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> While on the go, people are using their phones as a personal concierge discovering what is around and deciding what to do. Mobile phone has become a recommendation terminal customized for individuals—capable of recommending activities and simplifying the accomplishment of related tasks. In this article, we conduct usage mining on the check-in data, with summarized statistics identifying the local recommendation challenges of huge solution space, sparse available data, and complicated user intent, and discovered observations to motivate the hierarchical, contextual, and sequential solution. We present a point-of-interest (POI) category-transition-based approach, with a goal of estimating the visiting probability of a series of successive POIs conditioned on current user context and sensor context. A mobile local recommendation demo application is deployed. The objective and subjective evaluations validate the effectiveness in providing mobile users both accurate recommendation and favorable user experience. <s> BIB019 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Complex relations: For online social media services such as Twitter and <s> With the rapid development of mobile devices, global positioning system (GPS) and Web 2.0 technologies, location-based social networks (LBSNs) have attracted millions of users to share rich information, such as experiences and tips. Point-of-Interest (POI) recommender system plays an important role in LBSNs since it can help users explore attractive locations as well as help social network service providers design location-aware advertisements for Point-of-Interest. In this paper, we present a brief survey over the task of Point-of-Interest recommendation in LBSNs and discuss some research directions for Point-of-Interest recommendation. We first describe the unique characteristics of Point-of-Interest recommendation, which distinguish Point-of-Interest recommendation approaches from traditional recommendation approaches. Then, according to what type of additional information are integrated with check-in data by POI recommendation algorithms, we classify POI recommendation algorithms into four categories: pure check-in data based POI recommendation approaches, geographical influence enhanced POI recommendation approaches, social influence enhanced POI recommendation approaches and temporal influence enhanced POI recommendation approaches. Finally, we discuss future research directions for Point-of-Interest recommendation. <s> BIB020
Facebook, location is a new type of object, which yields new relations between locations BIB007 and between users and locations BIB008 BIB009 BIB015 . In addition, location-sharing activities alter the relations between users, since people are apt to make new friends with their geographical neighbors BIB003 BIB005 . 3. Heterogeneous information: LBSNs consist of different kinds of information, including not only check-in records, the geographical information of locations, and venue descriptions, but also users' social relations and media information (e.g., user comments and tweets). This heterogeneous information depicts user activity from a variety of perspectives BIB016 BIB017 BIB012 , inspiring POI recommendation systems of different kinds BIB018 BIB013 BIB011 BIB010 BIB019 BIB014 . A number of studies have been carried out to address this significant but challenging problem of POI recommendation. Ye et al. BIB004 first proposed POI recommendation for LBSNs such as Foursquare and Gowalla. Since then, more than 50 papers on the problem have been published in top conferences and journals, including SIGKDD, SIGIR, IJCAI, AAAI, WWW, CIKM, ICDM, RecSys, TIST, TKDE, and so forth. Table 1 shows statistics on the literature. Some studies related to POI recommendation, such as restaurant recommendation systems BIB001 and location recommendation from GPS trajectories BIB002 BIB006 BIB009 , are based on other types of data and are beyond our scope. In this survey, we focus on POI recommendation for LBSNs. We go beyond the latest survey BIB020 in this field in both depth and scope: 1) Yu et al. BIB020 categorize POI recommendation only according to influential factors, whereas we present taxonomies from three perspectives; 2) we incorporate more studies, especially systems built on joint models and recently published papers; 3) we discuss trends and new directions in this field. We follow the scheme shown in Fig. 2 to review academic progress in the area of POI recommendation. (Table 1, Statistics on the literature; paper counts per conference venue over 2010-2016: AAAI 1 1 3; IJCAI 1 1 1; ICDE 2; ICDM 1 1 2; WWW 1; KDD 1 2 1 1 2; SIGIR 1 1 4; SIGSPATIAL 1 1 2 1; CIKM 1 1 2.) We categorize POI recommendation systems in three aspects: influential factors, methodology, and task. More specifically, we discuss four types of influential factors: geographical influence, social influence, temporal influence, and content indications. In addition, we categorize the methodologies for POI recommendation into fused models and joint models. Moreover, we categorize POI recommendation systems into general POI recommendation and successive POI recommendation according to a subtle difference in task, namely whether the recommendation is inclined toward the most recent check-in. The remainder of this paper is organized as follows. Section 2 gives the problem definition. Section 3 describes the influential factors for POI recommendation. Next, Sections 4 and 5 present POI recommendation systems categorized by methodology and by task, respectively. Then, Section 6 introduces data sources and metrics for evaluating system performance. Further, Section 7 points out trends and new directions in the POI recommendation area. Finally, Section 8 concludes the paper.
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> The website wheresgeorge.com invites its users to enter the serial numbers of their US dollar bills and track them across America and beyond. Why? “For fun and because it had not been done yet”, they say. But the dataset accumulated since December 1998 has provided the ideal raw material to test the mathematical laws underlying human travel, and that has important implications for the epidemiology of infectious diseases. Analysis of the trajectories of over half a million dollar bills shows that human dispersal is described by a ‘two-parameter continuous-time random walk’ model: our travel habits conform to a type of random proliferation known as ‘superdiffusion’. And with that much established, it should soon be possible to develop a new class of models to account for the spread of human disease. The dynamic spatial redistribution of individuals is a key driving force of various spatiotemporal phenomena on geographical scales. It can synchronize populations of interacting species, stabilize them, and diversify gene pools1,2,3. Human travel, for example, is responsible for the geographical spread of human infectious disease4,5,6,7,8,9. In the light of increasing international trade, intensified human mobility and the imminent threat of an influenza A epidemic10, the knowledge of dynamical and statistical properties of human travel is of fundamental importance. Despite its crucial role, a quantitative assessment of these properties on geographical scales remains elusive, and the assumption that humans disperse diffusively still prevails in models. Here we report on a solid and quantitative assessment of human travelling statistics by analysing the circulation of bank notes in the United States. Using a comprehensive data set of over a million individual displacements, we find that dispersal is anomalous in two ways. First, the distribution of travelling distances decays as a power law, indicating that trajectories of bank notes are reminiscent of scale-free random walks known as Levy flights. Second, the probability of remaining in a small, spatially confined region for a time T is dominated by algebraically long tails that attenuate the superdiffusive spread. We show that human travelling behaviour can be described mathematically on many spatiotemporal scales by a two-parameter continuous-time random walk model to a surprising accuracy, and conclude that human travel on geographical scales is an ambivalent and effectively superdiffusive process. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Despite their importance for urban planning, traffic forecasting and the spread of biological and mobile viruses, our understanding of the basic laws governing human motion remains limited owing to the lack of tools to monitor the time-resolved location of individuals. Here we study the trajectory of 100,000 anonymized mobile phone users whose position is tracked for a six-month period. We find that, in contrast with the random trajectories predicted by the prevailing Lévy flight and random walk models, human trajectories show a high degree of temporal and spatial regularity, each individual being characterized by a time-independent characteristic travel distance and a significant probability to return to a few highly frequented locations. 
After correcting for differences in travel distances and the inherent anisotropy of each trajectory, the individual travel patterns collapse into a single spatial probability distribution, indicating that, despite the diversity of their travel history, humans follow simple reproducible patterns. This inherent similarity in travel patterns could impact all phenomena driven by human mobility, from epidemic prevention to emergency response, urban planning and agent-based modelling. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> We report that human walks performed in outdoor settings of tens of kilometers resemble a truncated form of Levy walks commonly observed in animals such as monkeys, birds and jackals. Our study is based on about one thousand hours of GPS traces involving 44 volunteers in various outdoor settings including two different college campuses, a metropolitan area, a theme park and a state fair. This paper shows that many statistical features of human walks follow truncated power-law, showing evidence of scale-freedom and do not conform to the central limit theorem. These traits are similar to those of Levy walks. It is conjectured that the truncation, which makes the mobility deviate from pure Levy walks, comes from geographical constraints including walk boundary, physical obstructions and traffic. None of commonly used mobility models for mobile networks captures these properties. Based on these findings, we construct a simple Levy walk mobility model which is versatile enough in emulating diverse statistical patterns of human walks observed in our traces. The model is also used to recreate similar power-law inter-contact time distributions observed in previous human mobility studies. Our network simulation indicates that the Levy walk features are important in characterizing the performance of mobile network routing performance. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. 
<s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user. In this introductory chapter we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured way, the chapters included in this handbook and to help the reader navigate the extremely rich and detailed content that the handbook offers. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not effected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. 
The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, \eg visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> With the rapidly growing location-based social networks (LBSNs), personalized geo-social recommendation becomes an important feature for LBSNs. Personalized geo-social recommendation not only helps users explore new places but also makes LBSNs more prevalent to users. In LBSNs, aside from user preference and social influence, geographical influence has also been intensively exploited in the process of location recommendation based on the fact that geographical proximity significantly affects users' check-in behaviors. 
Although geographical influence on users should be personalized, current studies only model the geographical influence on all users' check-in behaviors in a universal way. In this paper, we propose a new framework called iGSLR to exploit personalized social and geographical influence on location recommendation. iGSLR uses a kernel density estimation approach to personalize the geographical influence on users' check-in behaviors as individual distributions rather than a universal distribution for all users. Furthermore, user preference, social influence, and personalized geographical influence are integrated into a unified geo-social recommendation framework. We conduct a comprehensive performance evaluation for iGSLR using two large-scale real data sets collected from Foursquare and Gowalla which are two of the most popular LBSNs. Experimental results show that iGSLR provides significantly superior location recommendation compared to other state-of-the-art geo-social recommendation techniques. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Point-of-Interest POI recommendation is a significant service for location-based social networks LBSNs. It recommends new places such as clubs, restaurants, and coffee bars to users. Whether recommended locations meet users' interests depends on three factors: user preference, social influence, and geographical influence. Hence extracting the information from users' check-in records is the key to POI recommendation in LBSNs. Capturing user preference and social influence is relatively easy since it is analogical to the methods in a movie recommender system. However, it is a new topic to capture geographical influence. Previous studies indicate that check-in locations disperse around several centers and we are able to employ Gaussian distribution based models to approximate users' check-in behaviors. Yet centers discovering methods are dissatisfactory. In this paper, we propose two models--Gaussian mixture model GMM and genetic algorithm based Gaussian mixture model GA-GMM to capture geographical influence. More specifically, we exploit GMM to automatically learn users' activity centers; further we utilize GA-GMM to improve GMM by eliminating outliers. Experimental results on a real-world LBSN dataset show that GMM beats several popular geographical capturing models in terms of POI recommendation, while GA-GMM excludes the effect of outliers and enhances GMM. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. 
Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Geographical characteristics derived from the historical check-in data have been reported effective in improving location recommendation accuracy. However, previous studies mainly exploit geographical characteristics from a user's perspective, via modeling the geographical distribution of each individual user's check-ins. In this paper, we are interested in exploiting geographical characteristics from a location perspective, by modeling the geographical neighborhood of a location. The neighborhood is modeled at two levels: the instance-level neighborhood defined by a few nearest neighbors of the location, and the region-level neighborhood for the geographical region where the location exists. We propose a novel recommendation approach, namely Instance-Region Neighborhood Matrix Factorization (IRenMF), which exploits two levels of geographical neighborhood characteristics: a) instance-level characteristics, i.e., nearest neighboring locations tend to share more similar user preferences; and b) region-level characteristics, i.e., locations in the same geographical region may share similar user preferences. In IRenMF, the two levels of geographical characteristics are naturally incorporated into the learning of latent features of users and locations, so that IRenMF predicts users' preferences on locations more accurately. Extensive experiments on the real data collected from Gowalla, a popular LBSN, demonstrate the effectiveness and advantages of our approach. <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Geographical Influence <s> Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. 
First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques. <s> BIB014
Geographical influence is an important factor that distinguishes POI recommendation from traditional item recommendation, because check-in behavior depends on the geographical features of locations. Analysis of users' check-in data shows that a user acts in geographically constrained areas and prefers to visit POIs near those where the user has already checked in. Several studies BIB008 BIB012 BIB013 BIB005 BIB009 BIB010 BIB014 BIB011 attempt to employ the geographical influence to improve POI recommendation systems. In particular, three representative models are proposed to capture the geographical influence in POI recommendation: the power law distribution model, the Gaussian distribution model, and the kernel density estimation model.

Fig. 5 Power law distribution pattern BIB005

In BIB005 , Ye et al. employ a power law distribution model to capture the geographical influence. The power law distribution pattern has been observed in human mobility, such as withdrawal activities at ATMs and travel between cities BIB001 BIB002 BIB003 . Ye et al. also discover a similar pattern in users' check-in activity in LBSNs BIB004 BIB005 . Figure 5 demonstrates the co-occurrence probability of two POIs as a function of the distance between them. Based on the power law distribution observed in Figure 5 , the geographical influence can be modeled as follows. The co-occurrence probability y of two POIs checked in by the same user is formulated as

y = a \cdot x^{b},  (1)

where x denotes the distance between the two POIs, and a and b are parameters of the power-law distribution. Here, a and b should be learned from the observed check-in data, depicting the geographical feature of the check-in activity. A standard way to learn the parameters a and b is to transform Eq. (1) into a linear equation via a logarithmic operation and then estimate them by fitting a linear regression. On the basis of the geographical influence model depicted by the power law distribution, new POIs can be suggested according to the following formula. Given the set L_i of POIs that user u_i has checked in before, the probability of u_i visiting POI l_j is formulated as

\Pr[l_j \mid L_i] = \prod_{l_y \in L_i} \Pr[d(l_j, l_y)],  (2)

where d(l_j, l_y) denotes the distance between POIs l_j and l_y, and \Pr[d(l_j, l_y)] is computed from the power-law distribution in Eq. (1). In BIB004 BIB005 , Ye et al. leverage the power law distribution to model the geographical influence and combine it with collaborative filtering techniques BIB006 to recommend POIs. In addition, Yuan et al. BIB009 also adopt the power law distribution model, but learn the parameters using a Bayesian rule instead.

Fig. 6 Check-in distribution in multi-centers BIB007

The second type of geographical influence model is a series of Gaussian distribution based methods. Cho et al. BIB007 observe that users in LBSNs always act around some activity centers, e.g., home and office, as shown in Fig. 6. Further, Cheng et al. BIB008 propose a Multi-center Gaussian Model (MGM) to capture the geographical influence for POI recommendation. Given the multi-center set C_u, the probability of user u visiting POI l is defined by

P(l \mid C_u) = \sum_{c_u \in C_u} P(l \in c_u) \cdot \frac{f_{c_u}^{\alpha}}{\sum_{c \in C_u} f_{c}^{\alpha}} \cdot N(l \mid \mu_{c_u}, \Sigma_{c_u}),  (3)

where P(l \in c_u) is the probability of the POI l belonging to the center c_u, the middle fraction denotes the normalized effect of the check-in frequency f_{c_u} on the center c_u with the parameter \alpha maintaining the frequency aversion property, and N(l \mid \mu_{c_u}, \Sigma_{c_u}) is the probability density function of the Gaussian distribution with mean \mu_{c_u} and covariance matrix \Sigma_{c_u}. Specifically, the MGM employs a greedy clustering algorithm on the check-in data to find the user activity centers, which may result in an unbalanced assignment of POIs to different activity centers. Hence, Zhao et al.
BIB011 propose a genetic algorithm based Gaussian mixture model to capture the geographical influence, which outperforms the MGM in POI recommendation.

Fig. 7 Distributions of personal check-in locations BIB010

The third type of geographical model is the kernel density estimation (KDE) model. In order to mine the personalized geographical influence, Zhang et al. BIB010 argue that the geographical influence on each individual user should be personalized rather than modeled through a common distribution, e.g., the power law distribution BIB005 or the MGM BIB008 . As shown in Fig. 7, it is hard to model different users with the same distribution. To this end, they leverage kernel density estimation to model the geographical influence using a personalized distance distribution for each user. Specifically, the kernel density estimation model consists of two steps: distance sample collection and distance distribution estimation. The distance sample collection step generates a sample X_u for each user by computing the distance between every pair of locations visited by the user. Then, the distance distribution is estimated through the probability density function f over a distance d,

f(d) = \frac{1}{|X_u| \, \sigma} \sum_{d_i \in X_u} K\!\left(\frac{d - d_i}{\sigma}\right),  (4)

where K(\cdot) is a kernel function (e.g., the standard normal kernel) and \sigma is a smoothing parameter, called the bandwidth. Denote L_u = \{l_1, l_2, \ldots, l_n\} as the visited locations of user u. The probability of user u visiting a new POI l_j given the checked-in POI set L_u is defined as

P(l_j \mid L_u) = \frac{1}{n} \sum_{l_i \in L_u} f(d_{ij}),  (5)

where d_{ij} is the distance between l_i and l_j, and f(\cdot) is the distance distribution function in Eq. (4).
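To make the power-law geographical model above concrete, the following Python sketch fits the parameters a and b of Eq. (1) by linear regression in log-log space and scores a candidate POI against a user's check-in history as in Eq. (2). This is a minimal illustration, not the implementation of the cited papers; the haversine helper, the distance clipping value, and the variable names are assumptions.

import numpy as np

def haversine(p, q):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (p[0], p[1], q[0], q[1]))
    h = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(h))

def fit_power_law(distances, cooccur_probs):
    """Fit y = a * x^b by least squares in log-log space (Eq. (1)):
    log y = log a + b * log x, so the slope is b and the intercept is log a."""
    b, log_a = np.polyfit(np.log10(distances), np.log10(cooccur_probs), deg=1)
    return 10.0 ** log_a, b

def geo_score(candidate, visited, a, b):
    """Eq. (2): multiply the power-law probabilities between the candidate POI and
    every checked-in POI; values are clipped to (0, 1] and tiny distances floored."""
    probs = [min(1.0, a * max(haversine(candidate, l), 0.1) ** b) for l in visited]
    return float(np.prod(probs))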
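The personalized KDE model of BIB010 can be sketched in the same spirit, reusing the haversine helper above. Under the assumption of a standard normal kernel and Silverman's rule-of-thumb bandwidth (both are illustrative choices, not necessarily those of the original paper), the two steps, distance sample collection (Eq. (4)) and scoring a candidate POI (Eq. (5)), look as follows.

import numpy as np
from itertools import combinations

def distance_sample(visited):
    """Step 1: pairwise distances between all POIs the user has visited (X_u)."""
    return np.array([haversine(p, q) for p, q in combinations(visited, 2)])

def kde_density(d, sample, bandwidth=None):
    """Step 2: Eq. (4) with a standard normal kernel K.
    The bandwidth defaults to Silverman's rule of thumb (an assumption)."""
    if bandwidth is None:
        bandwidth = 1.06 * sample.std() * len(sample) ** (-0.2)
    u = (d - sample) / bandwidth
    return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)) / bandwidth

def kde_score(candidate, visited, sample):
    """Eq. (5): average density over the distances from the candidate to visited POIs."""
    return float(np.mean([kde_density(haversine(candidate, l), sample) for l in visited]))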
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Recommender Systems based on Collaborative Filtering suggest to users items they might like. However due to data sparsity of the input ratings matrix, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network and to estimate a trust weight that can be used in place of the similarity weight. An empirical evaluation on Epinions.com dataset shows that Recommender Systems that make use of trust information are the most effective in term of accuracy while preserving a good coverage. This is especially evident on users who provided few ratings. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Data sparsity, scalability and prediction quality have been recognized as the three most crucial challenges that every collaborative filtering algorithm or recommender system confronts. Many existing approaches to recommender systems can neither handle very large datasets nor easily deal with users who have made very few ratings or even none at all. Moreover, traditional recommender systems assume that all the users are independent and identically distributed; this assumption ignores the social interactions or connections among users. In view of the exponential growth of information generated by online social networks, social network analysis is becoming important for many Web applications. Following the intuition that a person's social network will affect personal behaviors on the Web, this paper proposes a factor analysis approach based on probabilistic matrix factorization to solve the data sparsity and poor prediction accuracy problems by employing both users' social network information and rating records. The complexity analysis indicates that our approach can be applied to very large datasets since it scales linearly with the number of observations, while the experimental results shows that our method performs much better than the state-of-the-art approaches, especially in the circumstance that users have made few or no ratings. 
<s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Recommender systems are becoming tools of choice to select the online information relevant to a given user. Collaborative filtering is the most popular approach to building recommender systems and has been successfully employed in many applications. With the advent of online social networks, the social network based approach to recommendation has emerged. This approach assumes a social network among users and makes recommendations for a user based on the ratings of the users that have direct or indirect social relations with the given user. As one of their major benefits, social network based approaches have been shown to reduce the problems with cold start users. In this paper, we explore a model-based approach for recommendation in social networks, employing matrix factorization techniques. 
Advancing previous work, we incorporate the mechanism of trust propagation into the model. Trust propagation has been shown to be a crucial phenomenon in the social sciences, in social network analysis and in trust-based recommendation. We have conducted experiments on two real life data sets, the public domain Epinions.com dataset and a much larger dataset that we have recently crawled from Flixster.com. Our experiments demonstrate that modeling trust propagation leads to a substantial increase in recommendation accuracy, in particular for cold start users. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Although Recommender Systems have been comprehensively analyzed in the past decade, the study of social-based recommender systems just started. In this paper, aiming at providing a general method for improving recommender systems by incorporating social network information, we propose a matrix factorization framework with social regularization. The contributions of this paper are four-fold: (1) We elaborate how social network information can benefit recommender systems; (2) We interpret the differences between social-based recommender systems and trust-aware recommender systems; (3) We coin the term Social Regularization to represent the social constraints on recommender systems, and we systematically illustrate how to design a matrix factorization objective function with social regularization; and (4) The proposed method is quite general, which can be easily extended to incorporate other contextual information, like social tags, etc. The empirical analysis on two large datasets demonstrates that our approaches outperform other state-of-the-art methods. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. 
<s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user’s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user’s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user’s check-in behavior. In particular, our model captures the property of user’s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user’s check-in behavior. The experimental results on a real world LBSN demonstrate that our approach properly models user’s checkins and shows how social and historical ties can help location prediction. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Although online recommendation systems such as recommendation of movies or music have been systematically studied in the past decade, location recommendation in Location Based Social Networks (LBSNs) is not well investigated yet. In LBSNs, users can check in and leave tips commenting on a venue. These two heterogeneous data sources both describe users' preference of venues. However, in current research work, only users' check-in behavior is considered in users' location preference model, users' tips on venues are seldom investigated yet. Moreover, while existing work mainly considers social influence in recommendation, we argue that considering venue similarity can further improve the recommendation performance. In this research, we ameliorate location recommendation by enhancing not only the user location preference model but also recommendation algorithm. 
First, we propose a hybrid user location preference model by combining the preference extracted from check-ins and text-based tips which are processed using sentiment analysis techniques. Second, we develop a location based social matrix factorization algorithm that takes both user social influence and venue similarity influence into account in location recommendation. Using two datasets extracted from the location based social networks Foursquare, experiment results demonstrate that the proposed hybrid preference model can better characterize user preference by maintaining the preference consistency, and the proposed algorithm outperforms the state-of-the-art methods. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Providing location recommendations becomes an important feature for location-based social networks (LBSNs), since it helps users explore new places and makes LBSNs more prevalent to users. In LBSNs, geographical influence and social influence have been intensively used in location recommendations based on the facts that geographical proximity of locations significantly affects users' check-in behaviors and social friends often have common interests. Although human movement exhibits sequential patterns, most current studies on location recommendations do not consider any sequential influence of locations on users' check-in behaviors. In this paper, we propose a new approach called LORE to exploit sequential influence on location recommendations. First, LORE incrementally mines sequential patterns from location sequences and represents the sequential patterns as a dynamic Location-Location Transition Graph (L2TG). LORE then predicts the probability of a user visiting a location by Additive Markov Chain (AMC) with L2TG. Finally, LORE fuses sequential influence with geographical influence and social influence into a unified recommendation framework; in particular the geographical influence is modeled as two-dimensional check-in probability distributions rather than one-dimensional distance probability distributions in existing works. We conduct a comprehensive performance evaluation for LORE using two large-scale real data sets collected from Foursquare and Gowalla. Experimental results show that LORE achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. 
First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques. <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Social Influence <s> The emergence of Location-based Social Network (LBSN) services provides a wonderful opportunity to build personalized Point-of-Interest (POI) recommender systems. Although a personalized POI recommender system can significantly facilitate users' outdoor activities, it faces many challenging problems, such as the hardness to model user's POI decision making process and the difficulty to address data sparsity and user/location cold-start problem. To cope with these challenges, we define three types of friends (i.e., social friends, location friends, and neighboring friends) in LBSN, and develop a two-step framework to leverage the information of friends to improve POI recommendation accuracy and address cold-start problem. Specifically, we first propose to learn a set of potential locations that each individual's friends have checked-in before and this individual is most interested in. Then we incorporate three types of check-ins (i.e., observed check-ins, potential check-ins and other unobserved check-ins) into matrix factorization model using two different loss functions (i.e., the square error based loss and the ranking error based loss). To evaluate the proposed model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results demonstrate the effectiveness of our methods. <s> BIB014
Inspired by the assumption that friends in LBSNs share more common interests than non-friends, social influence is explored to enhance POI recommendation BIB008 BIB009 BIB010 BIB014 BIB005 BIB011 BIB013 BIB012 . In fact, employing social influence to enhance recommendation has already been explored in traditional recommendation systems, both in memory-based methods BIB004 BIB001 and model-based methods BIB006 BIB003 BIB007 . Researchers borrow these ideas from traditional recommendation systems and adapt them to POI recommendation. In the following, we present representative work capturing social influence in two aspects: memory-based and model-based.

Ye et al. BIB005 propose a memory-based model, the friend-based collaborative filtering (FCF) approach, for POI recommendation. The FCF model constrains user-based collaborative filtering to search for the top similar users among a user's friends rather than among all users of the LBSN. Hence, the preference r_{ij} of user u_i at POI l_j is calculated as

r_{ij} = \frac{\sum_{u_k \in F_i} w_{ik} \cdot c_{kj}}{\sum_{u_k \in F_i} w_{ik}},  (6)

where F_i is the set of friends with top-n similarity, w_{ik} is the similarity weight between u_i and u_k, and c_{kj} is the check-in frequency of friend u_k at POI l_j. FCF enhances efficiency by reducing the computation cost of finding the top similar users. However, it overlooks non-friends who share many common check-ins with the target user. Experimental results show that FCF brings very limited improvements over user-based POI recommendation in terms of precision.

Cheng et al. BIB008 apply probabilistic matrix factorization with social regularization (PMFSR) BIB007 to POI recommendation, which integrates social influence into PMF BIB002 . Denote U and L as the sets of users and POIs, respectively. PMFSR learns the latent features of users and POIs by minimizing the following objective function

\arg\min_{U, L} \frac{1}{2} \sum_{u_i \in U} \sum_{l_j \in L} I_{ij} \left( g(c_{ij}) - U_i^{T} L_j \right)^2 + \frac{\lambda}{2} \left( \|U\|_F^2 + \|L\|_F^2 \right) + \frac{\beta}{2} \sum_{u_i \in U} \sum_{u_f \in F_i} sim(i, f) \, \|U_i - U_f\|_F^2,  (7)

where U_i, U_f, and L_j are the latent features of user u_i, user u_f, and POI l_j respectively, I_{ij} is an indicator denoting whether user u_i has checked in at POI l_j, F_i is the set of user u_i's friends, sim(i, f) denotes the social weight between u_i and u_f, g(·) is the sigmoid function mapping the check-in frequency c_{ij} into the range [0, 1], and \lambda and \beta are regularization parameters. In this framework, the social influence is incorporated through social constraints that keep the latent features of friends close in the latent subspace. Due to its validity, Yang et al. BIB011 also employ the same framework in their sentiment-aware POI recommendation.

Fig. 8 The significance of social influence on POI recommendation BIB010

Although social influence improves traditional recommendation systems significantly BIB006 BIB003 BIB007 , social influence brings only limited improvements to POI recommendation BIB008 BIB010 BIB005 . Figure 8 shows the limited improvement achieved from social influence in BIB010 . Why this happens can be explained as follows. Users in LBSNs make friends online without any limitation; in contrast, the check-in activity requires physical interaction between users and POIs. Hence, friends in LBSNs may share common interests but may not visit common locations. For instance, friends from different cities who favour Italian food will visit their own local Italian restaurants. This phenomenon differs from online movie and music recommendation scenarios such as Netflix and Spotify.
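A minimal sketch of the FCF scoring rule in Eq. (6) is given below, assuming check-in counts are stored in a nested dictionary and a friend-similarity function has already been computed (e.g., cosine similarity over check-in vectors); the names checkins, friends, and top_n are illustrative, not taken from the cited paper.

def fcf_score(user, poi, friends, checkins, similarity, top_n=10):
    """Friend-based CF: similarity-weighted average of the check-in frequencies
    of the user's top-n most similar friends at the candidate POI (Eq. (6))."""
    ranked = sorted(friends[user], key=lambda f: similarity(user, f), reverse=True)[:top_n]
    num = sum(similarity(user, f) * checkins[f].get(poi, 0.0) for f in ranked)
    den = sum(similarity(user, f) for f in ranked)
    return num / den if den > 0 else 0.0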
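The socially regularized factorization of Eq. (7) can likewise be sketched with stochastic gradient descent. This is an illustrative implementation of the objective, not the exact training procedure of BIB008 ; the learning rate, regularization weights, and initialization scale are placeholder values.

import numpy as np

def train_pmfsr(checkins, friends, sim, n_users, n_pois, dim=20,
                lr=0.01, lam=0.1, beta=0.1, epochs=30):
    """SGD on Eq. (7): squared error on observed (user, poi, frequency) triples,
    Frobenius regularization, and a social term pulling friends' factors together."""
    g = lambda x: 1.0 / (1.0 + np.exp(-x))            # sigmoid squashing of frequency
    U = 0.1 * np.random.randn(n_users, dim)
    L = 0.1 * np.random.randn(n_pois, dim)
    for _ in range(epochs):
        for (i, j, c) in checkins:                    # observed check-ins only (I_ij = 1)
            err = g(c) - U[i].dot(L[j])
            social = sum(sim(i, f) * (U[i] - U[f]) for f in friends.get(i, []))
            U[i] += lr * (err * L[j] - lam * U[i] - beta * social)
            L[j] += lr * (err * U[i] - lam * L[j])
    return U, L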
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Recommender systems are an important component of many websites. Two of the most popular approaches are based on matrix factorization (MF) and Markov chains (MC). MF methods learn the general taste of a user by factorizing the matrix over observed user-item preferences. On the other hand, MC methods model sequential behavior by learning a transition graph over items that is used to predict the next action based on the recent actions of a user. In this paper, we present a method bringing both approaches together. Our method is based on personalized transition graphs over underlying Markov chains. That means for each user an own transition matrix is learned - thus in total the method uses a transition cube. As the observations for estimating the transitions are usually very limited, our method factorizes the transition cube with a pairwise interaction model which is a special case of the Tucker Decomposition. We show that our factorized personalized MC (FPMC) model subsumes both a common Markov chain and the normal matrix factorization model. For learning the model parameters, we introduce an adaption of the Bayesian Personalized Ranking (BPR) framework for sequential basket data. Empirically, we show that our FPMC model outperforms both the common matrix factorization and the unpersonalized MC model both learned with and without factorization. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Real-world relational data are seldom stationary, yet traditional collaborative filtering algorithms generally rely on this assumption. Motivated by our sales prediction problem, we propose a factor-based algorithm that is able to take time into account. 
By introducing additional factors for time, we formalize this problem as a tensor factorization with a special constraint on the time dimension. Further, we provide a fully Bayesian treatment to avoid tuning parameters and achieve automatic model complexity control. To learn the model we develop an efficient sampling procedure that is capable of analyzing large-scale data sets. This new algorithm, called Bayesian Probabilistic Tensor Factorization (BPTF), is evaluated on several real-world problems including sales prediction and movie recommendation. Empirical results demonstrate the superiority of our temporal model. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not effected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> The exponential growth of Web service makes building high-quality service-oriented applications an urgent and crucial research problem. User-side QoS evaluations of Web services are critical for selecting the optimal Web service from a set of functionally equivalent service candidates. Since QoS performance of Web services is highly related to the service status and network environments which are variable against time, service invocations are required at different instances during a long time interval for making accurate Web service QoS evaluation. However, invoking a huge number of Web services from user-side for quality evaluation purpose is time-consuming, resource-consuming, and sometimes even impractical (e.g., service invocations are charged by service providers). To address this critical challenge, this paper proposes a Web service QoS prediction framework, called WSPred, to provide time-aware personalized QoS value prediction service for different service users. WSPred requires no additional invocation of Web services. Based on the past Web service usage experience from different service users, WSPred builds feature models and employs these models to make personalized QoS prediction for different users. The extensive experimental results show the effectiveness and efficiency of WSPred. Moreover, we publicly release our real-world time-aware Web service QoS dataset for future research, which makes our experiments verifiable and reproducible. 
<s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, \eg visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. 
Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, pervious efforts mainly consider the "check-ins" in a whole and omit their temporal relation. They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Providing location recommendations becomes an important feature for location-based social networks (LBSNs), since it helps users explore new places and makes LBSNs more prevalent to users. In LBSNs, geographical influence and social influence have been intensively used in location recommendations based on the facts that geographical proximity of locations significantly affects users' check-in behaviors and social friends often have common interests. Although human movement exhibits sequential patterns, most current studies on location recommendations do not consider any sequential influence of locations on users' check-in behaviors. In this paper, we propose a new approach called LORE to exploit sequential influence on location recommendations. First, LORE incrementally mines sequential patterns from location sequences and represents the sequential patterns as a dynamic Location-Location Transition Graph (L2TG). LORE then predicts the probability of a user visiting a location by Additive Markov Chain (AMC) with L2TG. Finally, LORE fuses sequential influence with geographical influence and social influence into a unified recommendation framework; in particular the geographical influence is modeled as two-dimensional check-in probability distributions rather than one-dimensional distance probability distributions in existing works. We conduct a comprehensive performance evaluation for LORE using two large-scale real data sets collected from Foursquare and Gowalla. 
Experimental results show that LORE achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. <s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> In location-based social networks (LBSNs), time significantly affects users’ check-in behaviors, for example, people usually visit different places at different times of weekdays and weekends, e.g., restaurants at noon on weekdays and bars at midnight on weekends. Current studies use the temporal influence to recommend locations through dividing users’ check-in locations into time slots based on their check-in time and learning their preferences to locations in each time slot separately. Unfortunately, these studies generally suffer from two major limitations: (1) the loss of time information because of dividing a day into time slots and (2) the lack of temporal influence correlations due to modeling users’ preferences to locations for each time slot separately. In this paper, we propose a probabilistic framework called TICRec that utilizes temporal influence correlations (TIC) of both weekdays and weekends for time-aware location recommendations. TICRec not only recommends locations to users, but it also suggests when a user should visit a recommended location. In TICRec, we estimate a time probability density of a user visiting a new location without splitting the continuous time into discrete time slots to avoid the time information loss. To leverage the TIC, TICRec considers both user-based TIC (i.e., different users’ check-in behaviors to the same location at different times ) and location-based TIC (i.e., the same user's check-in behaviors to different locations at different times ). Finally, we conduct a comprehensive performance evaluation for TICRec using two real data sets collected from Foursquare and Gowalla. Experimental results show that TICRec achieves significantly superior location recommendations compared to other state-of-the-art recommendation techniques with temporal influence. 
<s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation model about 20% in [email protected] and [email protected] <s> BIB013 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> In this paper, we address the problem of personalized next Point-of-interest (POI) recommendation which has become an important and very challenging task in location-based social networks (LBSNs), but not well studied yet. With the conjecture that, under different contextual scenario, human exhibits distinct mobility patterns, we attempt here to jointly model the next POI recommendation under the influence of user's latent behavior pattern. We propose to adopt a third-rank tensor to model the successive check-in behaviors. By incorporating softmax function to fuse the personalized Markov chain with latent pattern, we furnish a Bayesian Personalized Ranking (BPR) approach and derive the optimization criterion accordingly. Expectation Maximization (EM) is then used to estimate the model parameters. Extensive experiments on two large-scale LBSNs datasets demonstrate the significant improvements of our model over several state-of-the-art methods. <s> BIB014 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Temporal Influence <s> Point-of-interest (POI) recommendation, which helps mobile users explore new places, has become an important location-based service. Existing approaches for POI recommendation have been mainly focused on exploiting the information about user preferences, social influence, and geographical influence. However, these approaches cannot handle the scenario where users are expecting to have POI recommendation for a specific time period. To this end, in this paper, we propose a unified recommender system, named the 'Where and When to gO' (WWO) recommender system, to integrate the user interests and their evolving sequential preferences with temporal interval assessment. 
As a result, the WWO system can make recommendations dynamically for a specific time period and the traditional POI recommender system can be treated as the special case of the WWO system by setting this time period long enough. Specifically, to quantify users' sequential preferences, we consider the distributions of the temporal intervals between dependent POIs in the historical check-in sequences. Then, to estimate the distributions with only sparse observations, we develop the low-rank graph construction model, which identifies a set of bi-weighted graph bases so as to learn the static user preferences and the dynamic sequential preferences in a coherent way. Finally, we evaluate the proposed approach using real-world data sets from several location-based social networks (LBSNs). The experimental results show that our method outperforms the state-of-the-art approaches for POI recommendation in terms of various metrics, such as F-measure and NDCG, with a significant margin. <s> BIB015
Temporal influence is of vital importance for POI recommendation because physical constraints on the check-in activity result in specific temporal patterns. Temporal influence in a POI recommendation system manifests in three aspects: periodicity, consecutiveness, and non-uniformness.

Users' check-in behaviors in LBSNs exhibit a periodic pattern. For instance, users tend to check in at restaurants at noon and have fun in nightclubs at night. Likewise, users visit places around the office on weekdays and spend time in shopping malls on weekends. Figure 9 shows the periodic pattern within a day and within a week, respectively. The check-in activity exhibits this kind of periodic pattern: visiting the same or similar POIs at the same time slot. This observation inspires the research exploiting the periodic pattern for POI recommendation BIB004 BIB007 BIB008 BIB012 .

Consecutiveness appears in check-in sequences, especially in successive check-ins, which are usually correlated. For instance, users may have fun in a nightclub after dinner in a restaurant. This frequent check-in pattern implies that the nightclub and the restaurant are geographically adjacent and correlated from the perspective of venue function. Data analysis on Foursquare and Gowalla in BIB013 explores the spatial and temporal properties of successive check-ins in Fig. 10, namely the complementary cumulative distribution function (CCDF) of intervals and distances between successive check-ins. It is observed that many successive check-ins are highly correlated: over 40% and 60% of successive check-ins happen within 4 hours of the last check-in in Foursquare and Gowalla, respectively; and about 90% of successive check-ins happen within 32 kilometers (half an hour's driving distance) in both Foursquare and Gowalla. Researchers exploit Markov chains to model the sequential pattern BIB009 BIB011 BIB014 BIB010 . The work in BIB009 BIB011 assumes that two POIs checked in within a short term are highly correlated and employs the factorized personalized Markov Chain (FPMC) model BIB002 to recommend successive POIs. Zhang et al. BIB010 propose an additive Markov model to learn the transition probability between two successive check-ins. Zhao et al. BIB013 exploit a latent factorization model to capture the consecutiveness, which is mathematically similar to the FPMC model.

Fig. 11 Demonstration of non-uniformness BIB006

The non-uniformness feature depicts the variance of a user's check-in preference at different hours of a day, different days of a week, or different months of a year BIB007 . As shown in Fig. 11, the study in BIB007 demonstrates an example of a random user's aggregated check-in activities on the user's top five most visited POIs. It is observed that a user's check-in preference changes at different hours of a day: the most frequently checked-in POI alters across hours. Similar temporal characteristics also appear across months of a year and days of a week. This non-uniformness feature can be explained by the user's daily life customs: 1) A user may check in at POIs around home in the morning hours, visit places around the office in the day hours, and have fun in bars in the night hours. 2) A user may visit more locations around home or the office on weekdays, while on weekends the user may check in more at shopping malls or vacation places. 3) In different months, a user may have different preferences for food and entertainment.
For instance, a user would visit ice cream shops in the summer months while visiting hot pot restaurants in the winter months. Although temporal features have been modeled to enhance other recommendation tasks, e.g., movie recommendation BIB001 BIB003 and web service recommendation BIB005 , the distinct temporal characteristics mentioned above make the previous temporal models unsatisfactory for POI recommendation. For example, the work in BIB001 mines temporal patterns of the Netflix data and incorporates the temporal influence into a matrix factorization model to capture user preference trends over a long time range. The studies in BIB003 BIB005 model the preference variance using a tensor factorization model. Since these previously proposed temporal models do not fit the POI recommendation scenario, a variety of systems are proposed to enhance POI recommendation performance BIB009 BIB007 BIB015 BIB008 BIB013 .
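As a concrete illustration of how periodicity and non-uniformness are typically exploited, the sketch below splits a day into hourly slots and scores a (user, POI, hour) query with a user-based collaborative filtering step restricted to that slot. It is a simplified sketch in the spirit of the time-aware approaches cited above (e.g., BIB008 ), not the method of any single paper; the slot granularity and data layout are assumptions.

from collections import defaultdict
import math

def build_slot_profiles(checkins, n_slots=24):
    """checkins: iterable of (user, poi, hour). Returns per-slot user -> poi -> count maps."""
    profiles = [defaultdict(lambda: defaultdict(float)) for _ in range(n_slots)]
    for user, poi, hour in checkins:
        profiles[hour % n_slots][user][poi] += 1.0
    return profiles

def cosine(a, b):
    """Cosine similarity between two sparse count vectors stored as dicts."""
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def time_aware_score(user, poi, hour, profiles):
    """Similarity-weighted check-in frequency of other users at this POI
    in the same time slot (a simplified time-aware user-based CF)."""
    slot = profiles[hour % len(profiles)]
    num = den = 0.0
    for other, vec in slot.items():
        if other == user:
            continue
        w = cosine(slot.get(user, {}), vec)
        num += w * vec.get(poi, 0.0)
        den += w
    return num / den if den else 0.0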
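For the consecutiveness aspect, a first-order transition model over successive check-ins can be estimated directly from check-in sequences. This is a simplified count-based sketch in the spirit of the Markov-chain approaches above, not the factorized FPMC or additive Markov models themselves; the 4-hour interval threshold is an assumption motivated by the CCDF analysis reported earlier.

from collections import defaultdict

def build_transition_probs(sequences, max_gap_hours=4.0):
    """sequences: per-user lists of (poi, timestamp_in_hours) sorted by time.
    Counts POI-to-POI transitions whose time gap is below the threshold and
    normalizes them into first-order transition probabilities."""
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for (p_prev, t_prev), (p_next, t_next) in zip(seq, seq[1:]):
            if t_next - t_prev <= max_gap_hours:
                counts[p_prev][p_next] += 1.0
    probs = {}
    for p_prev, nxt in counts.items():
        total = sum(nxt.values())
        probs[p_prev] = {p: c / total for p, c in nxt.items()}
    return probs

def next_poi_candidates(last_poi, probs, k=5):
    """Rank candidate next POIs by transition probability from the last check-in."""
    ranked = sorted(probs.get(last_poi, {}).items(), key=lambda x: x[1], reverse=True)
    return ranked[:k]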
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> Although online recommendation systems such as recommendation of movies or music have been systematically studied in the past decade, location recommendation in Location Based Social Networks (LBSNs) is not well investigated yet. In LBSNs, users can check in and leave tips commenting on a venue. These two heterogeneous data sources both describe users' preference of venues. However, in current research work, only users' check-in behavior is considered in users' location preference model, users' tips on venues are seldom investigated yet. Moreover, while existing work mainly considers social influence in recommendation, we argue that considering venue similarity can further improve the recommendation performance. In this research, we ameliorate location recommendation by enhancing not only the user location preference model but also recommendation algorithm. First, we propose a hybrid user location preference model by combining the preference extracted from check-ins and text-based tips which are processed using sentiment analysis techniques. Second, we develop a location based social matrix factorization algorithm that takes both user social influence and venue similarity influence into account in location recommendation. Using two datasets extracted from the location based social networks Foursquare, experiment results demonstrate that the proposed hybrid preference model can better characterize user preference by maintaining the preference consistency, and the proposed algorithm outperforms the state-of-the-art methods. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> In this paper, we address the problem of recommending Point-of-Interests (POIs) to users in a location-based social network. To the best of our knowledge, we are the first to propose the ST (Social Topic) model capturing both the social and topic aspects of user check-ins. We conduct experiments on real life data sets from Foursquare and Yelp. We evaluate the effectiveness of ST by evaluating the accuracy of top-k POI recommendation. The experimental results show that ST achieves better performance than the state-of-the-art models in the areas of social network-based recommender systems, and exploits the power of the location-based social network that has never been utilized before. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> Newly emerging location-based and event-based social network services provide us with a new platform to understand users' preferences based on their activity history. A user can only visit a limited number of venues/events and most of them are within a limited distance range, so the user-item matrix is very sparse, which creates a big challenge to the traditional collaborative filtering-based recommender systems. The problem becomes even more challenging when people travel to a new city where they have no activity information. In this article, we propose LCARS, a location-content-aware recommender system that offers a particular user a set of venues (e.g., restaurants and shopping malls) or events (e.g., concerts and exhibitions) by giving consideration to both personal interest and local preference. This recommender system can facilitate people's travel not only near the area in which they live, but also in a city that is new to them. 
Specifically, LCARS consists of two components: offline modeling and online recommendation. The offline modeling part, called LCA-LDA, is designed to learn the interest of each individual user and the local preference of each individual city by capturing item cooccurrence patterns and exploiting item contents. The online recommendation part takes a querying user along with a querying city as input, and automatically combines the learned interest of the querying user and the local preference of the querying city to produce the top-k recommendations. To speed up the online process, a scalable query processing technique is developed by extending both the Threshold Algorithm (TA) and TA-approximation algorithm. We evaluate the performance of our recommender system on two real datasets, that is, DoubanEvent and Foursquare, and one large-scale synthetic dataset. The results show the superiority of LCARS in recommending spatial items for users, especially when traveling to new cities, in terms of both effectiveness and efficiency. Besides, the experimental analysis results also demonstrate the excellent interpretability of LCARS. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> The rapid urban expansion has greatly extended the physical boundary of users' living area and developed a large number of POIs (points of interest). POI recommendation is a task that facilitates users' urban exploration and helps them filter uninteresting POIs for decision making. While existing work of POI recommendation on location-based social networks (LBSNs) discovers the spatial, temporal, and social patterns of user check-in behavior, the use of content information has not been systematically studied. The various types of content information available on LBSNs could be related to different aspects of a user's check-in action, providing a unique opportunity for POI recommendation. In this work, we study the content information on LB-SNs w.r.t. POI properties, user interests, and sentiment indications. We model the three types of information under a unified POI recommendation framework with the consideration of their relationship to check-in actions. The experimental results exhibit the significance of content information in explaining user behavior, and demonstrate its power to improve POI recommendation performance on LBSNs. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Content Indication <s> Location recommendation plays an essential role in helping people find places they are likely to enjoy. Though some recent research has studied how to recommend locations with the presence of social network and geographical information, few of them addressed the cold-start problem, specifically, recommending locations for new users. Because the visits to locations are often shared on social networks, rich semantics (e.g., tweets) that reveal a person's interests can be leveraged to tackle this challenge. A typical way is to feed them into traditional explicit-feedback content-aware recommendation methods (e.g., LibFM). As a user's negative preferences are not explicitly observable in most human mobility data, these methods need draw negative samples for better learning performance. However, prior studies have empirically shown that sampling-based methods don't perform as well as a method that considers all unvisited locations as negative but assigns them a lower confidence. 
To this end, we propose an Implicit-feedback based Content-aware Collaborative Filtering (ICCF) framework to incorporate semantic content and steer clear of negative sampling. For efficient parameter learning, we develop a scalable optimization algorithm, scaling linearly with the data size and the feature size. Furthermore, we offer a good explanation to ICCF, such that the semantic content is actually used to refine user similarity based on mobility. Finally, we evaluate ICCF with a large-scale LBSN dataset where users have profiles and text content. The results show that ICCF outperforms LibFM of the best configuration, and that user profiles and text content are not only effective at improving recommendation but also helpful for coping with the cold-start problem. <s> BIB005
In LBSNs, users generate content for POIs, including tips, ratings, and photos. Although such content does not accompany every check-in record, the available content, especially user comments, can be used to enhance POI recommendation BIB004 BIB002 BIB005 BIB001 BIB003 , because the shared tips provide extra information beyond the check-in behavior itself, e.g., the user's preference on a location. For instance, a check-in at an Italian restaurant does not necessarily mean the user likes this restaurant; the user may simply like Italian food but not this particular restaurant, or may even dislike its taste. Compared with the check-in activity, comments usually express explicit preferences and thus serve as complementary explanations for the check-in behavior. As a result, comments can be exploited to better understand users' check-in behavior and improve POI recommendation BIB004 BIB002 BIB001 . The research in BIB001 is the first and a representative work that exploits comments to strengthen POI recommendation. Yang et al. BIB001 propose a sentiment-enhanced location recommendation method, which utilizes user comments to adjust the check-in preference estimation. As shown in Fig. 12 , the raw tips in LBSNs are collected and analyzed using natural language processing techniques, including language detection, sentence splitting, POS identification, sentiment scoring with SentiWordNet, and noun phrase chunking. Each comment is then assigned a sentiment score, and a preference score of a user at a POI is derived from the estimated sentiment. Figure 12 also illustrates how an example comment is handled: it is transformed into several noun phrases such as "Reasonable price", "Good place", and "Long waiting time", given a sentiment score of 0.3, and mapped to a preference measure of 5. Moreover, by combining the preference measure from sentiment analysis and the check-in frequency, the model in BIB001 generates a modified rating $\hat{C}_{i,j}$ measuring the preference of user $u_i$ at POI $l_j$. Accordingly, the traditional matrix factorization method can be employed to recommend POIs through the following objective,
$$\arg\min_{U,L} \sum_{i,j} \left(\hat{C}_{i,j} - U_i L_j^{\top}\right)^2 + \alpha \|U\|_F^2 + \beta \|L\|_F^2,$$
where $U_i$ and $L_j$ are the latent features of user $u_i$ and POI $l_j$ respectively, $\hat{C}_{i,j}$ is the combined rating value, and $\alpha$ and $\beta$ are regularization parameters.
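To make this pipeline concrete, the following minimal sketch (Python/NumPy) blends a sentiment-derived preference score with the check-in frequency into a modified rating and factorizes it with a regularized squared loss. The helper names, the linear blending rule, and the toy data are illustrative assumptions rather than the exact formulation of BIB001 .

```python
import numpy as np

def combined_rating(checkin_freq, sentiment_pref, w=0.5):
    """Blend normalized check-in frequency with a sentiment-derived preference
    score into the modified rating C_hat (a simple linear mix is assumed here)."""
    return w * checkin_freq + (1.0 - w) * sentiment_pref

def factorize(C_hat, mask, d=16, alpha=0.01, beta=0.01, lr=0.02, epochs=100):
    """Regularized matrix factorization of C_hat by SGD over observed entries.
    alpha/beta regularize the user and POI latent factors, respectively."""
    m, n = C_hat.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, d))
    L = 0.1 * rng.standard_normal((n, d))
    users, pois = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(users, pois):
            err = C_hat[i, j] - U[i] @ L[j]
            Ui = U[i].copy()
            U[i] += lr * (err * L[j] - alpha * Ui)
            L[j] += lr * (err * Ui - beta * L[j])
    return U, L

# toy example: 3 users, 4 POIs
freq = np.array([[2, 0, 1, 0], [0, 3, 0, 1], [1, 0, 0, 2]], dtype=float)
sent = np.array([[4, 0, 2, 0], [0, 5, 0, 3], [2, 0, 0, 4]], dtype=float)  # sentiment-derived scores
C_hat = combined_rating(freq / freq.max(), sent / 5.0)
U, L = factorize(C_hat, mask=(freq > 0))
scores = U @ L.T   # estimated preference of every user for every POI
```

Ranking each user's row of `scores` over unvisited POIs then yields the recommendation list.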
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Many existing approaches to collaborative filtering can neither handle very large datasets nor easily deal with users who have very few ratings. In this paper we present the Probabilistic Matrix Factorization (PMF) model which scales linearly with the number of observations and, more importantly, performs well on the large, sparse, and very imbalanced Netflix dataset. We further extend the PMF model to include an adaptive prior on the model parameters and show how the model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings. When the predictions of multiple PMF models are linearly combined with the predictions of Restricted Boltzmann Machines models, we achieve an error rate of 0.8861, that is nearly 7% better than the score of Netflix's own system. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user. In this introductory chapter we briefly discuss basic RS ideas and concepts. Our main goal is to delineate, in a coherent and structured way, the chapters included in this handbook and to help the reader navigate the extremely rich and detailed content that the handbook offers. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Location sharing services (LSS) like Foursquare, Gowalla, and Facebook Places support hundreds of millions of user-driven footprints (i.e., "checkins"). 
Those global-scale footprints provide a unique opportunity to study the social and temporal characteristics of how people use these services and to model patterns of human mobility, which are significant factors for the design of future mobile+location-based services, traffic forecasting, urban planning, as well as epidemiological models of disease spread. In this paper, we investigate 22 million checkins across 220,000 users and report a quantitative assessment of human mobility patterns by analyzing the spatial, temporal, social, and textual aspects associated with these footprints. We find that: (i) LSS users follow the “Levy Flight” mobility pattern and adopt periodic behaviors; (ii) While geographic and economic constraints affect mobility patterns, so does individual social status; and (iii) Content and sentiment-based analysis of posts associated with checkins can provide a rich source of context for better understanding how users engage with these services. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Due to the prevalence of personalization and information filtering applications, modeling users' interests on the Web has become increasingly important during the past few years. In this paper, aiming at providing accurate personalized Web site recommendations for Web users, we propose a novel probabilistic factor model based on dimensionality reduction techniques. We also extend the proposed method to collective probabilistic factor modeling, which further improves model performance by incorporating heterogeneous data sources. The proposed method is general, and can be applied to not only Web site recommendations, but also a wide range of Web applications, including behavioral targeting, sponsored search, etc. The experimental analysis on Web site recommendation shows that our method outperforms other traditional recommendation approaches. Moreover, the complexity analysis indicates that our approach can be applied to very large datasets since it scales linearly with the number of observations. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. 
Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Fused Model <s> Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques. <s> BIB007
The fused model usually establishes a separate model for each influential factor and combines their recommendation results with suggestions from a collaborative filtering model BIB002 that captures user preference on POIs. Since social influence provides only limited improvements in POI recommendation and user comments are often missing from check-ins, geographical influence and temporal influence constitute the two most important factors for POI recommendation. Hence, a typical fused model BIB006 BIB003 BIB007 recommends POIs by combining traditional collaborative filtering methods with influential factors, especially geographical or temporal influence. In BIB004 , Cheng et al. employ probabilistic matrix factorization (PMF) BIB001 and the probabilistic factor model (PFM) BIB005 to learn user preference for recommending POIs. Suppose the number of users is $m$ and the number of POIs is $n$, and let $U_i$ and $L_j$ denote the latent features of user $u_i$ and POI $l_j$. The PMF-based method assumes a Gaussian distribution on the observed check-in data and Gaussian priors on the user latent feature matrix $U$ and the POI latent feature matrix $L$. The objective function to learn the model is then
$$\arg\min_{U,L} \frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{n} I_{ij}\left(g(c_{ij}) - g(U_i^{\top} L_j)\right)^2 + \frac{\lambda_U}{2}\|U\|_F^2 + \frac{\lambda_L}{2}\|L\|_F^2, \qquad (10)$$
where $g(x) = \frac{1}{1+e^{-x}}$ is the logistic function, $c_{ij}$ is the check-in frequency of user $u_i$ at POI $l_j$, $\lambda_U$ and $\lambda_L$ are regularization coefficients, and $I_{ij}$ is the indicator function recording the check-in state of $u_i$ at $l_j$: $I_{ij}$ equals one when the $i$-th user has checked in at the $j$-th POI, and zero otherwise. After learning the user and POI latent features, the preference score of $u_i$ over $l_j$ is measured by the score function
$$P(F_{ij}) = \sigma(U_i^{\top} L_j), \qquad (11)$$
where $\sigma$ is the sigmoid function. In addition, the geographical influence can be modeled through the MGM shown in Eq. (3) of Sect. 3.1. A fused model is then proposed to combine the user preference learned from Eq. (10) with the geographical influence modeled in Eq. (3). The proposed model determines the probability $P_{ul}$ of a user $u$ visiting a location $l$ as the product of the preference score estimation and the probability that the user will visit that place in terms of geographical influence,
$$P_{ul} = P(F_{ul}) \cdot P(l \mid C_u), \qquad (12)$$
where $P(l \mid C_u)$ is calculated via the MGM and $P(F_{ul})$ encodes the user's preference on the location.
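As a rough illustration of this fusion, the sketch below (Python/NumPy) multiplies a sigmoid preference score by a multi-center-Gaussian-style geographical probability, mirroring $P_{ul} = P(F_{ul}) \cdot P(l \mid C_u)$. The isotropic Gaussian form, the single check-in center, and the normalization are simplifying assumptions rather than the exact MGM of the original work.

```python
import numpy as np

def preference_score(U, L):
    """Sigmoid of the latent-factor interaction: one score per (user, POI) pair."""
    return 1.0 / (1.0 + np.exp(-(U @ L.T)))

def mgm_probability(poi_xy, centers_xy, center_weights, sigma=1.0):
    """Toy multi-center Gaussian model: probability of visiting each POI given a
    user's check-in centers (isotropic Gaussians, normalized over all POIs)."""
    # squared distance from every POI to every center: shape (n_pois, n_centers)
    d2 = ((poi_xy[:, None, :] - centers_xy[None, :, :]) ** 2).sum(-1)
    dens = (center_weights * np.exp(-d2 / (2 * sigma ** 2))).sum(axis=1)
    return dens / dens.sum()

# toy data: 2 users, 4 POIs located in a 2-D plane
rng = np.random.default_rng(0)
U, L = rng.normal(size=(2, 8)), rng.normal(size=(4, 8))
poi_xy = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.8]])

pref = preference_score(U, L)                       # P(F_ul)
geo_u0 = mgm_probability(poi_xy,                    # P(l | C_u) for user 0
                         centers_xy=np.array([[0.0, 0.1]]),
                         center_weights=np.array([1.0]))
fused_u0 = pref[0] * geo_u0                         # P_ul = P(F_ul) * P(l | C_u)
ranked = np.argsort(-fused_u0)                      # POI indices ranked for user 0
```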
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Representative Work for MF-based Joint Model <s> Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Representative Work for MF-based Joint Model <s> Point-of-Interest (POI) recommendation has become an important means to help people discover attractive locations. However, extreme sparsity of user-POI matrices creates a severe challenge. To cope with this challenge, viewing mobility records on location-based social networks (LBSNs) as implicit feedback for POI recommendation, we first propose to exploit weighted matrix factorization for this task since it usually serves collaborative filtering with implicit feedback better. Besides, researchers have recently discovered a spatial clustering phenomenon in human mobility behavior on the LBSNs, i.e., individual visiting locations tend to cluster together, and also demonstrated its effectiveness in POI recommendation, thus we incorporate it into the factorization model. Particularly, we augment users' and POIs' latent factors in the factorization model with activity area vectors of users and influence area vectors of POIs, respectively. Based on such an augmented model, we not only capture the spatial clustering phenomenon in terms of two-dimensional kernel density estimation, but we also explain why the introduction of such a phenomenon into matrix factorization helps to deal with the challenge from matrix sparsity. We then evaluate the proposed algorithm on a large-scale LBSN dataset. The results indicate that weighted matrix factorization is superior to other forms of factorization models and that incorporating the spatial clustering phenomenon into matrix factorization improves recommendation performance. <s> BIB002
In this section, we report two representative studies of the MF-based joint model, which incorporate the temporal effect and the geographical effect into a matrix factorization framework, respectively.
In BIB001 , Gao et al. propose a Location Recommendation framework with Temporal effects (LRT), which incorporates temporal influence into a matrix factorization model. The LRT model rests on two assumptions about the temporal effect: 1) non-uniformness, i.e., users' check-in preferences change at different hours of the day; 2) consecutiveness, i.e., users' check-in preferences are similar in consecutive time slots. To model the non-uniformness, LRT separates a day into $T$ slots and defines a time-dependent user latent feature matrix $U_t \in \mathbb{R}^{m \times d}$, where $m$ is the number of users, $d$ is the latent feature dimension, and $t \in [1, T]$ indexes the time slots. Suppose that $C_t \in \mathbb{R}^{m \times n}$ denotes the check-in frequency matrix at temporal state $t$, and $U$ and $L$ denote the latent feature matrices for users and POIs, respectively. Using non-negative matrix factorization to model the POI recommendation system, the time-dependent objective function is
$$\min_{U_t \ge 0,\, L \ge 0} \sum_{t=1}^{T} \left\| Y_t \odot \left( C_t - U_t L^{\top} \right) \right\|_F^2 + \alpha \sum_{t=1}^{T} \| U_t \|_F^2 + \beta \| L \|_F^2, \qquad (13)$$
where $Y_t$ is the corresponding indicator matrix, and $\alpha$ and $\beta$ are regularization parameters. Furthermore, the temporal consecutiveness motivates minimizing the following term,
$$\sum_{t=2}^{T} \sum_{i=1}^{m} \phi_i(t, t-1)\, \left\| U_t(i,:) - U_{t-1}(i,:) \right\|_2^2, \qquad (14)$$
where $\phi_i(t, t-1) \in [0, 1]$ is a temporal coefficient that measures the preference similarity of user $i$ between temporal states $t$ and $t-1$. The temporal coefficient can be calculated via the cosine similarity of the user's check-ins at states $t$ and $t-1$. Representing Eq. (14) in matrix form, we get
$$\sum_{t=2}^{T} \operatorname{Tr}\left( (U_t - U_{t-1})^{\top} \Sigma_t (U_t - U_{t-1}) \right), \qquad (15)$$
where $\Sigma_t \in \mathbb{R}^{m \times m}$ is the diagonal temporal coefficient matrix over the $m$ users. Combining the two minimization targets, the objective function of the LRT model is obtained as
$$\min_{U_t \ge 0,\, L \ge 0} \sum_{t=1}^{T} \left\| Y_t \odot \left( C_t - U_t L^{\top} \right) \right\|_F^2 + \lambda \sum_{t=2}^{T} \operatorname{Tr}\left( (U_t - U_{t-1})^{\top} \Sigma_t (U_t - U_{t-1}) \right) + \alpha \sum_{t=1}^{T} \| U_t \|_F^2 + \beta \| L \|_F^2, \qquad (16)$$
where $\lambda$ is a non-negative parameter controlling the temporal regularization. The user and location latent representations are learned by solving this optimization problem. Then the user check-in preference $\hat{C}_t(i,j)$ at each temporal state can be estimated by the product of the user and location latent features, $U_t(i,:) L(j,:)^{\top}$. Recommending POIs for a user amounts to finding the POIs with the highest values of $\hat{C}(i,j)$. To aggregate the contributions of the different temporal states, $\hat{C}(i,j)$ is estimated through
$$\hat{C}(i,j) = f\left( \hat{C}_1(i,j), \hat{C}_2(i,j), \ldots, \hat{C}_T(i,j) \right), \qquad (17)$$
where $f(\cdot)$ is an aggregation function, e.g., sum, mean, maximum, or a voting operation.
In BIB002 , Lian et al. propose the GeoMF model to incorporate geographical influence into a weighted regularized matrix factorization model (WRMF) [22, 43]. WRMF is a popular model for the one-class collaborative filtering problem, learning from implicit feedback for recommendation. GeoMF treats user check-ins as implicit feedback and uses a 0/1 rating matrix to represent them. Furthermore, GeoMF employs an augmented matrix to recover the rating matrix, as shown in Fig. 13 (Fig. 13: Demonstration of the GeoMF model BIB002 ). Each entry in the rating matrix combines two interactions: the interaction between the user feature and the POI feature, and the interaction between the user's activity area representation and the POI's influence area representation. Suppose there are $m$ users and $n$ POIs; the latent feature dimension is $d$ for the user and POI representations, and $l$ for the activity area and influence area representations. Then the estimated rating matrix can be formulated as
$$\hat{R} = P Q^{\top} + X Y^{\top}, \qquad (18)$$
where $\hat{R} \in \mathbb{R}^{m \times n}$ is the estimated matrix, and $P \in \mathbb{R}^{m \times d}$ and $Q \in \mathbb{R}^{n \times d}$ are the user and POI latent matrices, respectively.
In addition, $X \in \mathbb{R}^{m \times l}$ and $Y \in \mathbb{R}^{n \times l}$ are the user activity area representation matrix and the POI influence area representation matrix, respectively. Define $W$ as the weight matrix over the binary rating matrix, whose entry $w_{ui}$ is set as
$$w_{ui} = \begin{cases} 1 + \alpha(c_{ui}), & \text{if } c_{ui} > 0, \\ 1, & \text{otherwise}, \end{cases} \qquad (19)$$
where $c_{ui}$ is user $u$'s check-in frequency at POI $l_i$ and $\alpha(c_{ui}) > 0$ is a monotonically increasing function of $c_{ui}$. Following the scheme of the WRMF model, the objective function of GeoMF is formulated as
$$\arg\min_{P,\, Q,\, X \ge 0} \sum_{u=1}^{m} \sum_{i=1}^{n} w_{ui} \left( r_{ui} - P_u Q_i^{\top} - X_u Y_i^{\top} \right)^2 + \gamma \left( \|P\|_F^2 + \|Q\|_F^2 \right) + \lambda \|X\|_1, \qquad (20)$$
where $r_{ui}$ is the $(u,i)$ entry of the 0/1 rating matrix $R$, $P_u$, $Q_i$, $X_u$, and $Y_i$ denote the corresponding rows of $P$, $Q$, $X$, and $Y$, $Y$ is the POIs' influence area matrix generated from a Gaussian kernel function, $P$, $Q$, and $X$ are the parameters to learn, and $\gamma$ and $\lambda$ are regularization parameters. After learning the latent features from Eq. (20), the proposed model estimates the check-in possibility according to Eq. (18), and then recommends the POIs with the highest estimated values to each user.
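For concreteness, the snippet below (Python/NumPy) performs projected gradient steps on a GeoMF-style weighted objective with an augmented geographical term. Treating the influence matrix Y as fixed, using plain gradient descent, and the toy data are simplifying assumptions; they do not reproduce the optimization procedure of BIB002 .

```python
import numpy as np

def geomf_like_step(R, W, P, Q, X, Y, gamma=0.1, lam=0.1, lr=0.005):
    """One gradient step on sum_ui w_ui (r_ui - P_u.Q_i - X_u.Y_i)^2
    + gamma(||P||^2 + ||Q||^2) + lam*||X||_1, with X kept non-negative.
    Y (POI influence areas) is treated as fixed."""
    E = W * (R - P @ Q.T - X @ Y.T)          # weighted residual matrix
    P -= lr * (-2 * E @ Q + 2 * gamma * P)
    Q -= lr * (-2 * E.T @ P + 2 * gamma * Q)
    X -= lr * (-2 * E @ Y + lam * np.sign(X))
    np.maximum(X, 0.0, out=X)                # project X back onto the non-negative orthant
    return P, Q, X

# toy setup: 5 users, 6 POIs, d=3 latent dims, l=4 geographical cells
rng = np.random.default_rng(1)
R = (rng.random((5, 6)) < 0.3).astype(float)     # 0/1 implicit-feedback rating matrix
W = 1.0 + 10.0 * R                               # heavier weight on observed check-ins
P, Q = rng.random((5, 3)), rng.random((6, 3))
X = rng.random((5, 4))                           # users' activity areas
Y = rng.random((6, 4))                           # POIs' influence areas (e.g., Gaussian kernel)

for _ in range(200):
    P, Q, X = geomf_like_step(R, W, P, Q, X, Y)
R_hat = P @ Q.T + X @ Y.T                        # Eq. (18): estimated rating matrix
```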
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations. 
<s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> Personalized point-of-interest (POI) recommendation is a significant task in location-based social networks (LBSNs) as it can help provide better user experience as well as enable third-party services, e.g., launching advertisements. To provide a good recommendation, various research has been conducted in the literature. However, pervious efforts mainly consider the "check-ins" in a whole and omit their temporal relation. They can only recommend POI globally and cannot know where a user would like to go tomorrow or in the next few days. In this paper, we consider the task of successive personalized POI recommendation in LBSNs, which is a much harder task than standard personalized POI recommendation or prediction. To solve this task, we observe two prominent properties in the check-in sequence: personalized Markov chain and region localization. Hence, we propose a novel matrix factorization method, namely FPMC-LR, to embed the personalized Markov chains and the localized regions. Our proposed FPMC-LR not only exploits the personalized Markov chain in the check-in sequence, but also takes into account users' movement constraint, i.e., moving around a localized region. More importantly, utilizing the information of localized regions, we not only reduce the computation cost largely, but also discard the noisy information to boost recommendation. Results on two real-world LBSNs datasets demonstrate the merits of our proposed FPMC-LR. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task which tries to regard the POI a user currently visits as his POI-related query and recommend new POIs the user has not visited before. While carefully designed methods are proposed to solve this problem, they ignore the essence of the task which involves retrieval and recommendation problem simultaneously and fail to employ the social relations or temporal information adequately to improve the results. 
In order to solve this problem, we propose a new model called location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model which leverages weighted approximately ranked pairwise (WARP) loss for achieving better top-n ranking results, just as the new successive POI recommendation task needs. We conducted some comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> In this paper, we address the problem of personalized next Point-of-interest (POI) recommendation which has become an important and very challenging task in location-based social networks (LBSNs), but not well studied yet. With the conjecture that, under different contextual scenario, human exhibits distinct mobility patterns, we attempt here to jointly model the next POI recommendation under the influence of user's latent behavior pattern. We propose to adopt a third-rank tensor to model the successive check-in behaviors. By incorporating softmax function to fuse the personalized Markov chain with latent pattern, we furnish a Bayesian Personalized Ranking (BPR) approach and derive the optimization criterion accordingly. Expectation Maximization (EM) is then used to estimate the model parameters. Extensive experiments on two large-scale LBSNs datasets demonstrate the significant improvements of our model over several state-of-the-art methods. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Taxonomy by Task <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation model about 20% in [email protected] and [email protected] <s> BIB007
In terms of whether the recommendation is biased toward the most recent check-in, we categorize the POI recommendation task into general POI recommendation and successive POI recommendation. General POI recommendation in LBSNs is first proposed in BIB001 ; it recommends the top-N POIs for each user, similar to the movie recommendation task in the Netflix competition. Subsequent studies observe that two successive check-ins are correlated with high probability, as shown in Fig. 10 . Bao et al. BIB002 employ the information of the recent check-in to recommend POIs in an online scenario. Moreover, Cheng et al. BIB003 propose successive POI recommendation, which provides recommendations sensitive to the user's most recent check-in. That is, successive POI recommendation does not recommend a general list of POIs but a list tailored to the user's recent check-in. Because successive POI recommendation takes advantage of the recent check-in information, it strikingly improves system performance on the recall metric. Hence, several studies BIB004 BIB006 BIB005 BIB007 have been proposed for this specific task.
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. 
We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. 
Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> The problem of point of interest (POI) recommendation is to provide personalized recommendations of places of interests, such as restaurants, for mobile users. Due to its complexity and its connection to location based social networks (LBSNs), the decision process of a user choose a POI is complex and can be influenced by various factors, such as user preferences, geographical influences, and user mobility behaviors. While there are some studies on POI recommendations, it lacks of integrated analysis of the joint effect of multiple factors. To this end, in this paper, we propose a novel geographical probabilistic factor analysis framework which strategically takes various factors into consideration. Specifically, this framework allows to capture the geographical influences on a user's check-in behavior. Also, the user mobility behaviors can be effectively exploited in the recommendation model. Moreover, the recommendation model can effectively make use of user check-in count data as implicity user feedback for modeling user preferences. Finally, experimental results on real-world LBSNs data show that the proposed recommendation method outperforms state-of-the-art latent factor models with a significant margin. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy. 
<s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> General POI Recommendation <s> Given the abundance of online information available to mobile users, particularly tourists and weekend travelers, recommender systems that effectively filter this information and suggest interesting participatory opportunities will become increasingly important. Previous work has explored recommending interesting locations; however, users would also benefit from recommendations for activities in which to participate at those locations along with suitable times and days. Thus, systems that provide collaborative recommendations involving multiple dimensions such as location, activities and time would enhance the overall experience of users.The relationship among these dimensions can be modeled by higher-order matrices called tensors which are then solved by tensor factorization. However, these tensors can be extremely sparse. In this paper, we present a system and an approach for performing multi-dimensional collaborative recommendations for Who (User), What (Activity), When (Time) and Where (Location), using tensor factorization on sparse user-generated data. We formulate an objective function which simultaneously factorizes coupled tensors and matrices constructed from heterogeneous data sources. We evaluate our system and approach on large-scale real world data sets consisting of 588,000 Flickr photos collected from three major metro regions in USA. We compare our approach with several state-of-the-art baselines and demonstrate that it outperforms all of them. <s> BIB008
The general POI recommendation task recommends the top-N POIs for each user, similar to the movie recommendation task in the Netflix competition. Researchers have proposed a variety of models that incorporate different influential factors, e.g., geographical influence and temporal influence, to fulfill this task BIB005 BIB007 BIB006 BIB003 . In the following, we report a recent representative model for this task. In BIB007 , Li et al. propose a ranking-based geographical factorization method, called Rank-GeoFM, which employs the WARP loss to learn the recommended POI list. The check-in probability is assumed to be affected by two aspects, user preference and geographical influence, which are modeled by the interaction between the user and the target POI and the interaction between the user and the neighboring POIs of the target POI, respectively. Further, a weight utility function is introduced to measure different neighbors' contributions to the geographical influence. For a neighbor $l'$ of the target POI $l$, the weight is set as $w_{l,l'} = (0.5 + d(l, l'))^{-1}$, where $d(l, l')$ denotes the distance between POIs $l$ and $l'$. In practice, $w_{l,l'}$ may be normalized by dividing it by the sum over all neighbors. Given user $u$ and POI $l$, let $u_u^{(1)}$ and $u_u^{(2)}$ denote the user latent features for user preference and geographical influence, respectively, and let $l_l$ denote the POI latent feature. The recommendation score $y_{ul}$ is then formulated as
$$y_{ul} = u_u^{(1)} \cdot l_l + \sum_{l^* \in \mathcal{N}_k(l)} w_{l,l^*}\, u_u^{(2)} \cdot l_{l^*}, \qquad (21)$$
where the operator $(\cdot)$ denotes the inner product and $\mathcal{N}_k(l)$ denotes the $k$-nearest neighbors of POI $l$. After defining the recommendation score function, Rank-GeoFM employs the WARP loss to learn the model. A user's preference ranking is summarized as follows: the higher the check-in frequency, the more the POI is preferred by the user. In other words, for user $u$, POI $l$ should be ranked higher than $l'$ if $f_{ul} > f_{ul'}$, where $f_{ul}$ denotes the check-in frequency of user $u$ at POI $l$. Given a user $u$ and a checked-in POI $l$, modeling the rank order is equivalent to minimizing the following ranking incompatibility,
$$\mathrm{rank}_{ul} = \sum_{l' \in \mathcal{L}} \mathbb{I}\left( f_{ul} > f_{ul'} \right) \mathbb{I}\left( y_{ul'} + \epsilon > y_{ul} \right), \qquad (22)$$
where $\mathcal{U}$ and $\mathcal{L}$ denote the user set and the POI set respectively, $\epsilon$ is the error tolerance hyperparameter, and $\mathbb{I}(\cdot)$ denotes the indicator function. By modeling the incompatibility for all check-ins in the set $\mathcal{D}$, we obtain the objective function of Rank-GeoFM,
$$\arg\min_{\Theta} \sum_{(u,l) \in \mathcal{D}} E\left( \mathrm{rank}_{ul} \right), \qquad (23)$$
where $E(\cdot)$ is a function that converts the ranking incompatibility into a loss value, e.g., $E(r) = \sum_{i=1}^{r} \frac{1}{i}$. Denote by $\mathcal{L}_u^C$ the candidate POIs that user $u$ has not visited in the POI set $\mathcal{L}$. After learning the model by optimizing Eq. (23), the check-in possibility of user $u$ for a candidate POI $l \in \mathcal{L}_u^C$ can be estimated by Eq. (21). Then the POI recommendation task is accomplished by ranking the candidate POIs and selecting the top-N POIs with the highest estimated possibility values for each user.
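To illustrate the scoring scheme of Eq. (21), the snippet below (Python/NumPy) computes the recommendation score with distance-decayed neighbor weights. The array shapes and toy data are assumptions for illustration, and the WARP-style learning loop of BIB007 is omitted.

```python
import numpy as np

def neighbor_weights(dists):
    """Distance-decayed weights w_{l,l'} = 1 / (0.5 + d), normalized to sum to 1."""
    w = 1.0 / (0.5 + dists)
    return w / w.sum()

def score(u1, u2, L_factors, l, neigh_ids, neigh_dists):
    """Recommendation score y_ul = u1 . l_l + sum_k w_k * (u2 . l_k).
    u1, u2      : user factors for preference and geographical influence
    L_factors   : (n_pois, d) POI latent factors
    l           : index of the target POI
    neigh_ids   : indices of the k nearest POIs of l
    neigh_dists : their distances to l
    """
    w = neighbor_weights(neigh_dists)
    geo = sum(wk * (u2 @ L_factors[k]) for wk, k in zip(w, neigh_ids))
    return u1 @ L_factors[l] + geo

# toy example: 6 POIs with 4-dimensional factors
rng = np.random.default_rng(0)
L_factors = rng.normal(size=(6, 4))
u1, u2 = rng.normal(size=4), rng.normal(size=4)
y = score(u1, u2, L_factors, l=2, neigh_ids=[0, 3], neigh_dists=np.array([0.4, 1.1]))
```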
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Successive POI Recommendation <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Successive POI Recommendation <s> The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Successive POI Recommendation <s> In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task which tries to regard the POI a user currently visits as his POI-related query and recommend new POIs the user has not visited before. 
While carefully designed methods are proposed to solve this problem, they ignore the essence of the task which involves retrieval and recommendation problem simultaneously and fail to employ the social relations or temporal information adequately to improve the results. In order to solve this problem, we propose a new model called location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model which leverages weighted approximately ranked pairwise (WARP) loss for achieving better top-n ranking results, just as the new successive POI recommendation task needs. We conducted some comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Successive POI Recommendation <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation model about 20% in [email protected] and [email protected] <s> BIB004
Successive POI recommendation, as a natural extension of general POI recommendation, has recently been proposed and has attracted great research interest BIB001 BIB002 BIB003 BIB004 . Different from general POI recommendation, which focuses only on estimating users' preferences on POIs, successive POI recommendation provides satisfactory recommendations promptly based on a user's most recent check-in location; this requires not only modeling user preferences but also accurately analyzing the correlations between POIs. In the following, we report a recent representative model for this task. In BIB004 , Zhao et al. propose the STELLAR system, which aims to provide time-aware successive POI recommendations. The system ranks the POIs via a score function f : U × L × T × L → R, which maps each four-tuple to a real value. Here, U, L, and T denote the set of users, the set of POIs, and the set of time ids, respectively. The score f(u, l_q, t, l_c), which represents the "successive check-in possibility", is defined for user u and a candidate POI l_c at time stamp t, given the user's last check-in l_q as the query POI.
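To make the structure of such a score function concrete, the following minimal sketch implements one plausible additive pairwise-interaction scorer in the spirit of STELLAR's user-POI, POI-time, and POI-POI terms. The latent dimensionality, the use of separate embedding tables per interaction, and all sizes are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_pois, n_times, d = 1000, 5000, 24, 32   # illustrative sizes (assumption)

# Separate latent embeddings for each pairwise interaction, loosely following
# STELLAR's user-POI, POI-time and POI-POI terms (an assumption, not the exact model).
U = rng.normal(scale=0.1, size=(n_users, d))       # user factors
P_user = rng.normal(scale=0.1, size=(n_pois, d))   # candidate-POI factors vs. users
P_time = rng.normal(scale=0.1, size=(n_pois, d))   # candidate-POI factors vs. time
P_query = rng.normal(scale=0.1, size=(n_pois, d))  # query-POI factors
P_succ = rng.normal(scale=0.1, size=(n_pois, d))   # candidate-POI factors vs. query POI
T = rng.normal(scale=0.1, size=(n_times, d))       # time factors

def score(u, l_q, t, l_c):
    """Successive check-in score f(u, l_q, t, l_c) as a sum of pairwise interactions."""
    return (U[u] @ P_user[l_c]             # user preference for the candidate POI
            + T[t] @ P_time[l_c]           # temporal suitability of the candidate POI
            + P_query[l_q] @ P_succ[l_c])  # transition affinity from query to candidate

# Rank all candidate POIs for user 42 whose last check-in was POI 7 at time id 18.
scores = U[42] @ P_user.T + T[18] @ P_time.T + P_query[7] @ P_succ.T
top5 = np.argsort(-scores)[:5]
print(top5)
```

In a ranking-based framework such as STELLAR, these scores are then trained with a pairwise criterion so that the POI actually visited next is scored above non-visited candidates.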
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> Even though human movement and mobility patterns have a high degree of freedom and variation, they also exhibit structural patterns due to geographic and social constraints. Using cell phone location data, as well as data from two online location-based social networks, we aim to understand what basic laws govern human motion and dynamics. We find that humans experience a combination of periodic movement that is geographically limited and seemingly random jumps correlated with their social networks. Short-ranged travel is periodic both spatially and temporally and not effected by the social network structure, while long-distance travel is more influenced by social network ties. We show that social relationships can explain about 10% to 30% of all human movement, while periodic behavior explains 50% to 70%. Based on our findings, we develop a model of human mobility that combines periodic short range movements with travel due to the social network structure. We show that our model reliably predicts the locations and dynamics of future human movement and gives an order of magnitude better performance than present models of human mobility. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> Location-based social networks (LBSNs) have become a popular form of social media in recent years. They provide location related services that allow users to “check-in” at geographical locations and share such experiences with their friends. 
Millions of “check-in” records in LBSNs contain rich information of social and geographical context and provide a unique opportunity for researchers to study user’s social behavior from a spatial-temporal aspect, which in turn enables a variety of services including place advertisement, traffic forecasting, and disaster relief. In this paper, we propose a social-historical model to explore user’s check-in behavior on LBSNs. Our model integrates the social and historical effects and assesses the role of social correlation in user’s check-in behavior. In particular, our model captures the property of user’s check-in history in forms of power-law distribution and short-term effect, and helps in explaining user’s check-in behavior. The experimental results on a real world LBSN demonstrate that our approach properly models user’s checkins and shows how social and historical ties can help location prediction. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> Location-based social networks (LBSNs) have attracted an increasing number of users in recent years. The availability of geographical and social information of online LBSNs provides an unprecedented opportunity to study the human movement from their socio-spatial behavior, enabling a variety of location-based services. Previous work on LBSNs reported limited improvements from using the social network information for location prediction; as users can check-in at new places, traditional work on location prediction that relies on mining a user's historical trajectories is not designed for this "cold start" problem of predicting new check-ins. In this paper, we propose to utilize the social network information for solving the "cold start" location prediction problem, with a geo-social correlation model to capture social correlations on LBSNs considering social networks and geographical distance. The experimental results on a real-world LBSN demonstrate that our approach properly models the social correlations of a user's new check-ins by considering various correlation strengths and correlation measures. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Data Sources <s> The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. 
The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations. <s> BIB005
Gowalla, Brightkite, and Foursquare are well-known benchmark datasets for evaluating POI recommendation models. In this subsection, we briefly introduce these datasets and summarize their statistics in Table 2.

Table 2. Statistics of the benchmark check-in datasets
Dataset        Source    Statistics
Brightkite     BIB001    4,491,143 check-ins from 58,228 users
Gowalla 1      BIB001    6,442,890 check-ins from 196,591 users
Gowalla 2      BIB002    4,128,714 check-ins from 53,944 users
Foursquare 1   BIB003    2,073,740 check-ins from 18,107 users
Foursquare 2   BIB004    1,385,223 check-ins from 11,326 users
Foursquare 3   BIB005    325,606 check-ins from 80,606 users
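Such per-dataset statistics can be recomputed directly from the raw check-in dumps. The snippet below is a minimal sketch assuming a SNAP-style tab-separated layout (user id, check-in time, latitude, longitude, location id); the file name and column order are assumptions and should be adapted to the actual release being used.

```python
import pandas as pd

# Assumed SNAP-style layout: user id, check-in time, latitude, longitude, location id.
cols = ["user", "time", "lat", "lon", "poi"]
checkins = pd.read_csv("loc-gowalla_totalCheckins.txt", sep="\t", names=cols)

n_checkins = len(checkins)              # e.g. 6,442,890 for Gowalla 1
n_users = checkins["user"].nunique()    # e.g. 196,591
n_pois = checkins["poi"].nunique()
print(f"{n_checkins} check-ins from {n_users} users over {n_pois} POIs")
```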
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Metrics <s> We address the problems of 1/ assessing the confidence of the standard point estimates, precision, recall and F-score, and 2/ comparing the results, in terms of precision, recall and F-score, obtained using two different methods. To do so, we use a probabilistic setting which allows us to obtain posterior distributions on these performance indicators, rather than point estimates. This framework is applied to the case where different methods are run on different datasets from the same source, as well as the standard situation where competing results are obtained on the same data. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Metrics <s> Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a curve dominates in ROC space if and only if it dominates in PR space. A corollary is the notion of an achievable PR curve, which has properties much like the convex hull in ROC space; we show an efficient algorithm for computing this curve. Finally, we also note differences in the two types of curves are significant for algorithm design. For example, in PR space it is incorrect to linearly interpolate between points. Furthermore, algorithms that optimize the area under the ROC curve are not guaranteed to optimize the area under the PR curve. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Metrics <s> In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks, by exploiting the social and geographical characteristics of users and locations/places. Through our analysis on a dataset collected from Foursquare, a popular location-based social networking system, we observe that there exists strong social and geospatial ties among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques holds comparable recommendation effectiveness against the state-of-the-art recommendation algorithms, while incurring significantly lower computational overhead. Meanwhile, the GM-FCF provides additional flexibility in tradeoff between recommendation effectiveness and computational overhead. <s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Metrics <s> The problem of point of interest (POI) recommendation is to provide personalized recommendations of places of interests, such as restaurants, for mobile users. Due to its complexity and its connection to location based social networks (LBSNs), the decision process of a user choose a POI is complex and can be influenced by various factors, such as user preferences, geographical influences, and user mobility behaviors. 
While there are some studies on POI recommendation, they lack an integrated analysis of the joint effect of multiple factors. To this end, in this paper, we propose a novel geographical probabilistic factor analysis framework which strategically takes various factors into consideration. Specifically, this framework captures the geographical influences on a user's check-in behavior. Also, user mobility behaviors can be effectively exploited in the recommendation model. Moreover, the recommendation model can effectively make use of user check-in count data as implicit user feedback for modeling user preferences. Finally, experimental results on real-world LBSN data show that the proposed recommendation method outperforms state-of-the-art latent factor models by a significant margin. <s> BIB004
Most POI recommendation systems adopt precision and recall, two standard metrics for evaluating model performance in information retrieval BIB002 BIB001 . To reflect the balance between precision and recall, the F-score is also reported in some work. Since the absolute precision and recall values are low for POI recommendation, some studies BIB004 BIB003 further introduce relative metrics, which measure a model's performance against random selection. Precision and recall in a top-N recommendation system are denoted as P@N and R@N, respectively. P@N measures the ratio of recovered POIs to the N recommended POIs, and R@N measures the ratio of recovered POIs to the set of POIs in the test data. For each user u ∈ U, let L^T_u denote the set of POIs the user visited in the test data and L^R_u the set of recommended POIs. Then P@N and R@N are formulated as

P@N = |L^R_u ∩ L^T_u| / N,  R@N = |L^R_u ∩ L^T_u| / |L^T_u|,

both averaged over all users. Further, the F-score is the harmonic mean of precision and recall, defined as

F@N = 2 · P@N · R@N / (P@N + R@N).

In order to better compare the results, relative metrics are introduced; relative precision@N and recall@N are denoted as r-P@N and r-R@N, respectively. Let L^C_u denote the candidate POIs of user u, namely the POIs the user has not checked in at. A random recommender then attains an expected precision of |L^T_u| / |L^C_u| and an expected recall of N / |L^C_u|, respectively. The relative precision@N and recall@N are therefore defined as

r-P@N = P@N / (|L^T_u| / |L^C_u|),  r-R@N = R@N / (N / |L^C_u|).
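As a concrete reference, the following minimal sketch computes these metrics for a set of users. The container names (all_recommended, all_test, all_candidates) are hypothetical stand-ins for the evaluation data, and the per-user averaging follows the definitions above; non-empty test and candidate sets are assumed.

```python
import numpy as np

def precision_recall_at_n(recommended, test_pois, n):
    """P@N and R@N for a single user.

    recommended: ranked list of POI ids (only the top-N slice is evaluated)
    test_pois:   set of POIs the user actually visited in the test period
    """
    hits = len(set(recommended[:n]) & test_pois)
    precision = hits / n
    recall = hits / len(test_pois) if test_pois else 0.0
    return precision, recall

def evaluate(all_recommended, all_test, all_candidates, n=5):
    """Average P@N, R@N, F@N and the relative metrics over all users."""
    p_list, r_list, rp_list, rr_list = [], [], [], []
    for user, rec in all_recommended.items():
        test = all_test[user]
        cand = all_candidates[user]        # POIs the user has not checked in at
        p, r = precision_recall_at_n(rec, test, n)
        p_list.append(p)
        r_list.append(r)
        rp_list.append(p / (len(test) / len(cand)))   # r-P@N
        rr_list.append(r / (n / len(cand)))           # r-R@N
    p, r = np.mean(p_list), np.mean(r_list)
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return {"P@N": p, "R@N": r, "F@N": f,
            "r-P@N": np.mean(rp_list), "r-R@N": np.mean(rr_list)}
```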
A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach. <s> BIB001 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> In ranking with the pairwise classification approach, the loss associated to a predicted ranked list is the mean of the pairwise classification losses. This loss is inadequate for tasks like information retrieval where we prefer ranked lists with high precision on the top of the list. We propose to optimize a larger class of loss functions for ranking, based on an ordered weighted average (OWA) (Yager, 1988) of the classification losses. Convex OWA aggregation operators range from the max to the mean depending on their weights, and can be used to focus on the top ranked elements as they give more weight to the largest losses. When aggregating hinge losses, the optimization problem is similar to the SVM for interdependent output spaces. Moreover, we show that OWA aggregates of margin-based classification losses have good generalization properties. Experiments on the Letor 3.0 benchmark dataset for information retrieval validate our approach. <s> BIB002 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced "sibling" precision metric, where our method also obtains excellent results. 
<s> BIB003 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> This tutorial is concerned with a comprehensive introduction to the research area of learning to rank for information retrieval. In the first part of the tutorial, we will introduce three major approaches to learning to rank, i.e., the pointwise, pairwise, and listwise approaches, analyze the relationship between the loss functions used in these approaches and the widely-used IR evaluation measures, evaluate the performance of these approaches on the LETOR benchmark datasets, and demonstrate how to use these approaches to solve real ranking applications. In the second part of the tutorial, we will discuss some advanced topics regarding learning to rank, such as relational ranking, diverse ranking, semi-supervised ranking, transfer ranking, query-dependent ranking, and training data preprocessing. In the third part, we will briefly mention the recent advances on statistical learning theory for ranking, which explain the generalization ability and statistical consistency of different ranking methods. In the last part, we will conclude the tutorial and show several future research directions. <s> BIB004 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Recently, location-based social networks (LBSNs), such as Gowalla, Foursquare, Facebook, and Brightkite, etc., have attracted millions of users to share their social friendship and their locations via check-ins. The available check-in information makes it possible to mine users' preference on locations and to provide favorite recommendations. Personalized Point-of-interest (POI) recommendation is a significant task in LBSNs since it can help targeted users explore their surroundings as well as help third-party developers to provide personalized services. To solve this task, matrix factorization is a promising tool due to its success in recommender systems. However, previously proposed matrix factorization (MF) methods do not explore geographical influence, e.g., multi-center check-in property, which yields suboptimal solutions for the recommendation. In this paper, to the best of our knowledge, we are the first to fuse MF with geographical and social influence for POI recommendation in LBSNs. We first capture the geographical influence via modeling the probability of a user's check-in on a location as a Multicenter Gaussian Model (MGM). Next, we include social information and fuse the geographical influence into a generalized matrix factorization framework. Our solution to POI recommendation is efficient and scales linearly with the number of observations. Finally, we conduct thorough experiments on a large-scale real-world LBSNs dataset and demonstrate that the fused matrix factorization framework with MGM utilizes the distance information sufficiently and outperforms other state-of-the-art methods significantly. <s> BIB005 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). 
Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion. <s> BIB006 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Retrieval tasks typically require a ranking of items given a query. Collaborative filtering tasks, on the other hand, learn to model user's preferences over items. In this paper we study the joint problem of recommending items to a user with respect to a given query, which is a surprisingly common task. This setup differs from the standard collaborative filtering one in that we are given a query x user x item tensor for training instead of the more traditional user x item matrix. Compared to document retrieval we do have a query, but we may or may not have content features (we will consider both cases) and we can also take account of the user's profile. We introduce a factorized model for this new task that optimizes the top-ranked items returned for the given query and user. We report empirical results where it outperforms several baselines. <s> BIB007 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Location-based social networks (LBSNs) have attracted an inordinate number of users and greatly enriched the urban experience in recent years. The availability of spatial, temporal and social information in online LBSNs offers an unprecedented opportunity to study various aspects of human behavior, and enable a variety of location-based services such as location recommendation. Previous work studied spatial and social influences on location recommendation in LBSNs. Due to the strong correlations between a user's check-in time and the corresponding check-in location, recommender systems designed for location recommendation inevitably need to consider temporal effects. In this paper, we introduce a novel location recommendation framework, based on the temporal properties of user movement observed from a real-world LBSN dataset. The experimental results exhibit the significance of temporal patterns in explaining user behavior, and demonstrate their power to improve location recommendation performance. <s> BIB008 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Personalized recommendation systems are used in a wide variety of applications such as electronic commerce, social networks, web search, and more. Collaborative filtering approaches to recommendation systems typically assume that the rating matrix (e.g., movie ratings by viewers) is low-rank. In this paper, we examine an alternative approach in which the rating matrix is locally low-rank. 
Concretely, we assume that the rating matrix is low-rank within certain neighborhoods of the metric space defined by (user, item) pairs. We combine a recent approach for local low-rank approximation based on the Frobenius norm with a general empirical risk minimization for ranking losses. Our experiments indicate that the combination of a mixture of local low-rank matrices each of which was trained to minimize a ranking loss outperforms many of the currently used state-of-the-art recommendation systems. Moreover, our method is easy to parallelize, making it a viable approach for large scale real-world rank-based recommendation systems. <s> BIB009 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> The rapidly growing of Location-based Social Networks (LBSNs) provides a vast amount of check-in data, which enables many services, e.g., point-of-interest (POI) recommendation. In this paper, we study the next new POI recommendation problem in which new POIs with respect to users' current location are to be recommended. The challenge lies in the difficulty in precisely learning users' sequential information and personalizing the recommendation model. To this end, we resort to the Metric Embedding method for the recommendation, which avoids drawbacks of the Matrix Factorization technique. We propose a personalized ranking metric embedding method (PRME) to model personalized check-in sequences. We further develop a PRME-G model, which integrates sequential information, individual preference, and geographical influence, to improve the recommendation performance. Experiments on two real-world LBSN datasets demonstrate that our new algorithm outperforms the state-of-the-art next POI recommendation methods. <s> BIB010 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy. 
<s> BIB011 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> The rapid urban expansion has greatly extended the physical boundary of users' living area and developed a large number of POIs (points of interest). POI recommendation is a task that facilitates users' urban exploration and helps them filter uninteresting POIs for decision making. While existing work on POI recommendation in location-based social networks (LBSNs) discovers the spatial, temporal, and social patterns of user check-in behavior, the use of content information has not been systematically studied. The various types of content information available on LBSNs could be related to different aspects of a user's check-in action, providing a unique opportunity for POI recommendation. In this work, we study the content information on LBSNs w.r.t. POI properties, user interests, and sentiment indications. We model the three types of information under a unified POI recommendation framework with the consideration of their relationship to check-in actions. The experimental results exhibit the significance of content information in explaining user behavior, and demonstrate its power to improve POI recommendation performance on LBSNs. <s> BIB012 </s> A Survey of Point-of-interest Recommendation in Location-based Social Networks <s> Ranking-based Model <s> Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different times. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms the state-of-the-art successive POI recommendation model by about 20% in Precision@5 and Recall@5. <s> BIB013
Several ranking-based models BIB010 BIB011 BIB013 have recently been proposed for POI recommendation. Most previous methods attempt to estimate a user's check-in probability over POIs BIB005 BIB008 BIB012 . However, for the POI recommendation task, what matters is not the predicted check-in probability itself but the preference order over POIs. Prior work has shown that learning the order is more effective for recommendation than regressing the actual values BIB006 BIB009 BIB002 BIB003 BIB007 . The Bayesian personalized ranking (BPR) loss BIB006 and the weighted approximate rank pairwise (WARP) loss BIB002 BIB003 are two popular criteria for learning such an order: the models in BIB010 BIB013 are trained with the BPR loss, while Li et al. BIB011 adopt the WARP loss. Existing ranking-based models have demonstrated clear advantages in performance. Hence, learning to rank, as an important technique in information retrieval BIB001 BIB004 , is likely to be exploited further to improve POI recommendation in the future.
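To make the pairwise ranking criterion concrete, the following minimal sketch trains a plain matrix-factorization recommender with a BPR-style loss on bootstrap-sampled (user, visited POI, unvisited POI) triples. The latent dimensionality, learning rate, regularization, and toy check-in data are illustrative assumptions; ranking-based POI models such as Rank-GeoFM or STELLAR add geographical and temporal terms on top of this skeleton.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_pois, d = 500, 2000, 16            # illustrative sizes (assumption)
lr, reg = 0.05, 0.01                          # learning rate and L2 regularization
U = rng.normal(scale=0.1, size=(n_users, d))  # user latent factors
V = rng.normal(scale=0.1, size=(n_pois, d))   # POI latent factors

# Toy check-in data: user -> set of visited POI ids (stand-in for a real dataset).
checkins = {u: set(rng.choice(n_pois, size=20, replace=False)) for u in range(n_users)}

def bpr_step(u, i, j):
    """One SGD step on a (user u, visited POI i, unvisited POI j) triple.

    BPR maximizes log sigmoid(x_ui - x_uj), pushing visited POIs above unvisited ones."""
    x_uij = U[u] @ (V[i] - V[j])
    g = 1.0 / (1.0 + np.exp(x_uij))           # gradient weight of -log(sigmoid)
    u_f, vi_f, vj_f = U[u].copy(), V[i].copy(), V[j].copy()
    U[u] += lr * (g * (vi_f - vj_f) - reg * u_f)
    V[i] += lr * (g * u_f - reg * vi_f)
    V[j] += lr * (-g * u_f - reg * vj_f)

for _ in range(100_000):                      # bootstrap sampling of training triples
    u = int(rng.integers(n_users))
    i = int(rng.choice(list(checkins[u])))    # positive: a POI the user checked in at
    j = int(rng.integers(n_pois))             # negative: a sampled unvisited POI
    while j in checkins[u]:
        j = int(rng.integers(n_pois))
    bpr_step(u, i, j)

top5 = np.argsort(-(U[0] @ V.T))[:5]          # rank POIs for user 0
print(top5)
```

The WARP loss differs mainly in how negatives are sampled and weighted: it keeps drawing negatives until one violates the margin and then scales the update by an estimate of the positive POI's rank, which emphasizes accuracy at the top of the list.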
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Errors reported in 1946 by aircraft pilots using pulsed radar altimeters over Antarctic ice, coupled with results of radio-wave propagation studies in both polar areas (1946-1955), led to measurements of the electrical characteristics of thick ice at high and ultra-high frequencies. These measurements produced information relative to dielectric constants, loss factors, scattering, and interface reflection data that subsequently permitted successful radio-wave penetration measurements in continental ice to depths of several hundred feet in both the Antarctic and the Arctic (1958-1960). Results indicated clearly that low-flying pilots relying on pulsed 440-Mc altimeters in poor visibility over thick ice can be fatally misled by errors inherent in these instruments. The paper presents recent data obtained by the Signal Corps pertinent to radio-wave transparency of thick ice and snow. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Radio interferometry is a technique for measuring in-situ electrical properties and for detecting subsurface changes in electrical properties of geologic regions with very low electrical conductivity. Ice-covered terrestrial regions and the lunar surface are typical environments where this method can be applied. The field strengths about a transmitting antenna placed on the surface of such an environment exhibit interference maxima and minima which are characteristic of the subsurface electrical properties. This paper (Part I) examines the theoretical wave nature of the electromagnetic fields about various types of dipole sources placed on the surface of a low-loss dielectric half-space and two-layer earth. Approximate expressions for the fields have been found using both normal mode analysis and the saddle-point method of integration. The solutions yield a number of important results for the radio interferometry depth-sounding method. The half-space solutions show that the interface modifies the directio... <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> In dry materials, physical factors control all electrical properties. The addition of a polar liquid solvent such as water or alcohol adds a host of solvent-rock chemical interactions. These chemical interactions range from oxidation-reduction corrosion, cation exchange, and clay-organic processes at frequencies below 1 Hz to diffusion-limited relaxation around colloidal particles at frequencies up to 100 MHz. Most mixing formulas are based upon physical mixing of noninteracting materials, and they fail when chemical processes appear. If the specific chemical processes are identifiable, combined physical and chemical mixing formulas must be used. The simplest systems to model are noninteracting physical mixtures of solvents with pure silica sand. The most complicated systems are mixtures of solvents with chemically surface-reactive materials like clays and zeolites. <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract Productive interpretations of ground penetrating radar surveys require an accurate understanding of electromagnetic wave radiation, propagation, and scattering in geological materials as well as accurate knowledge of the reflection characteristics of various target anomalies embedded in such materials.
GPR responses and survey profiles are often interpreted on the basis of theoretical estimates and numerical simulation models of electromagnetic wave propagation in simplified representations of ground materials and by using idealized target contrasts and geometries. Alternatively, field experiments performed under controlled test conditions can also be effective in demonstrating GPR system performance capabilities and in providing quantitative measurements in realistic geologic formations. Experimental research at the University of Rome "La Sapienza" and at the Italian National Research Council was initiated to develop a basic understanding of the radiation and scattering characteristics of VHF pulse-mode GPR signals in earth materials and in air with emphasis on antenna ground coupling and target backscatter responses. The results of the experimental measurements conducted in air provided baseline information on the GPR system and target reflections under lossless propagation conditions. Target response measurements at various burial depths provided a systematic data base from which target responses, propagation parameters of the medium, and relevant data processing techniques were evaluated to gain useful insights into their interpretations. Other more advanced experimental tests are planned for the future. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Existing 1D and 2D models are used to simulate ground penetrating radar (GPR) field surveys conducted in a stratified limestone terrain. The 1D model gave good agreement in a simple layered section, accounting for multiple reflections, velocity variations and attenuation. The 2D F-K model used gave a good representation of the patterns observed due to edge diffraction from a fracture in limestone, although the model could not account for the attenuation caused by irregular blocks filling the fracture. <s> BIB005 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Generally, in order to detect shallow archaeological features, such as tombs, cavities, walls, etc., ground penetrating radar (GPR) data are acquired along parallel profiles. In some cases, the data collected using the GPR method are difficult to interpret owing to the presence of a low signal-to-noise (S/N) ratio. These signals can be generated by several factors that significantly influence the radar profiles. To enhance the interpretation of radar sections, three-dimensional data acquisition, radar signal processing and time-slice representation are used. The archaeological area investigated as a test site was the Sabine Necropolis (700–300 BC) at Colle del Forno (Montelibretti, Roma), believed to contain unexplored underground dromos chamber tombs. The measurements were carried out along parallel profiles in a test area, using Sir System 10 (GSSI) equipped with different antennas operating at 100, 300 and 500 MHz. The spatial interval used during the survey was 20 cm. To enhance the S/N ratio, a band-pass filter and subtraction of an average trace on the field data have been applied; furthermore, the two-dimensional migration technique was used on all collected profiles in order to remove diffraction effects. A time-slice representation technique was adopted to obtain a planimetric correlation between anomalous bodies at different depths.
The results indicate that the three-dimensional data acquisition, processing and the time slice representation can help determine the location, depth and shapes of buried features. <s> BIB006 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Subsurface georadar is a high-resolution technique based on the propagation of high-frequency radio waves. Modeling radio waves in a realistic medium requires the simulation of the complete wavefield and the correct description of the petrophysical properties, such as conductivity and dielectric relaxation. Here, the theory is developed for 2-D transverse magnetic (TM) waves, with a different relaxation function associated to each principal permittivity and conductivity component. In this way, the wave characteristics (e.g., wavefront and attenuation) are anisotropic and have a general frequency dependence. These characteristics are investigated through a plane-wave analysis that gives the expressions of measurable quantities such as the quality factor and the energy velocity. The numerical solution for arbitrary heterogeneous media is obtained by a grid method that uses a time-splitting algorithm to circumvent the stiffness of the differential equations. The modeling correctly reproduces the amplitude and the wavefront shape predicted by the plane-wave analysis for homogeneous media, confirming, in this way, both the theoretical analysis and the numerical algorithm. Finally, the modeling is applied to the evaluation of the electromagnetic response of contaminant pools in a sand aquifer. The results indicate the degree of resolution (radar frequency) necessary to identify the pools and the differences between the anisotropic and isotropic radargrams versus the source-receiver distance. <s> BIB007 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract The ground-penetrating radar (GPR) is a good candidate for the exploration of the Martian subsurface because it is smaller and lighter than seismic instruments, and, due to the lack of water in the Martian rocks, has great penetration capability. The modelling of the GPR signal response has been performed by computing the dielectric properties of each simulated layer as a linear function of porosity, known values of the solids, and the nature of the material filling the voids (water ice, carbon dioxide ice, gas, liquid water). The synthetic response was computed by reflecting ray-tracing at various peak frequencies. The complex results show that reflections are due to variations in mineralogy, porosity and pore-filling material. The reflectors produced by the reflection of the electromagnetic waves provide a picture of the geometries of the layers of the subsurface and give clues on the nature of rocks. Permafrost and liquid water can be investigated, chiefly their seasonal changes can be analysed by means of repeated profiles. The use of the GPR would be a major breakthrough in the reconstruction of the past geological history of the planet. <s> BIB008 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> In monostatic ground penetrating radar (GPR) the interface profile can be estimated from echo amplitude and time of delay (TOD) using a layer stripping inversion algorithm.
The authors' aim is to establish a reliable processing sequence for layer stripping inversion by estimating echo TODs in a way that takes into account the lateral continuity of the layers, and by tracking the corresponding interfaces. The authors first propose an algorithm for multitarget tracking and then they describe the application of detection/tracking to 1 ns pulse monostatic GPR. The system is used to estimate layer thicknesses of asphalt and concrete in pavement profiling. Detection/tracking shows a better recognition capability of the lateral continuity in near surface interfaces with respect to algorithms that employ only local detection of echoes. <s> BIB009 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> A 2.5-D and 3-D multi-fold GPR survey was carried out in the Archaeological Park of Aquileia (northern Italy). The primary objective of the study was the identification of targets of potential archaeological interest in an area designated by local archaeological authorities. The second geophysical objective was to test 2-D and 3-D multi-fold methods and to study localised targets of unknown shape and dimensions in hostile soil conditions. Several portions of the acquisition grid were processed in common offset (CO), common shot (CSG) and common mid point (CMP) geometry. An 8×8 m area was studied with orthogonal CMPs thus achieving a 3-D subsurface coverage with azimuthal range limited to two normal components. Coherent noise components were identified in the pre-stack domain and removed by means of FK filtering of CMP records. Stack velocities were obtained from conventional velocity analysis and azimuthal velocity analysis of 3-D pre-stack gathers. Two major discontinuities were identified in the area of study. The deeper one most probably coincides with the paleosol at the base of the layer associated with activities of man in the area in the last 2500 years. This interpretation is in agreement with the results obtained from nearby cores and excavations. The shallow discontinuity is observed in a part of the investigated area and it shows local interruptions with a linear distribution on the grid. Such interruptions may correspond to buried targets of archaeological interest. The prominent enhancement of the subsurface images obtained by means of multi-fold techniques, compared with the relatively poor quality of the conventional single-fold georadar sections, indicates that multi-fold methods are well suited for the application to high resolution studies in archaeology. <s> BIB010 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Ground penetrating radar (GPR) is a relatively new geophysical technique. The last decade has seen major advances and there is an overall sense of the technology reaching a level of maturity. The history of GPR is intertwined with the diverse applications of the technique. GPR has the most extensive set of applications of any geophysical technique. As a result, the spatial scales of applications and the diversity of instrument configurations are extensive. Both the value and the limitations of the method are better understood in the global user community. The goal of this paper is to provide a brief history of the method, a discussion of current trends and give a sense of future developments.
<s> BIB011 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Reconstruction of shallow stratigraphy of unconsolidated sediments is a topic of primary interest in several environmental, hydrological, geotechnical, and engineering applications. The identification of porous layers and the assessment of their saturation, the characterization of sediments, the identification of bedrock and the analysis of shallow layering are some examples of topics of primary interest in near-surface applications. Recent ground-penetrating radar (GPR) research demonstrates the excellent results that can be attained in the study of shallow stratigraphy. Complex stratigraphic structures, involving cross-stratification, conflicting dips, and rapid lateral and vertical particle-size variations pose a challenge to the application of single-fold (constant offset) GPR methods. The objectives of the present work are imaging and resolution enhancement of GPR multifold records from shallow, unconsolidated sediments. The study is based, in particular, on prestack processing and imaging of data from alluvial plain sites in northern Italy, which are characterized by different stratigraphic and sedimentological conditions. Figure 1 shows the location map of the survey. We show the results obtained on a fluvial terrace of the Isonzo River that are characterized by a complete alluvial sequence including a range of sediments (gravel to clayey loam) and range of stratigraphic structures (depositional and erosional). The water table and vadose zone are in the GPR and resistivity depth range and affect the response of the geophysical techniques, particularly the lateral and vertical resistivity and GPR velocity variations. Figure 1. Map and aerial picture of the study area. The red rectangle shows the location of the 20 × 12 m study area. The site is close to the riverbank, where the different stratigraphic units identified by the geophysical survey were identified and sampled. A Mala Geoscience GPR system was equipped with shielded 250-MHz antennae for the study. Single-fold methods were used in reconnaissance surveys at all test sites. We successively performed … <s> BIB012 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Road pavement performances are of great importance for driving comfort and safety. Monitoring and rehabilitation activities are always extremely strategic and crucial. The points of strength of advanced non-destructive techniques for road pavement monitoring essentially are: (1) reliability, (2) significance in the space domain, (3) efficiency and (4) quickness. One of the most relevant and widely used technologies is the Ground Penetrating Radar. In the field of pavement analysis its most frequent applications are the evaluation of layers thickness and voids detection. Recent experimental results put also in light the capability of Radar to identify the causes of road damages. Empirical relationships between physical and mechanical characteristics of the materials and electromagnetic parameters have been seen, established and analytical functions were proposed. Most promising and interesting evidences regard the prediction of water content. It is crucially important because water intrusion in sub-grade is one of the most important causes of loss of mechanical properties.
The empirical relationships have shown a conservative and comparable trend for different materials, status conditions and radar frequencies, but variable amplitudes. General mathematical laws could be very useful to analyze the Radar scans correctly and in a more comprehensive framework. A stochastically based correction of a semi-empirical approach is here proposed to correlate the geophysical characteristics of the pavement's materials (sub-grade) to the parameters of the empirical model. Average dimension of grains, grading, specific surface area of grains (that is related to the hygroscopic potential) and dielectric characteristics of the dry material are primarily taken into consideration. The impact of this geophysical and stochastical model on non-destructive measurements and on the pavement management is high and it is here discussed. <s> BIB013 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> We study the impedance parameters and the energy transmitted and received by a couple of antennas working in a nonhomogeneous background. The focus is on stepped frequency ground penetrating radar (SF-GPR) prospecting. In particular, we propose a reconfiguration of the GPR system versus the frequency that accounts for the background scenario, and we show that the reconfiguration can improve the frequency behavior of the antennas significantly. Tests performed on two bow-tie antennas will be shown. <s> BIB014 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract This paper deals with the design of a reconfigurable antenna that resembles a monolithic UWB bow-tie antenna for Ground Penetrating Radar (GPR) applications. In particular, the attention is focussed on the design of the balun system able to work in the frequency band 0.3–1 GHz; the effectiveness of the design is shown by examining the behaviour of the scattering parameters <s> BIB015 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract We analyze the dispersive characteristics of the electromagnetic guided waves (at georadar frequency) to infer the electrical properties of materials that constitute a layered wave guide. WARR (Wide Angle Reflection and Refraction) georadar acquisitions could be carried out in TE (Transverse Electric) configuration to collect the full wavefield at different offsets from the source. The dispersive curves of TE modes are obtained by transforming the space-time acquisition into the frequency-wavenumber domain (f-k spectrum); the relative maxima in the f-k spectrum for each frequency represent the different propagation modes. We adopt both global and local inversion algorithms for minimizing the misfit function between computed and theoretical curves in order to obtain a 1D model of the layered subsoil (thicknesses and electrical permittivity). We perform a multimodal and multilayer inversion of the dispersive events.
The results of two field cases will be discussed; the first one refers to the propagation in a confined waveguide (layered subsoil) and the other in a leaky waveguide (snow cover on a glacier). <s> BIB016 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> The Romano-British site of Barcombe in East Sussex, England, has suffered heavy postdepositional attrition through reuse of the building materials and the effects of ploughing. A detailed GPR survey of the site was carried out in 2001, with results, achieved by usual radar data processing, published in 2002. The current paper reexamines the GPR data using a microwave tomography approach, based on a linear inverse scattering model, and a 3D visualization that makes it possible to improve the definition of the villa plan and reexamine the possibility of detecting earlier prehistoric remains. <s> BIB017 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> M <s> Abstract Corrosion associated with reinforcing bars is the most significant contributor to bridge deficiencies. The corrosion is usually caused by moisture and chloride ion exposure. The reinforcing bars are attacked by corrosion and yield expansive corrosion products. These oxidation products occupy a larger volume than the original intact steel and internal expansive stresses lead to cracking and debonding. There are some conventional inspection methods for the detection of the reinforcing bar's corrosion, but they can be invasive and destructive, often laborious, require lane closure, and are difficult or unreliable for any quantification of corrosion. For these reasons, bridge engineers often prefer to use the ground penetrating radar (GPR) technique. In this work a novel numerical approach for three-dimensional tracking and mapping of cracks in the bridge is proposed. The work starts from some interesting results based on the use of the 3D imaging technique in order to improve the potentiality of the GPR to detect voids, cracks or buried objects. The numerical approach has been tested on data acquired on a bridge by using a pulse GPR system specifically designed for bridge deck and pavement inspection. The equipment integrates two arrays of Ultra Wide Band ground coupled antennas, having a main working frequency of 2 GHz. The two arrays use antennas arranged with different polarizations. The cracks, often associated with moisture increase and higher values of the dielectric constant, produce a non-negligible increase of the signal amplitude. Following this, the algorithm, organized in preprocessing, processing and postprocessing stages, analyzes the signal by comparing the value of the amplitude all over the domain of the radar scan. <s> BIB018
Many efforts from several scientific disciplines have been devoted over the years to identifying an effective technique capable of interpreting the hidden response of the ground reliably, and different methods of inspection have been developed accordingly. No single answer has satisfied this issue, since a fair number of techniques have proven to be overall suited for this purpose. In this framework, ground-penetrating radar (GPR) is nowadays considered one of the most powerful geophysical nondestructive tools, and it has gained considerable interest among scientists and engineers thanks to the wide range of expertise and applications that can be covered. GPR is intrinsically a technology oriented toward applications, whose structure and electronics are relatively variable according to the target characteristics. Basically, structures and changes in material properties can be detected by GPR through the use of electromagnetic (EM) fields, which penetrate lossy dielectric materials down to the depth at which they are absorbed. The technique is based on the scattering and/or reflection of EM waves by changes in impedance BIB011 . The recognition of the signal is relatively easy, as the return signal is shaped very similarly to the emitted signal. The depth, shape, and EM properties of the scattering or reflecting object affect the time delay, as well as the differences in phase, frequency, and amplitude. Going through the history of GPR technology and its use worldwide, one of the first applications can be traced back to the first half of the twentieth century and dealt with the use of radio wave propagation above and along the surface of the Earth BIB011 . The first documented application was later performed by El Said , who attempted to identify the water table depth in the Egyptian desert by knowing the distance between the receiver and the transmitter and measuring the time delay of the received signal. Over time, this technology witnessed great development within several different fields of application, spanning from demining BIB001 to lunar explorations BIB002 , and including glaciology, archaeology, geology BIB003 , and, of course, civil engineering. This paper aims at reviewing the state of the art on the use of GPR in Italy, from the beginning up to the most recent applications. The Italian case is worth examining in depth as one comprehensive large-scale case study, wherein the complexity of territorial, naturalistic, historical, cultural, and socioeconomic features has effectively met the flexibility and high potential offered by GPR technology. First, the heterogeneity of its territory offers direct applications for GPR in a large range of fields, including geology, seismology, hydraulics, and glaciology. Besides, hosting the highest number of UNESCO World Heritage cultural sites [17] has generated a strong sensitivity toward heritage monitoring and the use of maintenance technologies, which has progressively grown. In addition, with the Italian road network being one of the densest worldwide in relation to the available territory , economic investments are increasingly being directed toward effective maintenance and rehabilitation policies by means of highly efficient survey technologies.
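Since GPR interpretation ultimately rests on converting an echo's two-way travel time into a depth through the EM properties of the traversed medium, the following minimal sketch (in Python) illustrates the standard low-loss approximation, in which the propagation velocity is approximated by c/√εr. The permittivity value, function name, and example figures are illustrative assumptions and do not refer to any specific system reviewed here.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_ns, rel_permittivity):
    """Estimate reflector depth from the two-way travel time of a GPR echo.

    Assumes a low-loss, non-magnetic medium, where the wave velocity is
    approximately c / sqrt(eps_r); real surveys calibrate the velocity on site.
    """
    velocity = C / math.sqrt(rel_permittivity)        # propagation velocity, m/s
    return velocity * (two_way_time_ns * 1e-9) / 2.0  # one-way distance, m

# Example: a 20 ns echo in dry sand (eps_r ~ 4, an illustrative value) -> ~1.5 m
print(f"Estimated reflector depth: {gpr_depth(20.0, 4.0):.2f} m")
```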
These main features, along with other specific Italian peculiarities that will be analyzed in this review, have acted both as an impulse for spreading this technology within the national market and overseas, and as a challenge for improving its performance through high-quality scientific contributions, making Italy one of the most active and fruitful countries in the field. From a strictly scientific perspective, the earliest documented Italian contribution falls slightly behind that of other countries such as the United States, Canada, and the United Kingdom (see Fig. 1 ), since the first works on GPR authored by Italian researchers were released in 1995 and dealt with, respectively, the analysis of signal propagation BIB004 and geological issues BIB005 . Besides, the first Italian GPR applications for archaeological purposes date back to the same period BIB006 , . In the following years, GPR-based research started to embrace more fields of application from different disciplines: geological investigations , the use of numerical simulation of the GPR signal for retrieving material responses BIB007 and for analyzing the applicability of GPR in planetary exploration BIB008 , together with the automatic detection of multilayered structures BIB009 , demonstrate the growing interest of the Italian research community in GPR technologies and methodologies, which is nowadays aligned with the highest research production standards worldwide. As shown in Fig. 2 , by 2015 Italy was among the first four countries publishing in the area of GPR. A great contribution of Italian research to the worldwide GPR community has come from the civil engineering area, wherein considerable efforts have been devoted to the use of GPR in transport infrastructures since the early noughties , and throughout the last decade BIB013 - BIB018 . Lastly, it is worth mentioning the Italian contribution to the enhancement of GPR signal processing techniques BIB012 - BIB016 , as well as to the development of innovative and high-performing hardware configurations . As for the latter, it is worth noting some important innovations, such as the development of a reconfigurable GPR system BIB014 capable of modifying the EM parameters in real time to reach higher performance BIB015 , BIB017 , and the introduction of systems equipped with antenna arrays capable of performing multi-offset measurements in real time , BIB010 . The state of the art of GPR activities in Italy is discussed in Section II according to the field of application. The selection of the papers analyzed in this section has been made according to the following criteria: 1) the number of citations collected in relation to the year of publication, as retrieved from the most recognized international scientific citation indexing services, and 2) scientific relevance, intended as the contribution brought to the international scientific community in terms of development and novelty. As for this latter point, it is worth pointing out that it has to be interpreted as reflecting the scientific judgment of the authors. Finally, Section III deals with conclusions and future perspectives on the applicability of GPR in Italy.
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Subsurface georadar is a high-resolution technique based on the propagation of high-frequency radio waves. Modeling radio waves in a realistic medium requires the simulation of the complete wavefield and the correct description of the petrophysical properties, such as conductivity and dielectric relaxation. Here, the theory is developed for 2-D transverse magnetic (TM) waves, with a different relaxation function associated to each principal permittivity and conductivity component. In this way, the wave characteristics (e.g., wavefront and attenuation) are anisotropic and have a general frequency dependence. These characteristics are investigated through a plane-wave analysis that gives the expressions of measurable quantities such as the quality factor and the energy velocity. The numerical solution for arbitrary heterogeneous media is obtained by a grid method that uses a time-splitting algorithm to circumvent the stiffness of the differential equations. The modeling correctly reproduces the amplitude and the wavefront shape predicted by the plane-wave analysis for homogeneous media, confirming, in this way, both the theoretical analysis and the numerical algorithm. Finally, the modeling is applied to the evaluation of the electromagnetic response of contaminant pools in a sand aquifer. The results indicate the degree of resolution (radar frequency) necessary to identify the pools and the differences between the anisotropic and isotropic radargrams versus the source-receiver distance. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract Feasibility and potential of tomography by Ground Penetrating Radar are investigated through experiments on laboratory models. The aim is the development of radar tomography procedures for inspection of structures like walls or pillars in historical buildings. Two different approaches are explored to satisfy high-resolution requirements. The first approach improves the results of classical traveltime (TT) and amplitude tomography (AT) on thin straight or curved rays through a progressive reduction of the null space of the problem. TT is a quantitative tool based on the thin ray assumption that allows a good tradeoff between robustness and resolution. AT is as robust as TT, but its results have only qualitative contents, since the energy transferred to the medium is basically unknown and the scattering effects are not taken into account. In the second approach, GPR is considered as a diffracting source, so that migration (MIG) and diffraction tomography (DT) are applied to overcome the geometrical optic approximations. While DT is in principle the best tool to invert the scattered field and to achieve the maximum resolution, MIG can be a more robust solution that requires less preprocessing of the data. All these advantages and drawbacks of the different approaches are discussed with some examples on synthetic and real data. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Ground penetrating radar (GPR) is a nondestructive measurement technique, which uses electromagnetic waves to locate targets or interfaces buried within a visually opaque substance or Earth material. GPR is also termed ground probing, surface penetrating (SPR), or subsurface radar. 
A GPR transmits a regular sequence of low-power packets of electromagnetic energy into the material or ground, and receives and detects the weak reflected signal from the buried target. The buried target can be a conductor, a dielectric, or combinations of both. There are now a number of commercially available equipments, and the technique is gradually developing in scope and capability. GPR has also been used successfully to provide forensic information in the course of criminal investigations, detect buried mines, survey roads, detect utilities, measure geophysical strata, and in other applications. ::: ::: ::: Keywords: ::: ::: ground penetrating radar; ::: ground probing radar; ::: surface penetrating radar; ::: subsurface radar; ::: electromagnetic waves <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract The radar technology, used to perform investigations on the civil buildings, derives from that used for investigations of the ground known with the name of Georadar. This is diffusing rapidly among the investigation methodology not destructive in the field of the structural engineering. It is based on the sending of electromagnetic waves of very short length and the recording of the time of arrival and of the breadth of any signals reflected on the interface between materials with a different dielectric constant. The aim of this paper is to present the operating methodologies and the results achieved by the application in the field of radar methodologies to map utilities, and for applications to civil building with special regard to the determination of the intern morphology, to the lack of homogeneity research and defectiveness and to the determination of the location of the steel reinforcements. Specifically, the system used, made up of one apparatus of field acquisition and another of delayed processing, seems to be able to provide good planimetric and three-dimensional restitution with regard to location and placement. In this paper, special attention has been paid to the processing of the acquired data and on the interpretation of experimental tests conducted on a civil building. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> The identification of vadose zone flow parameters and solute travel time from the surface to the water table are key issues for the assessment of groundwater vulnerability. In this paper we use the results of time-lapse monitoring of the vadose zone in a UK consolidated sandstone aquifer using cross-hole zero-offset radar to assess and calibrate models of water flow in the vadose zone. The site under investigation is characterized by a layered structure, with permeable medium sandstone intercalated by finer, less permeable, laminated sandstone. Information on this structure is available from borehole geophysical (gamma-ray) logs. Monthly cross-hole radar monitoring was performed from August 1999 to February 2001, and shows small changes of moisture content over time and fairly large spatial variability with depth. One-dimensional Richards’ equation modeling of the infiltration process was performed under spatially heterogeneous, steady state conditions. Both layer structure and Richards’ equation parameters were simulated using a nested Monte Carlo approach, constrained via geostatistical analysis on the gamma-ray logs and on a priori information regarding the possible range of hydraulic parameters. 
The results of the Monte Carlo analysis show that, in order to match the radar-derived moisture content profiles, it is necessary to take into account the vertical scale of measurements, with an averaging window size of the order of the antenna length and the Fresnel zone width. Flow parameters cannot be uniquely identified, showing that the system is over parameterized with respect to the information content of the (nearly stationary) radar profiles. Estimates of travel time of water across the vadose zone are derived from the simulation results. <s> BIB005 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract Current approaches to the reconstruction of the geometry of fluvial sediments of Quaternary alluvial plains and the characterization of their internal architecture are strongly dependent on core data (1-D). Accurate 2-D and 3-D reconstructions and maps of the subsurface are needed in hydrostratigraphy, hydrogeology and geotechnical studies. The present study aims to: 1) improve current methods for geophysical imaging of the subsurface by means of VES, ERGI and GPR data, and calibration with geomorphological and geological reconstructions, 2) optimize the horizontal and vertical resolution of subsurface imaging in order to resolve sedimentary heterogeneity, and 3) check the reliability/uncertainty of the results (maps and architectural reconstructions) by comparison with exposed analogues. The method was applied to shallow (0 to 15 m) aquifers of the fluvial plain of southern Lombardy (Northern Italy). At two sites we studied fluvial sediments of meandering systems of the Last Glacial Maximum and post-glacial historical age. These sediments comprise juxtaposed and superimposed gravel–sand units with fining-upward sequences (channel-bar depositional elements), which are separated by thin and laterally discontinuous silty and sandy clay units (overbank and flood plain deposits). The sedimentary architecture has been studied at different scales in the two areas. At the scale of the depositional system, we reconstructed the subsurface over an area of 4 km 2 to a depth of 18 m (study site 1). Reconstructed sequences based on 10 boreholes and water-well stratigraphic logs were integrated with the interpretation of 10 vertical electrical soundings (VES) with Schlumberger arrays and 1570 m long dipole–dipole electrical resistivity ground imaging profiles (ERGI). In unsaturated sediments, vertical and horizontal transitions between gravel–sand units and fine-grained sediments could be mapped respectively at the meter- to decameter scale after calibration of the VES with borehole data. Similar information could be obtained in waterlogged sediments, in which the largest units could be portrayed and the lateral continuity of major hydrostratigraphic units could be assessed. Maps of apparent resistivity were combined with sand-to-clay ratio maps obtained from stratigraphic data, which substantially increased their quality. ERGI profiles added substantial information about the horizontal transitions between fine- and coarse-grained units. At the scale of depositional elements (channel-bar systems) we studied quarry exposures, over an area of about 4000 m 2 , down to 8 m below ground level (study site 2). In this case, facies analysis was performed on progressing quarry faces and integrated with a network of 165 m long ERGI profiles and 1100 m long ground-penetrating radar (GPR) profiles. 
Channel boundaries and accretion surfaces of point bars were resolved by both GPR and ERGI, which permitted 3-D mapping of these surfaces. Comparison between the results obtained for the two study sites demonstrates that integration of sedimentological data with geophysical imaging (ERGI and VES) enables the identification of stratigraphic units at the scale of depositional elements. Moreover, fining-upward trends and other internal features of the deposits, such as the transitions from coarse to fine-grained sediments within channel-bar complexes, could be resolved. Hence, the combination of sedimentological and geophysical methods provides a more accurate 3-D reconstruction of hydrostratigraphically significant sedimentary units compared to reconstructions based solely on borehole/point data. <s> BIB006 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> In recent years, innovative strategies such as inverse-scattering or data fusion have been suggested for the processing of GPR datasets in complex scenarios. In this framework, high-resolution concrete inspections are a challenge regarding the treatment of radar data because of the size of the datasets and the complex structures involved. In addition, the achievable depth of inspection is in many cases restricted to unacceptable limits because of the material properties of concrete and the “masking effect” of the upper layers of rebar. Thus, the application of innovative approaches to high-resolution concrete data seems to suggest itself. In this framework, this work deals with the processing of a high-resolution, dataset acquired on a concrete retaining wall via an inverse scattering technique. In particular, we show how the adoption of a strategy based on signal processing techniques and an inverse scattering approach is able to provide the mapping of the two layers of rebar. <s> BIB007 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> The knowledge of moisture content changes in shallow soil layers has important environmental implications and is fundamental in fields of application such as soil science. In fact, the exchange of energy and water with the atmosphere, the mechanisms of flood generation as well as the infiltration of water and contaminant into the subsurface are primarily controlled by the presence of water in the pores of shallow soils. At the same time, the estimation of moisture content in the shallow subsurface is a difficult task. Direct measurements of water content require the recovery of soil samples for laboratory analyses: sampling is invasive and often destructive. In addition, these data are generally insufficient to yield a good spatial coverage for basin-scale investigations. In-situ assessment of soil-moisture contents, possibly at the scale of interest for distributed catchment-scale models, is therefore necessary. The goal of this paper is to assess the information contained in surface-to- surface GPR surveys for moisture content estimation under dynamic conditions. GPR data are compared against and integrated with TDR (Time Domain Reflectometry) data. TDR and surface-to-surface GPR data act at different spatial scales and two different frequency ranges. TDR, in particular, is widely used to estimate soil water content, e.g. converting bulk dielectric constant into volumetric water content values. 
GPR used in surface-to-surface configuration has been used increasingly to quickly image soil moisture content over large areas. Direct GPR wave velocity is measured in the ground. However, in the presence of shallow and thin low-velocity soil layers, such as the one generated by an infiltrating water front, dispersive, guided GPR waves are generated and the direct ground wave is not identifiable as a simple arrival. Under such conditions, the dispersion relation of guided waves can be estimated from field data and then inverted to obtain the properties of the guiding layers. In this paper, we analyze the GPR and TDR data collected at an experimental site of the University of Turin, during a controlled infiltration experiment. <s> BIB008 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> The possibility of material characterization through the GPR measurements, taking into account the integration with the ultrasonic technique, has been studied and possible relationships between the permittivity of materials and their bulk density are discussed. We present here two different approaches. The first one describes an attempt to correlate the mechanical strength of concrete (as well the ultrasonic velocity) with the permittivity of the material. A series of samples of concrete, characterized by different material properties, were used for georadar and ultrasonic measures, seeking correlations among experimental data. The second approach illustrates the comparison between GPR and ultrasonic techniques to detect anomalies within the concrete. A 3D tomography was performed with ultrasonic and GPR measures on a laboratory model and the data obtained are here compared. <s> BIB009 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Ground Penetrating Radar (GPR) can assist decision making in a number of fields by enhancing our knowledge of subsurface features. Non-destructive investigations and controls of civil structures are improving day by day, however the scientific literature reports only a few documented cases of GPR applications to the detection of voids and discontinuities in hydraulic defense structures such as river embankments and levee systems. We applied GPR to the monitoring of river levees for detecting animal burrows, which may trigger levee failures by piping. The manageability and the non-invasiveness of GPR have resulted to be particularly suitable for this application. First because GPR is an extensive investigation method that enables one to rapidly cover a wide area, locating voids that are difficult and costly to locate using other intrusive methods. Second, GPR returns detailed information about the possible presence of voids and discontinuities within river embankments. We document a series of successful GPR applications to detect animal burrows in river levees. <s> BIB010 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> CPTI11 updates and improves the 2004 version of CPTI with respect to background information and structure. 
It is based on updated macroseismic (DBMI11; Locati et al., 2011) and instrumental databases; it contains records of foreshocks and aftershocks; for some offshore events, macroseismic earthquake parameters have been determined by means of the method by Bakun and Wentworth (1997); when both macroseismic and instrumental parameters are available, the two determinations and a default one are provided (in this case, the epicentre is selected according to expert judgement, while Mw is obtained as a weighted mean); for some events, whose macroseismic data are poor, no macroseismic parameters have been determined. CPTI11 does not include the results of some methodological developments performed in the frame of the EC project “SHARE”. It does not consider the information background provided by: Molin et al. (2008); Camassi et al. (2011); recent studies on individual earthquakes; ECOS 2009 (Faeh et al., 2011) and SisFrance, 2010, yet, which will be considered in the next version. The area covered by CPTI11 is slightly reduced with respect to the one of CPTI04 (Fig. 1). The catalogue is composed of two sections: the main one (1000-2006) and the “Etna” earthquakes, for which a specific calibration is used for determining earthquake parameters. Appendix 4 supplies the list of the events which were included in CPTI04 but not in CPTI11 and the relevant explanation. <s> BIB011 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> We present the results of GPR surveys performed to identify the foundation plinths of 12 buildings of a school, whose presence is uncertain since the structural drawings were not available. Their effective characterization is an essential element within a study aimed at assessing the seismic vulnerability of the buildings, which are non-seismically designed structures, located in an area classified as a seismic zone after their construction. Through GPR profiles acquired by two 250 MHz antennas, both in reflection mode and in a WARR configuration, the actual geometry and depth of the building plinths were successfully identified, limiting the number of invasive tests necessary to validate the GPR data interpretation, thus enabling the choice of the most suitable sites that would not alter the serviceability of the structure. The collected data were also critically analysed with reference to local environmental noise that, if causing reflections superimposed on those of the subsoil, could undermine the success of the investigation. Due to the homogeneity of the ground, the processing and results relative to each pair of profiles carried out for all of these buildings is very similar, so the results concerning only two of them are reported. <s> BIB012 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract In this work, three different techniques, namely time domain reflectometry (TDR), ground penetrating radar (GPR) and electrical resistivity tomography (ERT) were experimentally tested for water leak detection in underground pipes. Each technique was employed in three experimental conditions (one laboratory or two field experiments), thus covering a limited but significant set of possible practical scenarios. Results show that each of these techniques may represent a useful alternative/addition to the others. 
Starting from considerations on the obtained experimental results, a thorough analysis on the advantages and drawbacks of the possible adoption of these techniques for leak detection in underground pipes is provided. <s> BIB013 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Abstract Recent flood events in Northern Italy (particularly in the Veneto Region) have brought river embankments into the focus of public attention. Many of these embankments are more than 100 years old and have been repeatedly repaired, so that detailed information on their current structure is generally missing. The monitoring of these structures is currently based, for the most part, on visual inspection and localized measurements of the embankment material parameters. However, this monitoring is generally insufficient to ensure an adequate safety level against floods. For these reasons there is an increasing demand for fast and accurate investigation methods, such as geophysical techniques. These techniques can provide detailed information on the subsurface structures, are non-invasive, cost-effective, and faster than traditional methods. However, they need verification in order to provide reliable results, particularly in complex and reworked man-made structures such as embankments. In this paper we present a case study in which three different geophysical techniques have been applied: electrical resistivity tomography (ERT), frequency domain electromagnetic induction (FDEM) and Ground Penetrating Radar (GPR). Two test sites have been selected, both located in the Province of Venice (NE Italy) where the Tagliamento River has large embankments. The results obtained with these techniques have been calibrated against evidence resolving from geotechnical investigations. The pros and cons of each technique, as well as their relative merit at identifying the specific features of the embankments in this area, are highlighted. The results demonstrate that geophysical techniques can provide very valuable information for embankment characterization, provided that the data interpretation is constrained via direct evidence, albeit limited in space. <s> BIB014 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 1) Structures: <s> Volumetric water content evaluation in structures, substructures, soils, and subsurface in general is a crucial issue in a wide range of applications. The main weaknesses of subsurface moisture sensing techniques are usually related both to the lack of cost-effectiveness of measurements, and to unsuitable support scales with respect to the extension of the surface to be investigated. In this regard, ground-penetrating radar (GPR) is an increasingly used non-destructive tool specifically suited for characterization and imaging. Several GPR techniques have been developed for different application purposes. Moisture evaluation in concrete is important for diagnosing structures at early stages of deterioration, as water contributes to the transfer of degrading and corrosive agents e.g., chloride. Traditionally, research efforts have been focused on the processing of GPR signal in time domain, although more recent studies are being increasingly addressed towards frequency domain analysis, providing additional information on moisture content in concrete. 
Concerning the evaluation of subsurface soil water content, different models ranging from empirical to theoretical are used for converting permittivity values into moisture. In this regard, two main GPR approaches are commonly employed for permittivity evaluation in time-domain measurements, namely, the ground wave method and the reflection method. Furthermore, the use of borehole transmission measurements, traditional off-ground methods, and of an inverse modelling approach allowing for a full waveform inversion of radar signals have been developed in the past decade. More recently, a self-consistent approach based on the Rayleigh scattering theory has also allowed the direct evaluation of moisture content from frequency spectra analysis. <s> BIB015
The use of GPR technology in structural engineering is nowadays established and wide-ranging. It is worth mentioning the location of reinforcing bars and metallic conduits, the assessment of concrete lining thicknesses, the investigation of high-moisture spots in bearing structures, the detection of voids and cracks, the assessment of rebar sizes, and the three-dimensional (3-D) reconstruction of detailed structural elements BIB003 . When considering the Italian GPR-related contribution in this area, the impact of the nature of the Italian territory must be mentioned. Indeed, statistics from seismic databases have identified Italy as the most seismically active country of the Mediterranean area BIB011 . Accordingly, it is not surprising that one of the main focuses of the Italian research community in this field is the seismic evaluation of structural elements, in terms of both prevention and damage diagnostics. Concerning seismic prevention, Barrile and Pucinotti BIB004 developed a thorough work mainly focused on the two-dimensional (2-D) and 3-D reconstruction of structural elements in a reinforced concrete structure. To this purpose, a ground-coupled pulsed GPR system with a 1600 MHz central frequency antenna was employed on a number of beams and columns of a reinforced concrete structure. Concerning the reconstruction of punctual structural elements, the work by De Domenico et al. BIB012 , focused on the exact location of the foundation plinths, is also worth mentioning. In the study by Valle et al. BIB002 , the authors compared two different approaches to improve the resolution of radar surveys, using real and synthetic data on structural elements such as walls and pillars. First, travel-time and amplitude tomography methods were applied; then, migration and diffraction tomography were performed. According to the results achieved, the authors were able to single out the advantages and drawbacks of the proposed approaches. With similar purposes, Soldovieri et al. BIB007 used a frequency-domain inverse scattering approach based on a linear model of the EM scattering. The goal of this study was to overcome the issue of the relationship between the wavelength and the dimension of the scatterer. The authors analytically assessed the capability of the linear inverse model in terms of scatterer imaging, by identifying the optimal frequency step and the diffraction tomography arguments. The research was supplemented by several reconstructed scenarios related to synthetic and experimental data simulating real environmental conditions. With regard to damage assessment, a similar approach specifically focused on crack characterization was later tested by Bavusi et al. in the town of L'Aquila, Italy. By using tomographic techniques, the authors were able to reconstruct different lines of reinforcing steel bars, as well as internal defects, in a number of structural elements damaged by the well-known seismic event recorded in the territory of L'Aquila in April 2009. A further crucial issue affecting a considerable number of buildings, structures, and infrastructures in Italy is of course related to their aging. It is worth recalling that a great part of the Italian highway network was built in the 1970s [48] , as a consequence of the economic growth dating back to the early 1960s.
This has resulted in a considerable stock of concrete structures that is nowadays approaching 50 years of service, thereby requiring major maintenance and rehabilitation activities. In addition, the economic prosperity of the 1960s generated a significant rise in the level of urbanization, often without any control or regulation, with the total number of buildings, mainly made of concrete, passing from 10.7 million in 1951 up to 19.7 million in 1991. This has implied a general need nowadays for effective and efficient concrete inspections in Italy, toward which several GPR-based research activities have in turn been oriented. To this purpose, Capizzi et al. BIB009 aimed at evaluating the capability of GPR in assessing the strength of concrete and in reconstructing buried objects, in comparison with ultrasound (US) techniques. A polyvinyl chloride (PVC) pipe was positioned inside the concrete sample. By means of GPR tomography techniques, the authors were able to reliably and efficiently reconstruct the cavity in the sample. The strength characterization of the concrete, mainly consisting in the correlation between permittivity and compressive strength, was postponed to further investigation. 2) Hydraulics: With regard to the application of GPR in hydraulic engineering, remarkable international efforts have been devoted to a wide range of research works and case studies, spanning from basic research investigations up to the management and protection of water resources in major civil engineering works. Above all, we can cite the reconstruction of sewer lines, the location of underground storage tanks and the mapping of water tables, up to the evaluation of moisture in various soil types and construction materials at several scales of investigation, using different GPR systems and signal processing techniques , BIB015 . From a hydrological perspective, the Italian territory is known to be extremely peculiar. A first point of uniqueness consists in the high percentage of potable water withdrawn from aquifers, which amounts to 85.6% of the total available [52] . This implies serious issues related both to the quality control of water for the safety of the users' health, and to the lowering of the ratio between the water leaked during conveyance and the amount of water withdrawn. In this framework, several research activities focused on the application of GPR for characterizing aquifers and detecting leakages in water pipes have been developed. Beserzio et al. BIB006 have successfully attempted to reconstruct the geometry and architecture of the fluvial stratigraphy of the Quaternary Po Plain, Italy. With the purpose of improving current imaging methodologies, the authors compared the results obtained from several nondestructive testing (NDT) methods, namely, vertical electrical sounding (VES), electrical resistivity ground imaging (ERGI), and GPR. Strengths and limits of these techniques are discussed therein, and the potential of their integration for an accurate 3-D reconstruction of sedimentary units is also shown. By contrast, Carcione BIB001 addressed the characterization of aquifers using a simulation approach. The author proposed a theoretical model capable of reproducing the behavior of radio waves in realistic media, by simulating reflection, refraction, and diffraction phenomena, in addition to the relaxation mechanism and the anisotropic properties of the medium investigated.
Therefore, this method was successfully applied for preliminarily assessing the saturation of a porous medium, as well as for evaluating the contamination of a sand aquifer. The infiltration process in the portion of the subsurface located above an aquifer, i.e., the vadose zone, was instead analyzed by Cassiani et al. BIB008 , by comparing data from both GPR and time domain reflectometry (TDR) measurements performed over a test site. Different central frequencies of investigation were employed therein. The results have confirmed the reliability of GPR in detecting the variation of moisture in a progressively saturated medium. With regard to applications focused on leak detection in underground pipes, it is worthwhile mentioning the study performed by Cataldo et al. BIB013 , wherein the potential of different geophysical methods suited for this purpose was evaluated. To this aim, TDR, GPR, and electrical resistivity tomography (ERT) were applied, in both laboratory and field environments, to water pipes with different types of leaks. The GPR device was equipped with a double set of antennas, with central frequencies of 200 and 600 MHz. GPR and TDR were found to be reliable tools for detecting water leakage spots. Nevertheless, the authors reported the misleading impact that some buried objects may have on the GPR signal. A further peculiarity of the Italian territory consists in the close relationship between its hydrogeological complexity and the capillary character of the transport network, which makes the management of water-retaining structures a crucial issue to be tackled. Several studies have therefore investigated the potential of GPR in assessing the status of river embankments. Di Prinzio et al. BIB010 analyzed the reliability of GPR in detecting the presence of voids and discontinuities in levees and river embankments, which effectively represents a comprehensive strategy for localizing early-stage damage. To this purpose, surveys along several kilometers of two embankments situated near the Italian town of Bologna were carried out using a GPR unit with a low central frequency of investigation, i.e., 250 MHz. The authors were able to clearly identify void spots, generally consisting of animal burrows, despite the interference of several factors affecting the quality of the data collected, such as the dependence on earlier weather conditions or the presence of vegetation over the unmaintained embankments. Moreover, it was also highlighted how the choice of a single suitable central frequency of investigation may represent a critical issue, especially when evaluating targets at different depths. Such a topic was indeed tackled by Perri et al. BIB014 , who compared data collected on the embankments of the Tagliamento River, near Venice, Italy, using a 600 MHz central frequency GPR system together with other geophysical tools. GPR has therein proved to be a relatively useful nondestructive technology, capable of supporting maintenance operations in major hydraulic engineering works. As far as the evaluation of soil moisture is concerned, Strobbia and Cassiani have tackled the topic of moisture mapping in shallow and thin low-velocity soil layers. By implementing an inverse multilayer GPR waveguide model, they endeavored to infer both the wave velocity within the medium and the layer thicknesses using a stochastic approach.
A similar statistical approach was employed in further studies, wherein low-frequency GPR systems were used to reconstruct water content profiles in soils by performing cross-borehole zero offset profiles (ZOPs) , BIB005 .
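The ZOP workflow mentioned above lends itself to a compact numerical illustration. The sketch below (Python, hypothetical numbers) follows one common processing chain: straight-ray velocity from the borehole separation and the first-arrival time, apparent permittivity from the velocity, and volumetric water content from Topp's empirical polynomial; the cited studies may well rely on different or site-calibrated petrophysical relations, so this is only an indicative example.

```python
# Minimal sketch of a cross-borehole ZOP processing chain:
# travel time -> velocity -> apparent permittivity -> volumetric water content.
# Topp's (1980) polynomial is used as one common petrophysical model; the cited
# studies may use different or site-calibrated relations. Values are hypothetical.
C0 = 0.2998  # speed of light in vacuum (m/ns)

def apparent_permittivity(separation_m: float, travel_time_ns: float) -> float:
    """Relative permittivity from the straight-ray velocity between boreholes."""
    v = separation_m / travel_time_ns  # m/ns
    return (C0 / v) ** 2

def topp_water_content(eps_r: float) -> float:
    """Volumetric water content (m3/m3) from Topp's empirical polynomial."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

if __name__ == "__main__":
    # Hypothetical ZOP reading: boreholes 4 m apart, 55 ns first-arrival time
    eps = apparent_permittivity(4.0, 55.0)
    theta = topp_water_content(eps)
    print(f"eps_r ~ {eps:.1f}, theta ~ {theta:.3f} m3/m3")  # ~17.0, ~0.31
```

Repeating the same conversion at successive depths along the boreholes is what yields the water content profiles discussed above.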
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The reasons for damage to railroad tracks often lie in the subgrade. At present investigations of ::: tracks are carried out selectively and schematically by drilling and digging (every 100 m). By using ::: the GPR it is possible to give a comprehensive assessment concerning the condition ofthe ::: complete profile ofthe track <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Monostatic ground penetrating radar (GPR) has proven to be a useful technique in pavement profiling. In road and highway pavements, layer thickness and permittivity of asphalt and concrete can be estimated by using an inverse scattering approach. Layer-stripping inversion refers to the iterative estimation of layer properties from amplitude and time of delay (TOD) of echoes after their detection. This method is attractive for real-time implementation, in that accuracy is improved by reducing false alarms. To make layer stripping useful, a multitarget detection/tracking (D/T) algorithm is proposed. It exploits the lateral continuity of echoes arising from a multilayered medium. Interface D/T means that both detection and tracking are employed simultaneously (not sequentially). For each scan, both detection of the target and tracking of the corresponding TOD of the backscattered echoes are based on the evaluated a posteriori probability density. The TOD is then estimated by using the maximum a posteriori (MAP) or the minimum mean square error (MMSE) criterion. The statistical properties of a scan are related to those of the neighboring ones by assuming, for the interface, a first-order Markov model. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> railroad track substructure condition on a continuous top-ofrail nondestructive basis. In this study, 1 GHz radar data were acquired between concrete and wood ties as well as from the ballast shoulders beyond the ends of the ties, and with multiple antenna orientations and polarizations. Automatic processing of the data was developed to quickly generate hard copy sections of radar images and for input into railroad track performance monitoring software such as ORIM. Substructure conditions were observed such as thickness of the ballast and sub ballast layers, variations in layer thickness along the track, pockets of water trapped in the ballast, and soft subgrade from high water content. In addition, locations and depths of subsurface drainage pipes, trenches, and utilities were quickly and continuously mapped. GPR data were acquired and processed from a hirail vehicle moving continuously at 10 miles per hour with radar resolution of a few inches horizontally and a fraction of an inch vertically to depths of more than six feet. The largest errors resulted from the positioning system used to locate the antennas along and across the track. Automatic modeling to determine density and water content is being developed but the uneven and rough (at radar wavelengths) air-ballast interface is a major problem in modeling the data. <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Road pavement performances are of great importance for driving comfort and safety. 
Monitoring and rehabilitation activities are always extremely strategic and crucial. The points of strength of advanced non-destructive techniques for road pavement monitoring essentially are: (1) reliability, (2) significance in the space domain, (3) efficiency and (4) quickness. One of the most relevant and widely used technologies is the Ground Penetrating Radar. In the field of pavement analysis its most frequent applications are the evaluation of layers thickness and voids detection. Recent experimental results put also in light the capability of Radar to identify the causes of road damages. Empirical relationships between physical and mechanical characteristics of the materials and electromagnetic parameters have been seen, established and analytical functions were proposed. Most promising and interesting evidences regard the prediction of water content. It is crucially important because water intrusion in sub-grade is one of the most important causes of loss of mechanical properties. The empirical relationships have shown a conservative and comparable trend for different materials, status conditions and radar frequencies, but variable amplitudes. General mathematical laws could be very useful to analyze the Radar scans correctly and in a more comprehensive framework. A stochastically based correction of semi-empirical approach is here proposed to correlate the geophysical characteristics of the pavement's materials (sub-grade) to the parameters of the empirical model. Average dimension of grains, grading, specific surface area of grains (that is related to the hygroscopic potential) and dielectric characteristics of the dry material are primarily taken into consideration. The impact of this geophysical and stochastical model on non-destructive measurements and on the pavement management is high and it is here discussed. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The possibility to estimate accurately the subsurface electric properties from ground-penetrating radar (GPR) signals using inverse modeling is obstructed by the appropriateness of the forward model describing the GPR subsurface system. In this paper, we improved the recently developed approach of Lambot et al. whose success relies on a stepped-frequency continuous-wave (SFCW) radar combined with an off-ground monostatic transverse electromagnetic horn antenna. This radar configuration enables realistic and efficient forward modeling. We included in the initial model: 1) the multiple reflections occurring between the antenna and the soil surface using a positive feedback loop in the antenna block diagram and 2) the frequency dependence of the electric properties using a local linear approximation of the Debye model. The model was validated in laboratory conditions on a tank filled with a two-layered sand subject to different water contents. Results showed remarkable agreement between the measured and modeled Green's functions. Model inversion for the dielectric permittivity further demonstrated the accuracy of the method. Inversion for the electric conductivity led to less satisfactory results. However, a sensitivity analysis demonstrated the good stability properties of the inverse solution and put forward the necessity to reduce the remaining clutter by a factor 10.
This may partly be achieved through a better characterization of the antenna transfer functions and by performing measurements in an environment without close extraneous scatterers. <s> BIB005 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Ground penetrating radar (GPR) signal processing is a nondestructive technique, currently performed by many agencies involved in road management and particularly promising for soil characteristics interpretation. The focus of this paper is to assess the reliability of an optimal signal processing algorithm for pavement inspection. Preliminary detection and subsequent classification of pavement damages, based on an automatic GPR analysis, have been performed and experimentally validated. A threshold analysis of the error is carried out to detect possible damages and check if they can be predicted, while a second threshold analysis determines the nature of the damage. An optimum detection procedure is performed. It implements the classical Neyman-Pearson radar test. All the settings needed by the procedure have been estimated from training sets of experimental measures. The overall performance has been evaluated by looking at the usual receiver's operating characteristic. The results show that a reasonable performance has been achieved by exploiting the spatial correlation properties of the received signal, obtained from an appropriate analysis of GPR images. The proposed system shows that automatic evaluation of subgrade soil characteristics by GPR-based signal analysis and processing can be considered reliable in a number of experimental cases. <s> BIB006 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Abstract The safety and operability of road networks is, in part, dependent on the quality of the pavement. It is known that pavements suffer from many different structural problems which can lead to damage to the pavement surface. To minimize the effect of these problems programmed policies for pavement management are required. Additionally a given local anomaly on the road surface can affect the safety of the road to various degrees according to the category of the road, so it is possible to set up different programmes of repair according to the different standards of road. Programmed policies for pavement management are required because of the wide structural damage which occurs to pavements during their normal operating life. This has consequences for the safety and operability of road networks. During the last decade, road networks suffered from great structural damage. The damage occurs for different reasons, such as the increasing traffic or the lack of means for routine maintenance. Many forms of damage, originating in the bottom layers are invisible until the pavement cracks. They depend on the infiltration of water and the presence of cohesive soil greatly reduces the bearing capacity of the sub-asphalt layers and underlying soils. On the basis of an in-depth literature review, an experimental survey with Ground Penetrating Radar (GPR) was carried out to calibrate the geophysical parameters and to validate the reliability of an indirect diagnostic method of pavement damage. The experiments were set on a pavement under which water was injected over a period of several hours. 
GPR travel time data were used to estimate the dielectric constant and the water content in the unbound aggregate layer, the variations in water content with time and particular areas where rate of infiltration decreases. A new methodology has been proposed to extract the hydraulic permittivity fields in sub-asphalt structural layers and soils from the moisture maps observed with GPR. It is effective at diagnosing the presence of clay or cohesive soil that compromises the bearing capacity of sub-base and induces damage. <s> BIB007 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The SAFE-RAIL system and the relevant processing unit are hereafter described as a success case for the solution of an electromagnetic inverse problem in real-time applications. The SAFE-RAIL system on-board processing unit is conceived for providing functionalities for real-time exploitation of raw data deriving from microwave sensing action through innovative GPR equipment. In particular, the main objective, as per European STREP project SAFE-RAIL statements, is focused on automatic interpretation of microwave sensed data relevant to rail-track subsurface, aiming at characterizing the ballast and sub-ballast layer properties with consequent extraction in real-time of geophysical parameters. A neural network based approach has been exploited as an efficient way for solving the inverse problem through a "learning-by-examples" approach. The capability of the SAFE-RAIL system in matching real-time performance requirements has been investigated. System operability and cost-effective implementation issues have also been deeply addressed. <s> BIB008 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Ground-penetrating radar (GPR) is a rapidly developing field that has seen tremendous progress over the past 15 years. The development of GPR spans aspects of geophysical science, technology, and a wide range of scientific and engineering applications. It is the breadth of applications that has made GPR such a valuable tool in the geophysical consulting and geotechnical engineering industries, has lead to its rapid development, and inspired new areas of research in academia. The topic of GPR has gone from not even being mentioned in geophysical texts ten years ago to being the focus of hundreds of research papers and special issues of journals dedicated to the topic. The explosion of primary literature devoted to GPR technology, theory and applications, has lead to a strong demand for an up-to-date synthesis and overview of this rapidly developing field. Because there are specifics in the utilization of GPR for different applications, a review of the current state of development of the applications along with the fundamental theory is required. This book will provide sufficient detail to allow both practitioners and newcomers to the area of GPR to use it as a handbook and primary research reference. *Review of GPR theory and applications by leaders in the field *Up-to-date information and references *Effective handbook and primary research reference for both experienced practitioners and newcomers <s> BIB009 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. 
Transport Infrastructures <s> Abstract The evaluation of the water content of unsaturated soil is important for many applications, such as environmental engineering, agriculture and soil science. This study is applied to pavement engineering, but the proposed approach can be utilized in other applications as well. There are various techniques currently available which measure the soil moisture content and some of these techniques are non-intrusive. Herein, a new methodology is proposed that avoids several disadvantages of existing techniques. In this study, ground-coupled Ground Penetrating Radar (GPR) techniques are used to non-destructively monitor the volumetric water content. The signal is processed in the frequency domain; this method is based on Rayleigh scattering according to the Fresnel theory. The scattering produces a non-linear frequency modulation of the electromagnetic signal, where the modulation is a function of the water content. To test the proposed method, five different types of soil were wetted in laboratory under controlled conditions and the samples were analyzed using GPR. The GPR data were processed in the frequency domain, demonstrating a correlation between the shift of the frequency spectrum of the radar signal and the moisture content. The techniques also demonstrate the potential for detecting clay content in soils. This frequency domain approach gives an innovative method that can be applied for an accurate and non-invasive estimation of the water content of soils – particularly, in sub-asphalt aggregate layers – and assessing the bearing capacity and efficacy of the pavement drainage layers. The main benefit of this method is that no preventive calibration is needed. <s> BIB010 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Nowadays, severe meteorological events are always more frequent all over the world. This causes a strong impact on the environment such as numerous landslides, especially in rural areas. Rural roads are exposed to an increased risk for geotechnical instability. In the meantime, financial resources for maintenance are certainly decreased due to the international crisis and other different domestic factors. In this context, the best allocation of funds becomes a priority: efficiency and effectiveness of plans and actions are crucially requested. For this purpose, the correct localisation of geotechnically instable domains is strategic. In this paper, the use of Ground-Penetrating Radar (GPR) for geotechnical inspection of pavement and sub-pavement layers is proposed. A three-step protocol has been calibrated and validated to allocate efficiently and effectively the maintenance funds. In the first step, the instability is localised through an inspection at traffic speed using a 1-GHz GPR horn launched antenn... <s> BIB011 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> A recent approach relates the shift of the frequency peak of the Ground Penetrating Radar (GPR) spectrum with the increasing of the moisture content in the soil. Theweakness characterizing this approach is represented by the needs of high resolution signals, whereas GPR spectra are affected by low resolution. The novelty introduced by this work is twofold. First, we evidence that clay content information is present in the location where the maximum amplitude of the GPR spectra occurs. 
Then, we propose three super resolution methods, namely parabolic, triangular, and sinc-based interpolators, to further refine the location of the frequency peak. In fact, it is really important to be able to find this location quite precisely, to obtain accurate estimates of clay content. We show that the peak location can be found best through sinc-interpolation in the frequency domain of the measured data. Our experimental results confirm the effectiveness of the proposed approach to resolve a frequency shift in the GPR spectrum, even for a small amount of clay. <s> BIB012 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The electric properties of multiphase aggregate mixtures are evaluated for a given mineralogic composition at frequencies between 300 kHz and 3 GHz. Two measurement techniques are employed: a coaxial transmission line and a monostatic stepped-frequency ground-penetrating radar (GPR). The effect of increasingwater content is analyzed in several sand clay mixtures. For the end-member case of maximum clay (25% in weight) and increasing water content, investigations are compared between the twomeasurement techniques. The electrical properties of materials are influenced by the amount ofwater, but clay affects the frequency dependency of soils showing distinctive features regardless of themineralogy. The microwave attenuation, expressed by the quality factor Q, is partly dependent on frequency and on the water content. The performance of one empirical and one volumetric mixing model is evaluated to assess the capability of indirectly retrieving the volumetric water content for a known mixture. The results are encouraging for applications in the field of pavement engineering with the aim of clay detection. The models used show similar behaviors, but measured data are better modeled using third order polynomial equations. <s> BIB013 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> In this paper, the correlation between the dielectric and the strength properties of unbound materials is analyzed, considering that mechanical characteristics of soil depend on particle interactions and assuming that dielectric properties of materials are related to bulk density. The work investigates this topic using ground-penetrating radar (GPR) techniques. In particular, two ground-coupled GPR are used in laboratory and in field experiments to infer the bearing ratio of soil in runway safety areas (RSA). The procedure is validated through CBR tests and in situ measurements using the light falling weight deflectometer (LFWD). A promising empirical relationship between the relative electric permittivity and the resilient modulus of soils is found. The comparison between measured and predicted data shows a reliable prediction of Young's modulus, laying the foundation for inferring mechanical properties of unbound materials through GPR measurements. <s> BIB014 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Nowadays, financial resources for maintenance have certainly decreased in many fields of application due to the Global Economic Crisis. In this context, the need for high performing inspections in pavement engineering has become a priority, and the use of non-destructive techniques has increased. 
In that respect, ground-penetrating radar (GPR) is proving to be one of the most promising tools for retrieving both physical and geometrical properties of pavements. In this study, an off-ground GPR system, 1-GHz centre frequency of investigation, was used for surveying a large-scale rural road network. Data processing was aimed at accurately identifying the geometry of pavement layer interfaces. Results showed the high effectiveness and efficiency of such a GPR system and procedure. The high productivity, approximately 160 km/day, along with the capability to identify mismatches in layer arrangement, even in the case of undisclosed defects, demonstrated the importance of such a technique in road inspections. <s> BIB015 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> It is well known that road safety issues are closely dependent on both pavement structural damage and surface unevenness, whose occurrence is often related to ineffective pavement asset management. The evaluation of road pavement operability is traditionally carried out through distress identification manuals on the basis of standardized comprehensive indexes, as a result of visual inspections or measurements, wherein the failure causes can be partially detected. In this regard, ground-penetrating radar (GPR) has proven to be over the past decades an effective and efficient technique to enable better management of pavement assets and better diagnosis of the causes of pavement failures. In this study, one of the main causes (i.e. subgrade failures) of surface damage is analyzed through finite-difference time-domain (FDTD) simulation of the GPR signal. The GprMax 2D numerical simulator for GPR is used on three different types of flexible pavement to retrieve the numerical solution of Maxwell's equations in the time domain. Results show the high potential of GPR in detecting the causes of such damage. <s> BIB016 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Ground-penetrating radar (GPR) is a wide-ranging non-destructive tool used in many fields of application including effective pavement engineering surveys. Despite the high potential and the consolidated results obtained over the past decades, pavement distress manuals based on visual inspections are still widely used, so that only the effects and not the causes of faults are generally considered. In such a context, simulation can represent an effective solution for supporting engineers and decision-makers in understanding the deep responses of both revealed and unrevealed damages. In this study, the use of FDTD simulation of the GPR signal is analyzed by simulating three different types of flexible pavement at two different center frequencies of investigation commonly used for road surveys. Comparisons with the undisturbed modelled pavement sections are carried out showing promising agreements with theoretical expectations, and good chances for detecting the shape of damages are demonstrated. <s> BIB017 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Over the last few years ground-penetrating radar (GPR) has proved to be an effective instrument for pavement applications spanning from physical to geometrical inspections of roads.
In this paper, the new challenge of inferring mechanical properties of road pavements and materials from their dielectric characteristics was investigated. A pulsed GPR system with ground-coupled antennas, 600 MHz and 1600 MHz center frequencies of investigation, was used over a 4 m×30 m test site with a flexible pavement structure. A spacing of 0.40 m between the GPR acquisition tracks was considered both longitudinally and transversely in order to configure a square regular grid mesh of 836 nodes. Accordingly, the Young's modulus of elasticity was measured on each grid node using a light falling weight deflectometer (LFWD). Therefore, a semi-empirical model for predicting strength properties of pavement was developed by comparing the observed elastic modulus and the electromagnetic response of the substructure on each grid node. A good agreement between observed and modeled values was found, thereby showing great promise for large-scale mechanical inspections of pavements using GPR. <s> BIB018 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> In order to evaluate the level of ballast fouling for Portuguese aggregates and the influence of antenna frequency on its measurement, several laboratory tests were performed on different materials. Initially the clean granitic ballast was tested in different water content conditions, from dry to soaked, in order to see the influence of water on the dielectric characteristics. The fouling of the ballast was reproduced in the laboratory by mixing the ballast with soil, mainly fine particles, in order to simulate the fouling existing in several old lines in Portugal, where the ballast was placed over the soil without any sub-ballast layer. Five different fouling levels were reproduced and tested in the laboratory at different water contents, four for each fouling level. Tests were performed with five Ground Penetrating Radar (GPR) antennas with different frequencies, three ground-coupled antennas of 400 MHz, 500 MHz and 900 MHz, and two horn antennas of 1000 MHz and 1800 MHz. In situ test pits were then used to validate the values of the dielectric constants obtained in the laboratory. The main results obtained are presented in this paper together with the troubleshooting associated with measurements on fouled ballast. This study is of interest for COST Action TU 1208. <s> BIB019 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> The characterization of shallow soil moisture spatial variability at the large scale is a crucial issue in many research studies and fields of application ranging from agriculture and geology to civil and environmental engineering. In this framework, this work contributes to the research in the area of pavement engineering for preventing damage and planning effective management. High spatial variations of subsurface water content can lead to unexpected damage of the load-bearing layers; accordingly, both safety and operability of roads become lower, thereby leading to an increase in expected accidents. A pulsed ground-penetrating radar system with ground-coupled antennas, i.e., 600-MHz and 1600-MHz center frequencies of investigation, was used to collect data in a 16 m × 16 m study site in the Po Valley area in northern Italy. Two ground-penetrating radar techniques were employed to nondestructively retrieve the subsurface moisture spatial profile.
The first technique is based on the evaluation of the dielectric permittivity from the attenuation of signal amplitudes. The dielectrics were then converted into moisture values using soil-specific coefficients from Topp's relationship. Ground-penetrating-radar-derived values of soil moisture were then compared with measurements from eight capacitance probes. The second technique is based on the Rayleigh scattering of the signal from the Fresnel theory, wherein the shifts of the peaks of frequency spectra are assumed to be comprehensive indicators for characterizing the spatial variability of moisture. Both ground-penetrating radar methods have shown great promise for mapping the spatial variability of soil moisture at the large scale. <s> BIB020 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> B. Transport Infrastructures <s> Clay content is one of the primary causes of pavement damage, such as subgrade failures, cracks, and pavement rutting, thereby playing a crucial role in road safety issues as an indirect cause of accidents. In this paper, several ground-penetrating radar methods and analysis techniques were used to nondestructively investigate the electromagnetic behaviour of sub-asphalt compacted clayey layers and subgrade soils in unsaturated conditions. Typical road materials employed for load-bearing layer construction, classified as A1, A2, and A3 by the American Association of State Highway and Transportation Officials soil classification system, were used for the laboratory tests. Clay-free and clay-rich soil samples were manufactured and adequately compacted in electrically and hydraulically isolated formworks. The samples were tested at different moisture conditions from dry to saturated. Measurements were carried out for each water content using a vector network analyser spanning the 1 GHz–3 GHz frequency range, and a pulsed radar system with ground-coupled antennas, with 500-MHz centre frequency. Different theoretically based methods were used for data processing. Promising insights are shown to single out the influence of clay in load-bearing layers and subgrade soils, and its impact on their electromagnetic response at variable moisture conditions. <s> BIB021
This Section reviews the major uses of GPR in Italy in transport engineering by fields of application, namely, roads, railways, and airports. A further subsection will be devoted to critical transport infrastructures, such as bridges and tunnels, whose strategic importance deserves a separate discussion. 1) Roads: According to Saarenketo , GPR road applications can be broadly divided into four main categories, namely: 1) surveys needed in designing new roads; 2) surveys carried out for the rehabilitation design of existing roads; 3) quality control or quality assurance surveys in road projects; and 4) surveys carried out for pavement management systems. Worldwide, there is a remarkable number of works dealing with the application of GPR in roads and streets . In Italy, it is worth noting that most of the freight and passenger transport takes place on the road. The results of national inquiries depict a broadly extended road network, with increasing traffic volumes , . Such peculiarities have favored the use of GPR in roads more than in other transport infrastructures (see Fig. 3 ), whereby a considerable number of applications can be found for subgrade soils as well as for unbound and bound pavement layers. The Italian GPR-related research focused on the assessment of the physical properties of subgrade soils and load-bearing layers has been very fruitful since the early 2000s, when Benedetto and Benedetto presented a semiempirical approach for the evaluation of the relative dielectric permittivity of subgrade soils, based on a Gauss function which takes into account the relative dielectric permittivity of both the dry and the saturated material, as well as its particle size properties. One multi-frequency GPR system with ground-coupled antennas, 600 and 1600 MHz central frequencies of investigation, was used for the laboratory tests on two soil types, which were oven-dried and progressively wetted at several known water contents up to saturation. It was observed that a mono-granular soil tends to change its relative dielectric permittivity more rapidly than a soil with a heterogeneous particle size distribution, since a faster change from a viscous to a free water state may occur. This approach was later deepened by Benedetto BIB004 , who compared the results, in terms of ε_r, obtained by testing four types of soils, with empirical and theoretical models. Starting from 2005, considerable efforts have been devoted to the GPR-based evaluation of water content in subgrade soils and unbound pavement layers. Fiori et al. investigated the relationship between the relative dielectric permittivity of soils and their volumetric water content. The effective permittivity of the soil was here derived as a function of the water content by using the effective medium approximation (EMA) technique after modeling the porous medium as a multi-indicator structure with spherical elements of variable radii R. The derived formula was tested against controlled laboratory experiments and it was shown that the approximated relationship behaves quite well in a broad range of water contents θ, with an R-squared value of 0.98. More recently, Benedetto and Pensa BIB007 have carried out a GPR-based experimental survey for calibrating a number of geophysical parameters and validating the reliability of an indirect diagnostic method for the detection of pavement damages. Water was injected into a flexible pavement structure over a period of several hours.
The dielectric constant and the water content of the unbound aggregate layer were estimated from the GPR travel-time data, together with the variations of water content over time and the critical areas with low rates of water infiltration. Such an approach has proved effective in diagnosing the presence of clay and the cohesive nature of certain soils that may compromise the bearing capacity of load-bearing layers and induce structural damage. A step beyond the common practices established in Italy and worldwide for moisture sensing with GPR in typical subgrade soils was taken in 2010 by Benedetto BIB010 , who processed the GPR signal in the frequency domain on the basis of the Rayleigh scattering principles, according to the Fresnel theory. The main assumption relies on the fact that in unsaturated soils the water droplets are capable of scattering EM waves , so that an additional shift of the central frequency of the wave spectrum adds to the one mainly related to the medium properties , . In line with this, several relationships were provided between the shift of the peak and the water content for different types of soil under controlled laboratory conditions. The approach has been validated across the whole range of investigation scales BIB011 - BIB020 , providing good results and promising applicability. Much more recently, the Italian contribution on the use of GPR for preventing structural damage in load-bearing layers has focused on the possibility of detecting and quantifying clay content BIB012 . It is well known that clay presence is closely related to moisture, due to its considerable swelling properties [73] , whereby it can significantly affect the stability of the soil behavior under loading. In this regard, Tosti et al. have employed different GPR methods and techniques to nondestructively investigate the clay content in sub-asphalt soil samples compacted in a laboratory environment. The experimental layout provided for the use of three types of soil with a progressively increasing percentage of bentonite clay, and two different GPR instruments were used for the EM measurements at each step of clay content. In particular, a ground-coupled pulsed radar system, 500 MHz central frequency, and a vector network analyzer (VNA) spanning the 1-3 GHz frequency range were here employed. The signals collected were processed using the Rayleigh scattering method, the full-wave inversion technique BIB005 , and the time-domain signal picking technique. Overall, promising results were achieved for the detection of clay. The electrical behavior of clayey soil samples was also investigated at a smaller scale by Patriarca et al. BIB013 using two measurement techniques, namely, a coaxial transmission line and a monostatic stepped frequency GPR. The effect of growing water contents was analyzed for several sand-clay mixtures. The results from the two measurement techniques were compared for the end-member case of maximum clay, namely, 25% in weight, with water contents growing progressively up to saturation. The high impact of water on the electrical properties of materials was confirmed, and the frequency dependence of the investigated soils proved to be sensitive to the presence of clay, showing distinctive features regardless of the soil mineralogy. Such results were confirmed by a similar experiment conducted by Tosti et al. BIB021 with different GPR systems.
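To make these two processing strategies more concrete, the following minimal Python sketch illustrates (i) the conversion of a layer permittivity estimate into volumetric water content through the widely used Topp et al. (1980) polynomial (the studies reviewed above often recalibrate soil-specific coefficients instead), and (ii) the location of the spectral peak of a GPR trace with a simple parabolic refinement, in the spirit of the super-resolution interpolators of BIB012. All numerical values, the Hanning taper and the function names are illustrative assumptions and do not reproduce the processing actually adopted in the cited works.

```python
import numpy as np

def topp_water_content(eps_r):
    """Volumetric water content from the empirical Topp et al. (1980) polynomial.
    eps_r may come, e.g., from travel-time or amplitude analysis of the GPR data."""
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3

def spectral_peak_ghz(trace, dt_ns, pad_factor=8):
    """Peak frequency (GHz) of the amplitude spectrum of a single GPR trace.

    Zero-padding refines the FFT grid, and a parabolic fit of the three bins
    around the maximum provides a sub-bin estimate of the peak location;
    tracking this peak along a profile reveals the moisture-related shift."""
    n = pad_factor * len(trace)
    spec = np.abs(np.fft.rfft(trace * np.hanning(len(trace)), n=n))
    freqs = np.fft.rfftfreq(n, d=dt_ns)          # in GHz, since dt is in ns
    k = int(np.argmax(spec))
    delta = 0.0
    if 0 < k < len(spec) - 1:                    # parabolic (quadratic) refinement
        a, b, c = spec[k - 1], spec[k], spec[k + 1]
        delta = 0.5 * (a - c) / (a - 2.0 * b + c)
    return freqs[k] + delta * (freqs[1] - freqs[0])

# Illustrative use with synthetic numbers only
print(topp_water_content(9.0))                   # ~0.17 for eps_r = 9
dt = 0.05                                        # 50 ps sampling step
t = np.arange(0.0, 20.0, dt)
trace = np.exp(-((t - 8.0) / 1.5) ** 2) * np.cos(2 * np.pi * 0.6 * t)
print(spectral_peak_ghz(trace, dt))              # close to 0.6 GHz
```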
Within the Italian contribution to GPR-based research focused on the bound structure of road pavements, an inverse scattering approach for pavement profiling was presented by Spagnolini and Rampa BIB002 , who determined layer thickness and permittivity of the asphalt. Benedetto et al. BIB006 proposed a study for assessing the reliability of an optimal signal processing algorithm for pavement inspections. The analyses were carried out as a function of two thresholds, with the first one set for taking into account the error of detecting possible damages and checking their predictability, and the second one for determining the nature of the damage. An optimum detection procedure implementing the classical Neyman-Pearson radar test was performed. Reasonable performance was achieved by exploiting the spatial correlation properties of the signal received, as a result of a proper analysis of the GPR images. In Tosti et al. BIB015 , an off-ground GPR system, 1 GHz central frequency of investigation, was employed for a large-scale investigation along an extra-urban road network. Homogeneous pavement sections were singled out according to a comprehensive checklist of elements of practical use for GPR end users. Useful advice on the system setup and calibration procedures is also given by the authors. The GPR system showed very high productivity and good effectiveness in detecting several causes of pavement damage. Tosti and Umiliaco BIB016 and Benedetto et al. BIB017 investigated the possibility of simulating different types of pavement damage. The authors performed finite-difference time-domain (FDTD) simulations of the GPR signal on three different types of flexible pavement using two central frequencies of investigation, i.e., 600 and 1600 MHz, commonly employed in road surveys. Regular- and irregular-shaped faults within hot-mix asphalt (HMA) layers and at the base-subbase interface, as well as potholes on the surface, were here simulated by the gprMax2D numerical simulator BIB018 . Much more recently, Tosti et al. BIB017 proposed a promising semiempirical amplitude-based model for inferring the mechanical properties of road pavements and materials from their dielectric characteristics. For calibrating the model, the authors employed ground-truth data arising from the use of a light falling weight deflectometer (LFWD). 2) Railways: GPR applications in railway engineering have experienced considerable advancement, especially since the 1990s. Overall, they can be divided into three main categories, namely: 1) ballast surveys; 2) geotechnical investigations; and 3) structural quality assurance of new nonballasted rail track beds . To the best of our knowledge, there were no significant GPR-related contributions worldwide concerning railway applications until 1994, when Göbel et al. BIB001 carried out some experimental tests to measure the ballast thickness, locate mudholes and ballast pockets, and define the soil boundaries of the subgrade. In addition, Saarenketo argued that GPR was tested in some Finnish railways in the mid-1980s, although the results were not very encouraging due to a difficult data collection process and several processing problems. GPR then started to become an acknowledged technology among railway engineers from the mid-1990s , BIB003 . According to literature statistics, Italy holds a total of 16 742 km of rail network, of which 11 931 km are electrified and 4811 km are not .
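Both the road applications discussed above and the railway ballast surveys introduced here ultimately rely on the same elementary travel-time relations, sketched below for reference: with a known (or assumed) relative permittivity, the layer thickness follows from the two-way travel time between the top and bottom reflections, and, conversely, a known thickness yields the apparent permittivity of the layer. The numerical values (an HMA permittivity of 6, a 30 cm ballast layer) are purely illustrative assumptions, not data from the cited studies.

```python
import numpy as np

C = 0.3  # free-space electromagnetic wave speed, m/ns

def layer_thickness_m(twt_ns, eps_r):
    """Layer thickness (m) from the two-way travel time (ns) between the top
    and bottom reflections and the layer relative permittivity:
    d = c * t / (2 * sqrt(eps_r))."""
    return C * twt_ns / (2.0 * np.sqrt(eps_r))

def apparent_permittivity(twt_ns, thickness_m):
    """Apparent relative permittivity of a layer of known thickness:
    eps_r = (c * t / (2 * d))^2. For railway ballast, values well above the
    clean-ballast range may hint at fouling or high moisture."""
    return (C * twt_ns / (2.0 * thickness_m)) ** 2

# Illustrative numbers only
print(layer_thickness_m(twt_ns=2.2, eps_r=6.0))              # HMA layer, ~0.13 m
print(apparent_permittivity(twt_ns=5.0, thickness_m=0.30))   # ballast, ~6.3
```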
National statistics point out that railway transport in Italy can be considered secondary to road transport, and less developed than the railway networks of other European countries. This is one of the reasons why GPR applications in Italy in this field can count on a smaller number of contributions, which in turn started later than in the rest of Europe, especially compared to Northern European countries. The Italian contribution in this area can be traced back to 1999 when IDS Ingegneria dei Sistemi carried out some pilot tests along an Italian high-speed railway track BIB008 . According to the results achieved, the same company developed an array of multi-frequency antennas with which it was possible to single out several types of damage. With similar purposes, Caorsi et al. BIB008 developed a railway ballast inspection system capable of extracting relevant geophysical parameters in real time. A neural-network-based method was exploited herein for solving the EM inverse problem through a "learning-by-examples" approach. More recently, research efforts have been devoted to the EM characterization of the ballast material, mostly to analyze its response in the case of fouling, whose occurrence leads to a drastic loss of performance. In line with this, Fontul et al. BIB019 have verified the dielectric values of the railway ballast used in Portuguese railways under controlled laboratory conditions through a multi-frequency GPR test, with the main goal of improving the GPR interpretation of the health conditions of railways. Additionally, GPR measurements and some test pits were performed in situ for validating the dielectric permittivity values of clean ballast obtained preliminarily in the laboratory. 3) Airports: The international literature on the use of GPR in airfield environments counts fewer contributions than other fields of application. Several possible applications can be broadly mentioned, namely: 1) locating voids and moisture trails in concrete runways and taxiways; 2) locating post-tensioning cables in concrete elements, such as garages or bridges within the airports; 3) the detection of voids and delamination of concrete roofs; 4) the reconstruction of the geometry of cables, conduits, and rebars in concrete pavements; 5) the location of buried utilities and their leaks; and 6) quality control and quality assurance surveys BIB009 . As of 2014, 45 international airports can be counted in Italy, serving more than 150 million passengers moving from, to, and within its territory by plane . The maintenance of airfields, and especially of runways and taxiways, is an issue increasingly perceived by airport administrations, in terms of both social and economic impacts. Many of the main international airports are adopting technologies capable of predicting effectively and reliably the evolution of damage in runway and taxiway pavements. Despite their potential, research activities in this field are still relatively few. Benedetto and Tosti BIB014 have addressed the topic of the GPR-based characterization of the strength and deformation properties of the unpaved natural soils, which constitute the so-called runway safety areas (RSA). To this purpose, deflectometric and GPR tests were carried out in both laboratory and field environment, at the Roma Urbe Airport, Rome, Italy.
The GPR device employed here was a ground-coupled 600 and 1600 MHz pulsed system, whereas information about the strength parameters was gathered in the field by using an LFWD and by performing California bearing ratio (CBR) tests in a laboratory environment. The authors first related the dielectric permittivity values of the soil investigated to its bulk density. Young's modulus of elasticity was then predicted by implementing a semiempirical model, based on theoretical arguments and validated using ground-truth data. Relatively good results were achieved, although the authors suggest the need for widening the range of surveyed materials and investigating the soil behavior under different known moisture conditions.
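As an illustration of how such GPR–deflectometer calibrations can be set up in practice, the short sketch below fits a simple linear relation between GPR-derived permittivity and LFWD-measured elastic modulus by least squares. Both the data points and the linear functional form are hypothetical placeholders: the semi-empirical models developed in the cited works are theoretically grounded and are not reproduced here.

```python
import numpy as np

# Hypothetical co-located measurements at a set of survey nodes
eps_r = np.array([5.1, 5.8, 6.4, 7.0, 7.9, 8.6, 9.2])          # GPR-derived permittivity
E_lfwd = np.array([48.0, 55.0, 63.0, 70.0, 81.0, 90.0, 97.0])  # LFWD modulus (MPa)

# Least-squares fit of an assumed linear relation E = a * eps_r + b
a, b = np.polyfit(eps_r, E_lfwd, deg=1)

def predict_modulus_mpa(eps):
    """Predict the elastic modulus (MPa) from a GPR-derived permittivity value,
    using the calibration fitted above (illustrative only)."""
    return a * eps + b

print(round(a, 2), round(b, 2), round(predict_modulus_mpa(6.0), 1))
```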
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 4) Bridges and Tunnels: <s> On request of the Italian National Electrical Agency, the Company IDROGEO carried out a G.P.R. survey inside an old water-supply tunnel 14 km long belonging to a hydroelectric power plant located in the North East of Italy. The aim of the survey was the geo-structural investigation of the rock formations surrounding the tunnel with particular interest in the mapping of cavities and fractures associated with water occurrences and circulation. A detailed investigation was also requested to detect the presence of voids at the concrete-rock interface. The tunnel crosses different rock formations belonging to the Alpine sequence with the presence of evaporitic formations affected by strong tectonic deformations. More than 7,000 meters of G.P.R. profiles were recorded by using a GSSI SIR 10 equipped with 100 and 500 MHz antennas with simultaneous data recording on two channels. The survey at 500 MHz was aimed at the precise determination of the concrete thickness and at the detection of the voids at the concrete-rock interface, whereas the use of 100 MHz transducers permitted the detection of larger unconformities and cavities up to a distance of 15-20 metres. The identified structural elements were divided into 5 groups: lack of contact and delaminations at the concrete-rock interface; geostructural elements; open fractures; voids and unconformities; and honeycomb alterations. The survey also permitted the location of some old artifacts whose position and nature were uncertain. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 4) Bridges and Tunnels: <s> Abstract An integrated interpretation was made of data, from ground penetrating radar (GPR), seismic refraction and seismic transmission tomography, collected inside the catchment tunnels of a potable water source in central Italy. Rock fracturing and obsolescence of the concrete lining in a tunnel led to a landslide that caused structural instability in the catchment work structures. To assess the stability of the rock close to the landslide, geophysical surveys were preferred to boreholes and geotechnical tests in order to avoid water pollution and the risk of further landslides. The interpretation of integrated data from seismic tomography and 200 MHz antenna GPR resulted in an evaluation of some of the elastic characteristics and the detection of discontinuities in the rock. Note also that an analysis of the back-scattered energy was required for the GPR data interpretation. The integration of seismic refraction data and 450 MHz antenna allowed us to identify the loosened zone around the tunnel and the extent of the mass involved in the cave-in, while GPR data from 225 MHz were used to evaluate the quality of contact between concrete lining and massive rock. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> 4) Bridges and Tunnels: <s> Abstract Corrosion associated with reinforcing bars is the most significant contributor to bridge deficiencies. The corrosion is usually caused by moisture and chloride ion exposure. The reinforcing bars are attacked by corrosion and yield expansive corrosion products. These oxidation products occupy a larger volume than the original intact steel and internal expansive stresses lead to cracking and debonding.
There are some conventional inspection methods for the detection of reinforcing bar corrosion, but they can be invasive and destructive, are often laborious, require lane closure, and are difficult or unreliable for any quantification of corrosion. For these reasons, bridge engineers increasingly prefer to use the ground penetrating radar (GPR) technique. In this work, a novel numerical approach for three-dimensional tracking and mapping of cracks in the bridge is proposed. The work starts from some interesting results based on the use of the 3D imaging technique in order to improve the potential of GPR to detect voids, cracks or buried objects. The numerical approach has been tested on data acquired on a bridge by using a pulse GPR system specifically designed for bridge deck and pavement inspection. The equipment integrates two arrays of Ultra Wide Band ground-coupled antennas, having a main working frequency of 2 GHz. The two arrays use antennas arranged with different polarizations. The cracks, often associated with increased moisture and higher values of the dielectric constant, produce a non-negligible increase of the signal amplitude. Following this, the algorithm, organized into preprocessing, processing and postprocessing stages, analyzes the signal by comparing the value of the amplitude all over the domain of the radar scan. <s> BIB003
Lowering the risk related to structural stability issues of transport lifelines, such as bridges and tunnels, is an important task that needs to be undertaken in order to avoid possible failures, which may lead to a loss of functionality and compromise the whole transportation network. In this framework, GPR can play an important role in monitoring and assessing these infrastructures, due to its minimal interference with traffic during measurement and testing. The potential of GPR in bridge engineering was evaluated in and BIB003 . Aiming at developing a highly reliable algorithm for the 3-D tracking of cracks within the HMA layers of a bridge deck, the authors employed a bridge-dedicated GPR system consisting of two arrays of ground-coupled antennas with a central frequency of 2 GHz, to survey several bridges in the district of Rieti, Italy. The signals collected were processed and amplified, and a 3-D matrix of signal amplitude values was then realized. Therefore, an amplitude threshold was calibrated to localize the paths of the cracks, by comparing the evidence of cracks in the field with the radar reflections and by assuming higher amplitude values to be related to cracks. Still concerning bridge applications, it is worth mentioning the work developed by Pucinotti and Tripodo in 2009 on a case-study bridge situated in the district of Reggio Calabria, Italy. To this end, different technologies were used. In particular, laser scanner technology (LST) and GPR were combined to reconstruct the surface and inner morphology of the structure, respectively. GPR showed good reliability and efficiency in determining the geometry of steel reinforcements, the lack of homogeneity, and the major damage. With respect to tunnel engineering, Cardarelli et al. BIB002 made use of GPR in an integrated approach to assess the health state of a tunnel set aside for potable water, which caved in due to a landslide. GPR and seismic surveys were carried out by taking advantage of a twin tunnel, around 15 m from the target. Three antennas were employed in a bistatic configuration, with central frequencies spanning from 200 to 450 MHz. The lower frequencies were especially useful in retrieving the number and the location of discontinuities, thereby indicating collapsed zones. The integration of GPR, seismic, and tomographic analyses helped to minimize uncertain data and to infer useful information about the structural stability of the tunnel. In line with the purposes of the former work, Piccolo and Zanelli BIB001 reconstructed the state of the geostructure surrounding the lining of a tunnel designed for potable water conduction in the North-East of Italy. The authors surveyed 7 km of tunnel lining using a pulsed GPR system with 100 and 500 MHz central frequency antennas. The higher frequency allowed monitoring the lining thickness all over the scan length, whereas the lower one allowed detecting deeper inhomogeneities, e.g., cavities and cracks, up to a distance of 20 m.
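A minimal sketch of the amplitude-threshold idea used for crack tracking on bridge decks is given below: the absolute amplitudes of a 3-D data cube (scan lines x traces x time samples) are compared against a percentile threshold, and the cells exceeding it are flagged as candidate crack locations. The percentile value and the synthetic cube are arbitrary assumptions; in the cited surveys the threshold was calibrated against cracks visible on the pavement surface.

```python
import numpy as np

def crack_candidate_mask(volume, percentile=98.0):
    """Flag high-amplitude cells of a 3-D GPR amplitude cube
    (lines x traces x samples) as candidate crack locations.

    The threshold is a percentile of the absolute amplitudes; in practice it
    should be calibrated against ground truth (e.g., cracks visible on site)."""
    amp = np.abs(volume)
    return amp > np.percentile(amp, percentile)

# Illustrative synthetic cube: background noise plus one high-amplitude anomaly
rng = np.random.default_rng(0)
cube = 0.05 * rng.standard_normal((10, 200, 256))
cube[4:6, 80:120, 100:110] += 1.0
mask = crack_candidate_mask(cube)
print(mask.sum(), "candidate cells out of", mask.size)
```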
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> C. Underground Utilities <s> Ground penetrating radar (GPR) is one of the most suitable technological solutions for timely detection of damage and leakage from pipelines, an issue of extreme importance both environmentally and from an economic perspective. However, for GPR to be effective, there is the need to design appropriate imaging strategies so as to provide reliable information. In this paper, we address the problem of imaging leaking pipes from single-fold, multi-receiver GPR data by means of a novel microwave tomographic method based on a 2D "distorted" scattering model which incorporates the available knowledge on the investigated scenario (i.e., pipe position and size). In order to properly design the features of the approach and test its capabilities in controlled but realistic conditions, we exploit an advanced, full-wave, 2.5D Finite-Difference Time-Domain forward modeling solver capable of accurately simulating real-world GPR scenarios in electromagnetically dispersive materials. By means of this latter approach, we show that the imaging procedure is reliable, allows us to detect the presence of a leakage already in its first stages of development, is robust against uncertainties and provides information which cannot be inferred from raw-data radargrams or "conventional" tomographic methods based on a half-space background. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> C. Underground Utilities <s> The realization of network infrastructures with lower environmental impact and the tendency to use less invasive digging technologies in terms of time and space of road occupation and restoration play a key role in the development of communication networks. The "low impact mini-trench" technique (addressed in ITU L.83 recommendation) requires that non-destructive mapping of buried services enhances its productivity to match the improvements of new digging equipment. Therefore, the development of a fully automated and real-time 3D GPR processing system plays a key role in overall optical network deployment profitability. We propose a novel processing scheme whose goal is the automated processing and detection of buried targets, which can be applied in real time to a 3D GPR array system (16 antennas, 900 MHz central frequency). After the standard pre-processing steps, the antenna records are continuously focused during acquisition, by means of a Kirchhoff depth-migration algorithm, to build pre-stack reflection angle gathers G(x, θ; v) at n_v different velocities. The analysis of pre-stack reflection angle gathers plays a key role in automated detection: by means of a correlation estimate computed for all the n_v reflection angle gathers, targets are identified and the best local propagation velocities are recovered. The data redundancy of 3D GPR acquisitions greatly improves the reliability of the proposed automatic detection. The proposed approach allows 3D GPR data to be processed and buried utilities to be automatically detected in real time on a laptop computer, without the need for skilled interpreters and without specific high-performance hardware. More than 100 km of acquired data prove the feasibility of the proposed approach. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> C.
Underground Utilities <s> This paper describes a project entitled 'ORFEUS', supported by the European Commission's 7th Framework development programme. Horizontal directional drilling (HDD) offers significant benefits for urban environments by minimising the disruption caused by street works. Use of the technique demands an accurate knowledge of underground utility assets and other obstructions in the drill path. This project is aimed at improving the results of a previous project developed under the 6th Framework programme; specifically it addresses some issues that were formerly unresolved, in order to produce a commercially viable product. In fact, ORFEUS activities concern research into the optimum antenna configuration, the design of an angular position sensor and a communication module, as well as the identification/validation of the most effective bore-head GPR data processing algorithms. The final system is expected to offer the operator information directly from the drilling head, in real time, allowing objects to be avoided; this is a unique feature that will enhance safety and efficiency, reduce risk, reduce the environmental impact (e.g. damage to natural habitats, lower CO2 emissions) and lead to positive economic benefits in terms of cost and time savings for the operator, manufacturers and wider supply chain. <s> BIB003
In most cases, a complex network of underground pipes carries telecommunication or electric cables, natural gas, potable water, and wastewater, but it can also develop as an underground oil pipeline or a tunnel network . Since the 1990s, several studies have been carried out worldwide for detecting and identifying underground utilities. In Italy, a general lack of regulations and technical protocols in this matter has resulted in a rather chaotic and uncontrolled use of the subsurface for the location of utilities. Consequently, it is not uncommon that roadworks are slowed down by damage to unexpected utility pipes. A first national regulatory impulse dates back to 1999, when the Italian Ministry of Public Works promulgated a directive encouraging public administrations to adopt an urban plan for the management of underground utilities. To this aim, several big Italian municipalities such as Milan and Venice have already drawn up their own urban plans. This occurrence has given an important impulse to national research in the field of GPR for detecting and classifying underground utilities. In this framework, one of the first initiatives is represented by the European co-funded GIGA and ORFEUS projects [98] , BIB003 . Among their main objectives was the design and manufacturing of an improved, user-friendly GPR capable of providing highly detailed information for the no-dig installation of gas pipelines by means of horizontal directional drilling (HDD). The possibility of gathering and interpreting the data in real time holds a crucial role in the optimization of costs and time efforts. In this sense, different studies proposing new integrated approaches for the processing of 3-D GPR data have been developed , BIB002 . These approaches make use of typical seismic algorithms, and build prestack reflection gathers through depth-migration processes at different propagation velocities. Such an operation allows estimating with high reliability both the position of scatterers and the propagation velocity of the EM wave. From a different perspective, the noninvasive and efficient detection of underground utilities can also hold a crucial environmental role. Indeed, as the cost of energy and water resources keeps rising, early-stage location of leaks in underground pipes can avoid economic and environmental waste. In such a framework, Crocco et al. BIB001 proposed a tomographic approach for detecting leaking pipes. The authors were able to obtain detailed information about metallic leaking pipes by employing a "distorted" wave scattering model and generating synthetic GPR data with a 2.5-D FDTD forward modeling solver.
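Beneath the more sophisticated migration- and tomography-based schemes reviewed above lies the elementary diffraction-hyperbola relation, which already allows the position, depth and local propagation velocity of a pipe-like target to be estimated from picked travel times. The sketch below fits this relation by non-linear least squares; the picks are synthetic, and the geometry, noise level and initial guess are illustrative assumptions rather than data from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, x0, z0, v):
    """Two-way travel time (ns) of a point diffractor at horizontal position x0
    and depth z0 (m), for antenna position x (m) and velocity v (m/ns)."""
    return 2.0 * np.sqrt((x - x0) ** 2 + z0 ** 2) / v

# Synthetic "picks" along a profile crossing a buried pipe (illustrative only)
x_picks = np.linspace(1.0, 3.0, 21)
t_clean = hyperbola(x_picks, 2.0, 0.8, 0.1)
t_picks = t_clean + 0.2 * np.random.default_rng(1).standard_normal(t_clean.size)

# Non-linear least squares recovers position, depth and velocity; the soil
# permittivity then follows from eps_r = (c / v)^2 with c = 0.3 m/ns.
(p_x0, p_z0, p_v), _ = curve_fit(hyperbola, x_picks, t_picks, p0=(1.5, 0.5, 0.12))
print(round(p_x0, 2), round(p_z0, 2), round(p_v, 3), round((0.3 / p_v) ** 2, 1))
```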
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Existing 1D and 2D models are used to simulate ground penetrating radar (GPR) field surveys conducted in a stratified limestone terrain. The 1D model gave good agreement in a simple layered section, accounting for multiple reflections, velocity variations and attenuation. The 2D F-K model used gave a good representation of the patterns observed due to edge diffraction from a fracture in limestone, although the model could not account for the attenuation caused by irregular blocks filling the fracture. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Abstract This work studies a methodology starting from georadar data that allows a semiquantitative evaluation of massive rock quality. The method is based on the concept that in good quality rock, most of the energy is transmitted, while in low quality rock, the energy is backscattered from fractures, strata joints, cavities, etc. When the energy loss due to spherical divergence and attenuation can be recovered by applying a constant spherical/exponential gain, the resulting energy function observed in the georadar section depends only on the backscattered energy. In such cases, it can be assumed that the amount of energy is an index of rock quality. Radar section interpretation is usually based on the reconstruction of reflected high-energy organized events. Thus, no consideration is given to backscattered not-organized energy produced by microfractures that greatly influences the geotechnical characteristics of the rock mass. In order to take into consideration all the backscattered energy, we propose a method based on the calculation of the average energy relative to a portion of predefined rock. The method allows a synthetic representation of the energy distributed throughout the section. The energy is computed as the sum of the square of amplitude of samplings contained inside cells of appropriate dimensions. The resultant section gives a synthetic and immediate mapping of rock quality. The consistency of the method has been tested by comparing georadar data acquired in travertine and limestone quarries, with seismic tomography and images of actual geological sections. The comparison highlights how effectively the energy calculated inside the cells give synthetic representation of the quality of rocks; this can result in maps where the high-energy values correspond to rock of poor quality and the low energy values correspond to a good quality region. The results obtained in this way can, in this case, be partly superimposed onto those of seismic tomography. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Ground penetrating radar (GPR) is a nondestructive measurement technique, which uses electromagnetic waves to locate targets or interfaces buried within a visually opaque substance or Earth material. GPR is also termed ground probing, surface penetrating (SPR), or subsurface radar. A GPR transmits a regular sequence of low-power packets of electromagnetic energy into the material or ground, and receives and detects the weak reflected signal from the buried target. The buried target can be a conductor, a dielectric, or combinations of both. 
There is now a range of commercially available equipment, and the technique is gradually developing in scope and capability. GPR has also been used successfully to provide forensic information in the course of criminal investigations, detect buried mines, survey roads, detect utilities, and measure geophysical strata, among other applications. Keywords: ground penetrating radar; ground probing radar; surface penetrating radar; subsurface radar; electromagnetic waves <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> The joint application of electromagnetic techniques for near-surface exploration is a useful tool for soil pollution monitoring and can also contribute towards describing the spatial distribution of pollutants. The results of a geophysical field survey that was carried out for characterizing the heavy metal and waste disposal soil pollution phenomena in the industrial area of Val Basento (Basilicata region, Southern Italy) are presented here. First, topsoil magnetic susceptibility measurements have been carried out for defining the spatial distribution of superficial pollution phenomena in the investigated area. Second, detailed and integrated measurements based on a high-resolution magnetic mapping and ground probing radar (GPR) profiling have been applied to investigate the subsurface in two industrial areas located in more polluted sites that were identified during the first phase. Our monitoring strategy discloses the way to rapidly define the zones characterized by high pollution levels deriving from chemical industries and traffic emissions, and to obtain information about the presence of local buried sources of contamination. <s> BIB004
The reliability of the fracture network definition is of paramount importance for several engineering and geotechnical applications, and so far, different approaches have been proposed to improve the assessment procedure. A thorough knowledge of the actual fracture system is necessary to construct an accurate geometrical model of the rock mass and to determine block size distribution within the rock body. This paper describes the integration of diverse techniques used to define the rock mass fracture pattern, focusing on the most important fracture features, which are joint orientation, spacing, and persistence. A case study in the north of Italy was selected in order to show the potential of an integrated approach where surface and subsurface investigations are coupled. The rock surface was analysed by means of both standard geological mapping and terrestrial laser scanning. Ground penetrating radar surveys were conducted to image and map the discontinuity planes inside the rock mass and to estimate fracture persistence. The results obtained from the various investigation methodologies were employed to construct a model of the rock mass. This approach may lead to a better understanding of fracture network features, usually observed only on the rock surface. A careful analysis of block size distribution in a rock body can be of valuable help in several engineering and risk mitigation applications. <s> BIB006 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> Abstract In this work we report a GPR study across a tectonic discontinuity in Central Italy. The surveyed area is located in the Castelluccio depression, a tectonic basin in the Central Apennines, close to the western border of Mt. Vettore. Its West flank is characterised by a set of W-dipping normal faults, considered active and capable of generating strong earthquakes (Mw = 6.5, Galli et al., 2008 ). A secondary fault strand, already studied with paleo-seismological analysis ( Galadini and Galli, 2003 ), has been observed in the Quaternary deposits of the Prate Pala alluvial fan. We first defined the survey site using the data available in literature and referring to topographic and geological maps, also evaluating additional methodologies, such as orthophoto interpretation and geomorphologic analysis, and integrating all the information in a GIS environment. In addition, we made extensive use of GPR modelling, reproducing the geometric characteristics of the inferred fault area and interpreting the synthetic profiles to recognise local geophysical indications of faulting on the radargrams. Finally, we performed a GPR survey employing antennas with different frequencies, to record both 2D Common Offset profiles and Common Mid Point (CMP) gathers for a more accurate velocity estimation of the investigated deposits. In this paper we focus on the evaluation of the most appropriate processing techniques and on data interpretation. Moreover we compare real and synthetic data, which allow us to better highlight some characteristic geophysical signatures of a shallow fault zone. <s> BIB007 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> D. Geology and Environment <s> High-frequency electromagnetic (EM) surveys have been shown to be valuable techniques in the study of soil water content due to the strong dependence of soil dielectric permittivity on moisture content.
This quantity can be determined by analyzing the average value of the early-time instantaneous amplitude of ground-penetrating radar (GPR) traces. We demonstrate the reliability of this approach to evaluate the shallow soil water content variations from standard fixed-offset GPR data by simulating the data over different likely EM soil conditions. A linear dipole model that uses a thin-wire approximation is assumed for the transmitting and receiving antennas. The homogeneous half-space model is used to calculate the waveform instantaneous amplitude values averaged over different time windows. We analyzed their correlation with the soil surface dielectric parameters, and we found a clear inverse linear dependence on the permittivity values. Moreover, we evaluated how different kinds of noise affect this correlation, and we determined the influence of the electrical conductivity on the trace attributes. Finally, through a two-layered medium, we estimated the effect on the GPR signal of a shallow reflector, we analyzed how its presence can introduce inaccuracies in the soil surface dielectric permittivity estimation, and we determined the best time window to minimize these errors. <s> BIB008
Over the last decades, GPR has been used for a huge number of documented applications in the geological and environmental fields. Since a great part of the Italian territory is classified as seismically active, geological hazard analysis BIB003 is one of the most important topics tackled in this field. In such a framework, the discipline of palaeoseismology [103] can play a crucial role, since it exploits signs of ancient earthquakes through stratigraphic analysis to evaluate the geological hazard of a given territory. The first recognized Italian research activity on geological issues using GPR was carried out in the 1990s by Pettinelli et al. BIB001 , with the aim of verifying the capability of one-dimensional (1-D) and 2-D EM models in reconstructing structural and stratigraphic soil features. The study gathers and cross-matches information coming from road cuts, scarp faces, and GPR surveys collected over a simple limestone stratified sequence located near the city of Rovereto, in the South-Eastern Italian Alps. Both 1-D and 2-D models proved to be in good agreement with the data collected, although the 1-D model showed more difficulty in predicting diffraction points, such as fractures in the limestone. By contrast, Orlando BIB002 focused on the detection of low-quality rock areas, under the hypothesis that the volumes of rock showing fractures and cavities backscatter the energy transmitted by a GPR system. The author employed a GPR system equipped with a 200 MHz central frequency antenna in different geological contexts located in the central part of the Apennine mountain chain. The results showed a good effectiveness of GPR in reconstructing the quality of the geological layers, according to the central frequency employed. In the same research area, it is also worth mentioning the study by Longoni et al. BIB006 . Concerning palaeoseismology, Pauselli et al. BIB005 adopted GPR techniques coupled with trench works to reach a direct and detailed level of information about palaeoseismic structures near the town of Norcia, Italy. To this purpose, a ground-coupled GPR system was equipped with two antennas, 100 and 300 MHz central frequencies of investigation, and employed over two areas located alongside the Norcia fault. GPR proved to be very effective as a complementary tool to former trench works. The authors identified great applicability of GPR for better planning of trenching sites and for enhancing the geological information collected in neighboring areas. Similar goals and methodologies were adopted with reliable outcomes in BIB007 . More insights about the GPR application in palaeoseismology can be found in . In addition to its abundant geological activity, Italy also has a strong agricultural tradition that has left, as a heritage, more than 1.6 million agricultural establishments [109] spread over the whole territory of the country. In such a framework, it is clear that agricultural water management and soil water conservation play a crucial role, with GPR being a primary tool for water content sensing, due to the influence exerted by water on the dielectric properties of soils. Di Matteo et al. BIB008 have addressed the topic of relating shallow soil water content and surface dielectric parameters by performing numerical simulations. The authors showed a high correlation especially between the dielectric constant and the average envelope amplitude of the first portion of the GPR signal.
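The early-time attribute mentioned above lends itself to a compact implementation: the instantaneous amplitude (Hilbert envelope) of each trace is averaged within a short window right after the direct wave, and lower average values are expected over wetter, higher-permittivity soils. The window limits, sampling step and synthetic traces below are illustrative assumptions and not the settings used by the cited authors.

```python
import numpy as np
from scipy.signal import hilbert

def early_time_amplitude(traces, dt_ns, t_start_ns, t_end_ns):
    """Average early-time instantaneous amplitude of each GPR trace.

    `traces` is a 2-D array (n_traces x n_samples); the Hilbert envelope is
    averaged between t_start_ns and t_end_ns, giving one attribute value per
    trace that can be mapped along the profile."""
    envelope = np.abs(hilbert(traces, axis=-1))
    i0, i1 = int(round(t_start_ns / dt_ns)), int(round(t_end_ns / dt_ns))
    return envelope[:, i0:i1].mean(axis=-1)

# Illustrative synthetic profile of 50 identical traces plus noise
rng = np.random.default_rng(3)
dt = 0.05                                              # sampling step (ns)
t = np.arange(0.0, 30.0, dt)
trace = np.cos(2 * np.pi * 0.5 * t) * np.exp(-t / 5.0)
traces = np.tile(trace, (50, 1)) + 0.02 * rng.standard_normal((50, t.size))
attr = early_time_amplitude(traces, dt, t_start_ns=0.5, t_end_ns=2.5)
print(attr.shape, float(attr.mean()))
```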
In a country like Italy, where the agricultural tradition and the need for food quality assurance meet the difficulties of managing industrial expansion and its related toxic waste, a further critical issue is how to ensure direct, rapid, and noninvasive detection of soil pollution. In this field, Chianese et al. BIB004 developed a GPR-related case study. With the purpose of characterizing the territory in terms of levels of soil pollution, the authors made use of geophysical surveys performed in the industrial area of Val Basento, in the Region of Basilicata, in Southern Italy. In more detail, different magnetic devices were employed to measure the magnetic susceptibility and the gradient of the magnetic field, whereas a GPR system equipped with 200 and 400 MHz nominal frequency antennas was used to evaluate the subsurface EM behavior of those areas with higher magnetic susceptibility values. In general, the integrated use of magnetic and EM methods allowed the detection and characterization of buried polluting objects and highly attenuating zones, probably related to polluted soils.
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> Forward modeling of ground penetrating radar is developed using exact ray-tracing techniques. Structural boundaries for a ground model are incorporated via a discrete grid with interfaces described by splines, polynomials, and in the case of special structures such as circular objects, the boundaries are given in terms of their functional formula. In the synthetic radargram method, the waveform contributions of many different wave types are computed. Using a finely digitized antenna directional response function, the radar cross-section of buried targets and the effective area of the receiving antenna can be statistically modeled. Attenuation along the raypaths is also monitored. The forward models are used: (1) as a learning tool to avoid pitfalls in radargram interpretation, (2) to understand radar signatures measured across various engineering structures, and (3) to predict the response of cultural structures buried beneath important archaeological sites in Japan. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> Subsurface georadar is a high-resolution technique based on the propagation of high-frequency radio waves. Modeling radio waves in a realistic medium requires the simulation of the complete wavefield and the correct description of the petrophysical properties, such as conductivity and dielectric relaxation. Here, the theory is developed for 2-D transverse magnetic (TM) waves, with a different relaxation function associated with each principal permittivity and conductivity component. In this way, the wave characteristics (e.g., wavefront and attenuation) are anisotropic and have a general frequency dependence. These characteristics are investigated through a plane-wave analysis that gives the expressions of measurable quantities such as the quality factor and the energy velocity. The numerical solution for arbitrary heterogeneous media is obtained by a grid method that uses a time-splitting algorithm to circumvent the stiffness of the differential equations. The modeling correctly reproduces the amplitude and the wavefront shape predicted by the plane-wave analysis for homogeneous media, confirming, in this way, both the theoretical analysis and the numerical algorithm. Finally, the modeling is applied to the evaluation of the electromagnetic response of contaminant pools in a sand aquifer. The results indicate the degree of resolution (radar frequency) necessary to identify the pools and the differences between the anisotropic and isotropic radargrams versus the source-receiver distance. <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> A 2.5-D and 3-D multi-fold GPR survey was carried out in the Archaeological Park of Aquileia (northern Italy). The primary objective of the study was the identification of targets of potential archaeological interest in an area designated by local archaeological authorities. The second geophysical objective was to test 2-D and 3-D multi-fold methods and to study localised targets of unknown shape and dimensions in hostile soil conditions. Several portions of the acquisition grid were processed in common offset (CO), common shot (CSG) and common mid point (CMP) geometry.
An 8×8 m area was studied with orthogonal CMPs thus achieving a 3-D subsurface coverage with azimuthal range limited to two normal components. Coherent noise components were identified in the pre-stack domain and removed by means of FK filtering of CMP records. Stack velocities were obtained from conventional velocity analysis and azimuthal velocity analysis of 3-D pre-stack gathers. Two major discontinuities were identified in the area of study. The deeper one most probably coincides with the paleosol at the base of the layer associated with activities of man in the area in the last 2500 years. This interpretation is in agreement with the results obtained from nearby cores and excavations. The shallow discontinuity is observed in a part of the investigated area and it shows local interruptions with a linear distribution on the grid. Such interruptions may correspond to buried targets of archaeological interest. The prominent enhancement of the subsurface images obtained by means of multi-fold techniques, compared with the relatively poor quality of the conventional single-fold georadar sections, indicates that multi-fold methods are well suited for the application to high resolution studies in archaeology. <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> Abstract This paper deals with the application of two different processing methods of the georadar data aimed at improving the results in the case of bad quality data. The georadar data are referred to two areas located in the Axum archaeological park (Ethiopia) and were acquired prior to the reinstallation of the returned Stele from Italy to the Ethiopian Government. In the area the schist formation is covered by an outcropping sandy silt formation about 6–8 m thick. The archaeological excavations, performed before the georadar data acquisition, revealed that tombs and catacombs were dug into the superficial layer. Because the complexity of the georadar data interpretation based on standard data processing, some of the collected measured data are also processed by an innovative microwave tomographic approach which permits to achieve clearer diagnostic results with respect to the classic radaristic techniques in 2D and 3D representation. We take into account the data acquired for the East stele 2 with 100 MHz antenna and in the parking area of the archaeological park with 200 MHz antenna. The data were acquired on profiles 1 m apart. Comparing the data processed with the two different approaches, we obtained an improvement of the vertical resolution and of the quality of image on time slices using the tomographic approach compared to the results obtained with the classic radar one. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> E. Archaeology <s> Abstract A fast and efficient subsurface radar imaging procedure, based on a multi-channel cart system, has been developed and tested within the framework of a large-scale archaeological investigation project in northern Italy. The tested cart comprises 14 closely-spaced dipoles, rotated by 45° with respect to the dragging direction, and allows unidirectional scanning operations. Using this approach, an area of approximately 75 000m 2 was surveyed daytime via recording of a dense grid of about 490km of radar profiles. 
Geo-referencing of the scanning trajectories was achieved operating a separate on-board differential Global Positioning System in real-time kinematic mode. In this configuration the final positioning error of the radar sweeps was less than 0.05m. The large amount of collected data, of the order of tens of GBytes, was processed, using an open-source software package, on a workstation-based environment. A set of specific codes was developed to fully automate the data processing and the image generation procedure. Critical steps during code development were the integration of positioning and radar data, the referencing of the single radar sweeps and the correction for changes in the spectral amplitude of the different channels. The processed data volume displays high signal coherency and reveals several well-defined reflectors, clearly visible both on vertical profiles and horizontal time slices. The plan of the Roman settlement could be revealed in detail proving the potential of the tested approach for assisting high-resolution archaeological investigations of large areas. <s> BIB005
GPR has earned wide acknowledgment in the archaeological community over the past decades. From the 1970s until now, burial tombs, historic buried chambers and graves, campsites, and pit dwellings have been detected through GPR methods . The interpretation of the collected GPR data has often been supported by simulation based on 2-D or 3-D models BIB001 . Statistics identify Italy as the country with the highest number of UNESCO "World Heritage" sites [114] . Nevertheless, between 2001 and 2011, the funds for culture allocated by the Ministry for Cultural Heritage and Activities suffered a reduction of about 20%. In such a frame, GPR can play a key role thanks to its well-known nondestructive and cost-effective features. It is therefore not surprising that a lively research community is focused on developing or improving methodologies for archaeological GPR surveys. Pipan et al. BIB003 performed a study addressing both the task of locating buried targets in archaeological areas and that of testing 2-D and 3-D multifold (MF) methods for characterizing the shape and dimensions of unknown objects. To this purpose, the authors thoroughly surveyed an area situated in the archaeological park of Aquileia, in Northern Italy, by means of common midpoint (CMP) analyses over a wide offset range. A 3-D MF data acquisition was finally performed, yielding an increase in the signal-to-noise ratio. This methodology provided an indication of potential archaeological targets buried below the surface. Similar goals were pursued in the work of Basile et al. , who employed GPR methods to characterize in detail the shallower high-attenuation layers of an urban area presumably hosting buried archaeological structures, located near the town of Lecce, in Southern Italy. While GPR did not yield reliable information about the position of historical walls made of the same calcarenite as the surrounding material, due to the weak EM contrast, it performed well in detecting and reconstructing the shape and size of a barrel-vault cavity, which was later confirmed by excavations. Negri and Leucci applied GPR methods combined with ERT to assess the possible presence of voids and cavities in the subsurface of the Temple of Apollo in Hierapolis, in the Lycus Valley, Western Turkey. 3-D GPR imaging allowed artifacts located beneath the Temple of Apollo to be detected, while 2-D ERT imaging made it possible to verify an active fault, as suggested by earlier geological, geomorphological, and palaeoseismic studies. With regard to the same archaeological site of Hierapolis, in Turkey, a similar integrated geophysical approach was adopted by Nuzzo et al. . Nevertheless, to the best of the authors' knowledge, the first example in the Italian literature of a multimethod geophysical approach applied to archaeological surveys dates back to 1999, when Sambuelli et al. carried out integrated geophysical inspections on a Roman archaeological site near the town of Biella, in Northern Italy. Orlando and Soldovieri BIB004 proposed two different processing methods for providing a reliable interpretation of poor-quality datasets. These methods were applied to the archaeological case of the relocation of a Stele in the Ethiopian archaeological park of Axum, and consisted of a classic processing scheme and a microwave tomographic approach. By comparing the two approaches, the authors were able to improve the quality of the information obtained from 100 and 200 MHz GPR antennas.
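As a concrete illustration of the CMP velocity analysis mentioned above, the following minimal sketch (not taken from the cited works; offsets, times, and noise level are illustrative assumptions) estimates the zero-offset time and the stacking velocity by a least-squares fit of the normal-moveout hyperbola t(x)^2 = t0^2 + x^2/v^2 to picked travel times.

```python
import numpy as np

# Synthetic CMP travel-time picks (illustrative): offsets in metres, two-way
# times in nanoseconds, for a reflector at t0 = 60 ns and v = 0.1 m/ns.
offsets_m = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0])
t_ns = np.sqrt(60.0**2 + (offsets_m / 0.1) ** 2) + np.random.normal(0.0, 0.3, offsets_m.size)

# t(x)^2 = t0^2 + x^2 / v^2 is linear in x^2, so fitting t^2 against x^2
# yields the zero-offset time and the stacking velocity directly.
slope, intercept = np.polyfit(offsets_m**2, t_ns**2, 1)
t0_est = np.sqrt(intercept)      # zero-offset two-way time (ns)
v_est = np.sqrt(1.0 / slope)     # stacking velocity (m/ns)

print(f"t0 ~ {t0_est:.1f} ns, stacking velocity ~ {v_est:.3f} m/ns")
print(f"approximate reflector depth ~ {v_est * t0_est / 2.0:.2f} m")
```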
Difficulties in GPR data interpretation can be overcome, on the other hand, by making use of EM numerical simulation, as proposed in BIB002 and BIB001 . In general, one of the main issues affecting archaeological GPR surveys is the need for high-resolution data. This can be a critical point when time and costs have to be optimized. Francese et al. BIB005 proposed a possible solution to this problem by using a multichannel GPR system mounted on a cart and equipped with 14 antennas with a central frequency of 400 MHz. Such a setup allowed the authors to survey, in half a day, a considerably wide area (around 75 000 m²) located in Northern Italy, in the archaeological site of "Le pozze," where the remains of a Roman village were known to be buried. The data were then processed, with particular regard to georeferencing by GPS. In the end, the boundaries of the buried structures were detected, allowing a comprehensive map of the whole archaeological site to be drawn.
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> F. Glaciology <s> A prototype of an ultra-high frequency radar system (2- 60Hz) has been developed at the RST laboratory within the framework of the HOPE project funded by the European Community and aimed at integrating three sensors (metaldetector, GPR and microwave radiometer) into a unique portable system for humanitarian demining. An advanced prototype of the GPR sensor assembled with the dual metal coil MD sensor has been recently tested at the outdoor facilities of the Joint Research Center in Ispra (Italy). The test field specifically prepared by JRC consists of a unique target scenario that is recreated under different type of soils and surface conditions. The target scenario includes different type of mines and false alarm targets like stone, wood, metallic or plastic objects. The dataset collected during this test are quite interesting for planning the future improvements of both the hardware and the software solutions. The data has been processed with a 3D imaging software specifically developed by the authors for the HOPE project. The preliminary results are encouraging for some scenarios whereas some others seem to be really demanding for the GPR sensor. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> F. Glaciology <s> Ground penetrating radar (GPR) is a nondestructive measurement technique, which uses electromagnetic waves to locate targets or interfaces buried within a visually opaque substance or Earth material. GPR is also termed ground probing, surface penetrating (SPR), or subsurface radar. A GPR transmits a regular sequence of low-power packets of electromagnetic energy into the material or ground, and receives and detects the weak reflected signal from the buried target. The buried target can be a conductor, a dielectric, or combinations of both. There are now a number of commercially available equipments, and the technique is gradually developing in scope and capability. GPR has also been used successfully to provide forensic information in the course of criminal investigations, detect buried mines, survey roads, detect utilities, measure geophysical strata, and in other applications. ::: ::: ::: Keywords: ::: ::: ground penetrating radar; ::: ground probing radar; ::: surface penetrating radar; ::: subsurface radar; ::: electromagnetic waves <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> F. Glaciology <s> Ground-penetrating radar (GPR) is a rapidly developing field that has seen tremendous progress over the past 15 years. The development of GPR spans aspects of geophysical science, technology, and a wide range of scientific and engineering applications. It is the breadth of applications that has made GPR such a valuable tool in the geophysical consulting and geotechnical engineering industries, has lead to its rapid development, and inspired new areas of research in academia. The topic of GPR has gone from not even being mentioned in geophysical texts ten years ago to being the focus of hundreds of research papers and special issues of journals dedicated to the topic. The explosion of primary literature devoted to GPR technology, theory and applications, has lead to a strong demand for an up-to-date synthesis and overview of this rapidly developing field. 
Because there are specifics in the utilization of GPR for different applications, a review of the current state of development of the applications along with the fundamental theory is required. This book will provide sufficient detail to allow both practitioners and newcomers to the area of GPR to use it as a handbook and primary research reference. *Review of GPR theory and applications by leaders in the field *Up-to-date information and references *Effective handbook and primary research reference for both experienced practitioners and newcomers <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> F. Glaciology <s> Abstract We evaluate the reliability of the joint use of Ground Penetrating Radar (GPR) and Time Domain Reflectometry (TDR) to map dry snow depth, layering, and density where the snowpack thickness is highly irregular and the use of classical survey methods (i.e., hand probes and snow sampling) is unsustainable. We choose a test site characterised by irregular ground morphology, slope, and intense wind action (about 3000 m a.s.l., Western Alps, northern Italy) in dry snow conditions and with a snow-depth ranging from 0.3 m to 3 m over a few tens of metres over the course of a season. The combined use of TDR and high-frequency GPR (at a nominal frequency of 900 MHz) allows for rapid high-resolution imaging of the snowpack. While the GPR data show the interface between the snowpack and the ground, the snow layering, and the presence of snow crusts, the TDR survey allows the local calibration of wave speed based on GPR measurements and the estimation of layer densities. From January to April, there was a slight increase in the average wave speed from 0.22 to 0.24 m/ns from the accumulation zone to the eroded zone. The values are consistent with density values in the range of 350–450 kg/m 3 , with peaks of 600 kg/m 3 , as gravimetrically measured from samples from snow pits at different times. The conversion of the electromagnetic wave speed into density agrees with the core samples, with an estimated uncertainty of about 10%. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> F. Glaciology <s> Abstract We propose a methodology to estimate the density of frozen media (snow, firn and ice) using common offset (CO) GPR data. The technique is based on reflection amplitude analysis to calculate the series of reflection coefficients used to estimate the dielectric permittivity of each layer. We determine the vertical density variations for all the GPR traces by applying an empirical equation. We are thus able to infer the nature of frozen materials, from fresh snow to firn and ice. The proposed technique is critically evaluated and validated on synthetic data and further tested on real data of the Glacier of Mt. Canin (South-Eastern Alps). Despite the simplifying hypotheses and the necessary approximations, the average values of density for different levels are calculated with acceptable accuracy. The resulting large-scale density data are fundamental to estimate the water equivalent (WE), which is an essential parameter to determine the actual water mass within a certain frozen volume. Moreover, this analysis can help to find and locate debris or moraines embedded within the ice bodies. <s> BIB005
From the literature, it is known that dry snow and ice are the geological media providing the best wave propagation performance for GPR pulses with frequencies above approximately 1 MHz. Indeed, these media show a very low attenuation rate (low conductivity) for such pulses and an absence of relaxation processes (i.e., a negligible imaginary part of the permittivity). Penetration depths nowadays reach the order of kilometers BIB003 . Besides, GPR is considered an effective tool for evaluating the glacier subsurface, since the mostly horizontal and continuous configuration of glacier layers provides reflection patterns that are easy to interpret BIB003 . With respect to the Italian territory, in 1993 almost 1400 glaciers were counted along the Alpine arc, for an overall area of about 608 km². Alpine glaciers are mostly classified as temperate glaciers, which implies a high seasonal variability of snowmelt runoff and, in turn, guarantees water provision in dry and warm seasons. Therefore, important research contributions in the field of glaciological applications of GPR can be found in the literature. A good level of knowledge about the physical properties of glaciers, such as depth, density, and structural configuration, has proved helpful not only for public safety (e.g., avalanche prediction, see Section II-G.2), but also for environmental applications (e.g., climate change monitoring), energy supply (e.g., hydropower production), and agricultural issues (availability of water sources for irrigation). As far as the density of the media is concerned, it is worth mentioning the study carried out by Godio , who used different GPR systems with antenna central frequencies spanning from 500 to 1500 MHz on three different Alpine sites. The author employed the collected data to test the main theoretical relationships between the dielectric properties and the density of dry snow. The results show a good predictive capability of GPR in mapping the vertical density profile of dry snow, whereas the author argues that further work has to be done to detect the microstructure of the snow. In addition, Previati et al. BIB004 faced the same issue with similar purposes using a different approach. Indeed, a combined use of GPR and TDR was tested in this case to evaluate some physical properties of the snowpack. At the survey site, namely "Cime Bianche," close to the Ventina glacier, Italy, a pulsed GPR system with a central frequency of 900 MHz was employed together with a TDR, which was helpful in calibrating the radar measurements. The results showed an accurate assessment of the snow depth, whereas statistical and geostatistical analyses demonstrated the need for high-density data collection, highlighting the low applicability of traditional methods. More recently, Forte et al. BIB005 have focused their efforts on a reflection amplitude analysis with the aim of recognizing the nature of the subsurface layers (snow, firn, or ice) with GPR. The proposed method was developed on the basis of synthetic data and then tested in the field, over the Glacier of Mt. Canin (South-Eastern Alps), by employing a GPR system equipped with a 250 MHz shielded antenna. The authors reliably assessed the dielectric permittivities of the layers, which were then related to their densities. The provisions of the Ottawa Treaty gave a concrete impulse to research and industrial activities aimed at developing more effective technologies for landmine detection BIB002 .
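To illustrate the kind of wave-speed-to-density conversion discussed above, here is a minimal sketch (our own, not the cited authors' code). It assumes one widely used empirical permittivity-density relation for dry snow, er = (1 + 0.845 rho)^2 with rho in g/cm^3; the specific calibration used in each of the cited studies may differ.

```python
# Minimal sketch: convert a GPR-derived wave speed in dry snow into density and
# snow water equivalent (SWE). The relation er = (1 + 0.845*rho)^2 (rho in
# g/cm^3) is one commonly used dry-snow formula; calibrations vary by site.

C = 0.2998  # speed of light in vacuum, m/ns

def density_from_velocity(v_m_per_ns: float) -> float:
    """Dry-snow density (g/cm^3) from the EM wave speed (m/ns)."""
    er = (C / v_m_per_ns) ** 2          # relative permittivity from v = c / sqrt(er)
    return (er ** 0.5 - 1.0) / 0.845    # invert er = (1 + 0.845*rho)^2

def swe_mm(two_way_time_ns: float, v_m_per_ns: float) -> float:
    """Snow water equivalent (mm w.e.) from two-way travel time and wave speed."""
    depth_m = v_m_per_ns * two_way_time_ns / 2.0
    rho = density_from_velocity(v_m_per_ns)      # g/cm^3 equals Mg/m^3
    return depth_m * rho * 1000.0                # 1 m of water equals 1000 mm

if __name__ == "__main__":
    v = 0.23  # m/ns, within the 0.22-0.24 m/ns range reported above
    print(f"density ~ {density_from_velocity(v):.2f} g/cm^3")
    print(f"SWE for a 20 ns two-way time ~ {swe_mm(20.0, v):.0f} mm w.e.")
```

With v = 0.23 m/ns this yields a density of roughly 0.36 g/cm^3, consistent with the 350-450 kg/m^3 range quoted above.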
In such a framework, GPR has been found to be an effective tool for reducing the false alarm ratio (FAR) affecting the most commonly employed devices, such as EM induction metal detectors (MDs). Therefore, GPR technology can accomplish the crucial task of classifying the detected targets by interpreting their EM response, rather than merely detecting them BIB003 . Italy plays an important role in such a scenario, also due to the presence of the Joint Research Centre (JRC) in Ispra, in the province of Varese, Northern Italy. Here, a test site for unexploded ordnance (UXO) detection was arranged for the validation of a handheld system developed in the context of the Handheld Operational Demining System (HOPE) project, promoted by the European Union in the late 1990s. The HOPE system was among the first to bring a multisensor approach to demining, involving EM and magnetic sensors BIB001 .
GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> G. Demining and Public <s> Surface-penetrating radar is a nondestructive testing technique which uses electromagnetic waves to investigate the composition of nonconducting materials either when searching for buried objects or when measuring their internal structure. A typical surface-penetrating radar transmits a short pulse of electromagnetic energy of 1 ns (10/sup -9/ s) time duration from a transmit antenna into the material. Energy reflected from discontinuities in impedance is received by means of a receive antenna and is then suitably processed and displayed by a radar receiver and display unit. If the transmit and receive antennas are moved at a constant velocity along a linear path, a cross-sectional image of the material can be generated. Alternatively, if the antennas are scanned in a regular grid pattern, a three-dimensional image of the target can be derived. This paper provides a review of the principles of the technique, discusses the technical requirements for the individual subsystems comprising a surface-penetrating radar and provides examples of typical applications for the method. Continued technical improvements in system performance enable clearer radar images of the internal structure of materials to be obtained, thus advancing the application of the technique. <s> BIB001 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> G. Demining and Public <s> This paper describes the data processing scheme for the Ground Penetrating Radar (GPR) array developed for the mullti-sensor mine detection system DEMAND. The GPR sensor make use of a densely sampled array able to supply data suitable for Three Dimensional (3D) focusing and for the evaluation of the full polarization matrix. A processing scheme based on a low threshold approach followed by feature extraction will be described. Such scheme uses 3D focalization, geometrical features and polarimetric ones to both maximize probability detection and reduce false alarm rate. The results will be illustrated using data from tests carried out in a realistic site in Sarajevo (Bosnia- Herzegovina) <s> BIB002 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> G. Demining and Public <s> A through-wall imaging problem for a 2-D scalar geometry is addressed. It is cast as an inverse scattering problem and tackled under the linear Born model by means of the truncated singular value decomposition inversion scheme. A multiarray-based inversion strategy is considered. In particular, first the data collected by each single array are processed to obtain different tomographic images of the same scene under test. Then, the different images are suitably combined to obtain the overall image. The inversion scheme is tested for the challenging case of objects located within a complex environment resembling a room in a building. <s> BIB003 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> G. Demining and Public <s> Among the technologies used to improve landmine detection, Ground Penetrating Radar (GPR) techniques are being developed and tested jointly by “Sapienza” and “Roma Tre” Universities. 
Using three-dimensional Finite Difference Time Domain (FDTD) simulations, the electromagnetic field scattered by five different buried objects has been calculated and the solutions have been compared to the measurements obtained by a GPR system on a (1.3×3.5×0.5) m3 sandbox, located in the Humanitarian Demining Laboratory at Cisterna di Latina, to assess the reliability of the simulations. A combination of pre-calculated FDTD solutions and GPR scans, may make the detection process more accurate. <s> BIB004 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> G. Demining and Public <s> Clearing large civilian areas from anti-personnel landmines and cluster munitions is a difficult problem. The FP7-funded research project ‘TIRAMISU: Toolbox Implementation for Removal of Anti-personnel Mines Submunitions and UXO’ aims to develop a global toolbox that will cover the main mine action activities, from the survey of large areas to the actual disposal of explosive hazards to mine risk education. For close-in detection a number of tools are being developed, including a new densely-sampled down-looking Ground Penetrating Radar array. It is a vehicle-based imaging array of air-launched antennas, endowed with realtime signal processing for the close-in detection (~0.4 m standoff) of landmines and UXOs buried within ~0.5 m deep soil layer. Automatic target detection capabilities and integration with a partner’s metal detector array onto a suitable autonomous vehicle will increase field data productivity and human safety. In particular, a novel antenna design has been studied to allow dense packing and stand-off operation while providing adequate penetration and resolution in almost all kind of terrains. Great effort is also being devoted to the development of effective signal processing algorithms suited for real-time implementation. This paper presents the general system architecture and the first experimental results from laboratory and in-house tests. <s> BIB005 </s> GPR Applications Across Engineering and Geosciences Disciplines in Italy: A Review <s> G. Demining and Public <s> The localization of people buried or trapped under snow or debris is an emerging field of application of ground penetrating radar (GPR). In the last years, technological solutions and processing approaches have been developed to improve detection accuracy, speed up localization, and reduce false alarms. As such, GPR can play an active role in cooperative approaches required to tackle such emergencies. In this work, we present and briefly analyze the evolution of research in this field of application of GPR technology. In doing so, we adopt a point of view that takes into account that avalanches and collapsed buildings are two scenarios that call for different GPR approaches, since the former can be tackled through image processing of radar data, while the latter rely on the detection of the Doppler frequency changes induced by physiological movements of survivors, such as breathing. <s> BIB006
On the basis of this contribution, several Italian works can be found in the literature concerning the development of multisensor systems for detecting and characterizing landmines in humanitarian activities. Alli et al. BIB002 reported on the data processing approach for the GPR array involved in an integrated system called DEMAND, which also includes an MD. The aim of the study was to reduce the FAR index of the MD device by integrating the GPR data. To this purpose, a vehicle-mounted, densely sampled ultra-wide-band (UWB) array was developed, which allowed 3-D imaging of the subsurface to be reconstructed and the full polarization matrix to be evaluated. The data processing scheme was worked out on the basis of field tests performed at a test site in Sarajevo, Bosnia-Herzegovina, on a selection of antipersonnel and antitank mines. The authors were able to provide a high level of characterization of the mine-like targets. As a result, the GPR application yielded a 30% reduction of the FAR with respect to the use of the MD alone. Balsi et al. BIB004 adopted an inverse approach, based on the results of FDTD simulations, to achieve a more accurate detection of UXOs and landmines. GPR tests were performed by using a bistatic ground-coupled system with a 1 GHz central frequency antenna on a 1.3 m × 3.5 m × 0.5 m sand-filled box with several mine-like objects buried beneath its surface. The collected data were then compared with those obtained by performing FDTD simulations through the gprMax software . The results highlighted good agreement between real and synthetic data, thereby proving the reliability of GPR in detecting buried metallic and nonmetallic targets. On the other hand, the authors noted the need for a multisensor analysis to achieve an effective method for classifying the detected objects. A more recent attempt to realize a hardware and software system capable of automatically detecting and recognizing unexploded ordnance was conducted by Nuzzo et al. BIB005 . The authors presented the system architecture and the first results of laboratory and on-site tests of a densely sampled GPR array, within the research project TIRAMISU. The antenna array has a multichannel configuration yielding a survey width of approximately 1.3 m in a single pass and a fast 3-D reconstruction of the surveyed volume. A real-time processing algorithm was developed and a number of tests were performed on canonical targets, such as metal pipes. At the current state of the study, encouraging results have been achieved for the first device prototype, although the integration of GPR and MD still needs to be improved. 2) Forensics and Public Safety: Police agencies and rescue operators frequently need to carry out surveys for locating bodies hidden or buried underneath surfaces in a quick and noninvasive manner. Common applications of GPR in this field are the location of graves, the recognition of human remains, and the detection of the marks of former excavations BIB001 . GPR can also be used to locate movements behind walls and to detect natural or artificial tunnels in the subsurface, for security and rescue purposes. It is also well known that the Italian territory is characterized by widespread mountainous areas whose hydrogeological history can be classified as particularly unstable. To give an idea of the dimension of this issue, it is worth mentioning that around 70% of Italian municipalities are affected by landslide activity.
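The cited demining studies relied on full 3-D FDTD modelling with gprMax; as a self-contained illustration of the principle behind such forward simulations, the following minimal 1-D FDTD sketch (our own, with purely illustrative geometry and material values) records the reflection of a GPR-like pulse from a buried high-permittivity layer.

```python
import numpy as np

# Minimal 1-D FDTD sketch of a GPR trace: a pulse radiated in air enters a sandy
# half-space and reflects off a thin, shallow high-permittivity layer.
# Illustrative only; the cited studies used full 3-D modelling (gprMax).

c0 = 3e8                       # speed of light (m/s)
dz = 0.005                     # cell size (m)
dt = dz / c0                   # "magic" 1-D time step (Courant number 1 in air)
nz, nt = 1200, 1600            # grid cells, time steps

eps_r = np.ones(nz)            # air everywhere ...
eps_r[400:] = 4.0              # ... dry sand below the surface (cell 400)
eps_r[500:520] = 9.0           # buried target layer, 0.5 m below the surface

ex = np.zeros(nz)              # electric field (normalized units)
hy = np.zeros(nz)              # magnetic field (normalized units)
src, rx = 350, 360             # source and receiver cells, just above the surface
trace = np.zeros(nt)           # recorded A-scan

for n in range(nt):
    hy[:-1] += ex[1:] - ex[:-1]                        # update H from curl of E
    ex_l, ex_r = ex[1], ex[-2]                         # stored for simple end conditions
    ex[1:-1] += (hy[1:-1] - hy[:-2]) / eps_r[1:-1]     # update E from curl of H
    arg = (n - 80) / 20.0
    ex[src] += -arg * np.exp(-arg**2)                  # zero-mean (differentiated Gaussian) source
    ex[0], ex[-1] = ex_l, ex_r                         # crude absorbing boundaries
    trace[n] = ex[rx]

late = np.abs(trace[400:])                             # window after the surface echo
print(f"target reflection: peak {late.max():.3e} at time step {400 + late.argmax()}")
```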
This fact becomes particularly relevant if we consider that, after the Second World War, the Italian territory underwent a wide urban and infrastructural expansion, even in unstable areas . From a public safety perspective, this implies two main issues. First, in case of landslides, there may be a need to detect people buried under debris very rapidly and with the highest possible accuracy. Second, since the Italian territory hosts several ski resorts among the most frequented in Europe, the risk of persons being buried by avalanches is extremely serious. This pushes the authorities to seek ever more rapid and effective technologies capable of locating bodies in time. In this framework, it is evident how GPR allows nonmetallic objects to be detected more effectively than other NDT methods, which are typically sensitive only to the magnetic field. A review of GPR applications to the detection of victims buried or trapped under snow or debris can be found in Crocco and Ferrara BIB006 . Another worthwhile Italian contribution in the forensic field is the multiarray tomographic approach proposed by Soldovieri et al. BIB003 for through-wall imaging (TWI) using GPR. TWI exploits EM waves at microwave frequencies to detect hidden bodies. Such an application can be useful in rescue as well as in law enforcement or antiterrorism operations. The authors presented an approach for a multiarray configuration consisting in the combination of the data collected by each single array. Both forward and inverse problems were solved on synthetic data representing known targets hidden within a room of fixed geometry. The approach showed good reliability in detecting and localizing hidden objects and their complex geometry.
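As a rough, entirely synthetic illustration of the Doppler/phase-based detection of breathing mentioned in the review by Crocco and Ferrara, the following sketch (all parameters are our own assumptions) recovers a respiration rate from the slow-time phase of a simulated radar return.

```python
import numpy as np

# Synthetic continuous-wave radar return from a chest wall oscillating by a few
# millimetres at about 0.3 Hz (breathing). All parameters are illustrative.
fc = 1.0e9                       # carrier frequency (Hz)
wavelength = 3e8 / fc
prf = 50.0                       # slow-time sampling rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / prf)

breath_hz, amp_m = 0.3, 0.004    # breathing rate and chest displacement
displacement = amp_m * np.sin(2 * np.pi * breath_hz * t)
phase = 4 * np.pi * displacement / wavelength            # two-way phase modulation
noise = 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
signal = np.exp(1j * phase) + noise

# Estimate the breathing rate from the spectrum of the unwrapped slow-time phase.
slow_phase = np.unwrap(np.angle(signal))
spectrum = np.abs(np.fft.rfft(slow_phase - slow_phase.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / prf)
band = (freqs > 0.1) & (freqs < 1.0)                     # plausible respiration band
print(f"estimated breathing rate ~ {freqs[band][spectrum[band].argmax()]:.2f} Hz")
```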
Selective survey on spaces of closed subgroups of topological groups <s> S(G) <s> Throughout this paper, G is assumed to be a compact Lie group which acts as a topological transformation group on a space M. The symbol G_p denotes the closed subgroup of G consisting of all elements of G which leave fixed the point p of M. It has been shown by Gleason that under certain conditions [1] there exists a local cross-section of the orbits at a point p. However a local cross-section does not always exist. In the case where G acts differentiably on a differentiable manifold, it is known that there exists at every point a somewhat more general object which might be called a slice [2, 5]. By using an invariant Riemannian metric a slice could be roughly described as a cell K orthogonal to G(p) at p, with G_p(K) = K, and dim K the complementary dimension of G(p). In case p is fixed under G, K is merely an invariant neighborhood of p, so that in this case the definition has little content. We shall now define a slice without differentiability and then go on to show that it exists in the topological case for finite-dimensional spaces which may or may not be manifolds. The proof makes use of a recent theorem of Michael [3]. DEFINITION. If a compact Lie group G acts on a space M containing a point p then a slice at p is a closed set K which satisfies the following conditions: (1) p ∈ K; (2) G_p(K) = K; (3) if y ∈ K, then G_y ⊂ G_p and G_p(y) = K ∩ G(y); (4) there is a compact cell R in G which is a local cross-section of the cosets of G_p in G at e and for which the map <s> BIB001 </s> Selective survey on spaces of closed subgroups of topological groups <s> S(G) <s> The Chabauty space of a topological group is the set of its closed subgroups, endowed with a natural topology. As soon as $n>2$, the Chabauty space of $R^n$ has a rather intricate topology and is not a manifold. By an investigation of its local structure, we fit it into a wider, but too wild, class of topological spaces (namely Goresky-MacPherson stratified spaces). Thanks to a localization theorem, this local study also leads to the main result of this article: the Chabauty space of $R^n$ is simply connected for all $n$. Last, we give an alternative proof of the Hubbard-Pourezza Theorem, which describes the Chabauty space of $R^2$. <s> BIB002
for compact G. The following two lemmas are the basic technical tools in this area. The continuity is easy, but to prove the openness we need Lemma 1.2. Let G be a compact group and X ∈ S(G). Then the following subsets form a base of neighbourhoods of X in S(G): where U is a neighbourhood of the identity of G, N is a closed normal subgroup such that G/N is a Lie group, and x_1, . . . , x_n are arbitrary elements of X, n ∈ N. In particular, if G is a compact Lie group then Lemma 1.2 states that there is a neighbourhood N of X such that each subgroup Y ∈ N is conjugate to some subgroup of X. The key role in the proof of Lemma 1.2 is played by the Montgomery-Yang theorem on tubes BIB001 , see also [11, Theorem 5.4 from Chapter 2]. We recall that the cellularity (or Souslin number) c(X) of a topological space X is the supremum of cardinalities of disjoint families of open subsets of X. A topological space X is called dyadic if X is a continuous image of some Cantor cube {0, 1}^κ. The weight w(X) of a topological space X is the minimal cardinality of an open base of X. Theorem 1.4 . For every compact group G, we have c( Theorem 1.5 . Let a group G be either profinite or compact and Abelian. If An Abelian group G is called Artinian if every decreasing chain of subgroups of G is finite; every such group is isomorphic to the direct sum ⊕_{p∈F} C_{p^∞} ⊕ K, where F is a finite set of primes and K is a finite subgroup. An Abelian group G is called minimax if G has a finitely generated subgroup N such that G/N is Artinian. Theorem 1.7 . For a compact Abelian group G, the space S(G) has an isolated point if and only if the dual group G^∧ is minimax. for LCA G. The space S(R) is homeomorphic to the segment [0, 1]. By , S(R^2) is homeomorphic to the sphere S^4. For n ≥ 3, S(R^n) is not a topological manifold and its structure is far from being understood, see BIB002 .
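For the reader's convenience, we recall the standard definition of the Chabauty topology referred to throughout (a well-known definition, not specific to the works surveyed here): for a locally compact group G it is generated by the following subbasis on S(G), and for compact G it coincides with the Vietoris topology used above.

```latex
% Subbasic open sets of the Chabauty topology on S(G), G locally compact:
\[
  \mathcal{O}_K = \{\, H \in S(G) : H \cap K = \emptyset \,\}, \qquad K \subseteq G \ \text{compact},
\]
\[
  \mathcal{O}'_U = \{\, H \in S(G) : H \cap U \neq \emptyset \,\}, \qquad U \subseteq G \ \text{open}.
\]
```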
Selective survey on spaces of closed subgroups of topological groups <s> Theorem 1.8 [15]. The space S(G) of a LCA-group G is connected if and only if G has a subgroup topologically isomorphic to R. <s> This paper contains several results about the Chabauty space of a general locally compact abelian group. Notably, we determine its topological dimension, we characterize when it is totally disconnected or connected; we characterize isolated points. <s> BIB001
If F is a non-solvable finite group then S(R × F) is not connected [BIB001, Proposition 8.6]. Theorem 1.9 BIB001 .
Selective survey on spaces of closed subgroups of topological groups <s> From Chabauty to local method. <s> Topological groups with various systems of closed invariant subgroups are studied, and theorems generalizing Mal'cev's local theorems for Kuros-Cernikov class to locally compact groups are proved. As an application, information is obtained on systems of closed invariant subgroups of a locally compact, locally prosolvable or locally pronilpotent group. Problem 3.37 in the Kourovka notebook on the Frattini subgroup of a locally compact, locally pronilpotent group is also solved. A new topologization of the set of closed subsets of a topological space is applied to prove the local theorems. Bibliography: 9 titles. <s> BIB001 </s> Selective survey on spaces of closed subgroups of topological groups <s> From Chabauty to local method. <s> This paper gives a negative solution to the problem of Milnor concerning the degrees of growth of groups. The construction also answers a question of Day concerning amenable groups. A number of other results are obtained on residually finite finitely generated infinite 2-groups. Bibliography: 51 titles. <s> BIB002 </s> Selective survey on spaces of closed subgroups of topological groups <s> From Chabauty to local method. <s> We investigate the isolated points in the space of finitely generated groups. We give a workable characterization of isolated groups and study their hereditary properties. Various examples of groups are shown to yield isolated groups. We also discuss a connection between isolated groups and solvability of the word problem. <s> BIB003 </s> Selective survey on spaces of closed subgroups of topological groups <s> From Chabauty to local method. <s> A group G is called hereditarily non-topologizable if, for every H⩽G, no quotient of H admits a non-discrete Hausdorff topology. We construct first examples of infinite hereditarily non-topologizable groups. This allows us to prove that c-compactness does not imply compactness for topological groups. We also answer several other open questions about c-compact groups asked by Dikranjan and Uspenskij. On the other hand, we suggest a method of constructing topologizable groups based on generic properties in the space of marked k-generated groups. As an application, we show that there exist non-discrete quasi-cyclic groups of finite exponent; this answers a question of Morris and Obraztsov. <s> BIB004
A topological group G is called topologically simple if each closed normal subgroup of G is either G or {e}. Every topologically simple LCA-group is discrete and either G = {e} or G is isomorphic to C_p. Following the algebraic tradition, we say that a group G is locally nilpotent (solvable) if every finitely generated subgroup is nilpotent (solvable). In [18, Problem 1.76], V. Platonov asked whether there exists a non-Abelian topologically simple locally compact locally nilpotent group. Now we sketch the negative answer to this question for locally solvable groups obtained in . Let G be a locally compact locally solvable group. We take g ∈ G \ {e}, choose a compact neighbourhood U of g and denote by F the family of all topologically finitely generated subgroups of G containing g. We may assume that G is not topologically finitely generated, so F is directed by the inclusion ⊂. For each F ∈ F, we choose A_F, B_F ∈ S(F) such that B_F ⊂ A_F, A_F and B_F are normal in F, A_F ∩ U ≠ ∅, B_F ∩ U = ∅ and A_F/B_F is Abelian. Since S(G) is compact, we can choose two subnets (A_α)_{α∈I}, (B_α)_{α∈I} of the nets (A_F)_{F∈F}, (B_F)_{F∈F} which converge to A, B ∈ S(G). Then A, B are normal in G and A/B is Abelian. Moreover, g ∉ B and A ∩ U ≠ ∅. If A ≠ G then A is a proper normal subgroup of G; otherwise G/B is Abelian. In BIB001 , the Chabauty topology was defined on some systems of closed subgroups of a locally compact group G. A system A of closed subgroups of G is called subnormal if • A contains {e} and G; • A is linearly ordered by the inclusion ⊂; • for any subset M of A, the closure of the union ⋃_{F∈M} F belongs to A and the intersection ⋂_{F∈M} F belongs to A; • whenever A and B comprise a jump in A (i.e., B ⊂ A and no members of A lie between B and A), B is a normal subgroup of A. If the subgroups A, B form a jump then A/B is called a factor of G. The system is called normal if each A ∈ A is normal in G. A group G is called an RN-group if G has a normal system with Abelian factors. Among the local theorems from BIB001 , one can find the following: if every topologically finitely generated subgroup of a locally compact group G is an RN-group then G is an RN-group. In particular, every locally compact locally solvable group is an RN-group. In 1941, see [21, pp. 78-83] , A.I. Mal'tsev obtained local theorems for discrete groups as applications of the following general local theorem: if every finitely generated subsystem of an algebraic system A satisfies some property P, which can be defined by some quasi universal second order formula, then A satisfies P. In , Mal'tsev's local theorem was generalized to topological algebraic systems. The role of the model-theoretical Compactness Theorem in Mal'tsev's arguments is played by certain convergences of closed subsets. A net (F_α)_{α∈I} of closed subsets of a topological space X S-converges to a closed subset F if • for every x ∈ F and every neighbourhood U of x, there exists β ∈ I such that F_α ∩ U ≠ ∅ for each α > β; • for every y ∈ X \ F, there exist a neighbourhood V of y and γ ∈ I such that F_α ∩ V = ∅ for each α > γ. Every net of closed subsets of an arbitrary (!) topological space has an S-convergent subnet. If X is a Hausdorff locally compact space then S-convergence coincides with convergence in the Chabauty topology. 1.7 Spaces of marked groups. Let F_k be the free group of rank k with free generators x_1, . . . , x_k, and let G_k denote the set of all normal subgroups of F_k.
In metric form, the Chabauty topology on G_k was introduced in BIB002 as a reply to Gromov's idea of topologizing certain sets of groups . Let G be a group generated by g_1, . . . , g_k. The correspondence x_i −→ g_i, i = 1, . . . , k, extends to a homomorphism f : F_k −→ G. Via the correspondence G −→ ker f, G_k is called the space of marked k-generated groups. A number of papers developing BIB002 aim to understand how large, in the topological sense, various well-known classes of finitely generated groups are, or how a given k-generated group is placed in G_k, see BIB003 . Among the applications of G_k, we mention the construction of topologizable Tarski Monsters in BIB004 .
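For orientation, the metric alluded to here is commonly given in the following form (conventions vary, e.g. base e instead of 2; this is the standard formulation rather than a quotation from BIB002):

```latex
% Distance between two distinct marked groups N_1, N_2 in G_k:
\[
  d(N_1, N_2) \;=\; 2^{-n}, \qquad
  n \;=\; \max\{\, m \ge 0 \;:\; N_1 \cap B_m = N_2 \cap B_m \,\},
\]
% where B_m denotes the ball of radius m in F_k with respect to the word
% length in the free generators x_1, ..., x_k, and d(N, N) = 0.
```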
Selective survey on spaces of closed subgroups of topological groups <s> Segment topologies. <s> Metric spaces Coarse spaces Growth and amenability Translation algebras Coarse algebraic topology Coarse negative curvature Limits of metric spaces Rigidity Asymptotic dimension Groupoids and coarse geometry Coarse embeddability Bibliography. <s> BIB001 </s> Selective survey on spaces of closed subgroups of topological groups <s> Segment topologies. <s> In this paper we define some ballean structure on the power set of a group and, in particular, we study the subballean with support the lattice of all its subgroups. If $G$ is a group, we denote by $L(G)$ the family of all subgroups of $G$. For two groups $G$ and $H$, we relate their algebraic structure via the ballean structure of $L(G)$ and $L(H)$. <s> BIB002
Let G be a topological group, let P_G be the family of all subsets of G, and let [G]^{<ω} be the family of all finite subsets of G. Each pair A, B of subsets of P_G closed under finite unions defines the segment topology on L(G) with a base consisting of the segments. These topologies are studied in the following three cases: Suppose that, for each H ∈ L(G), Σ(H) is some family of subsets of G such that the following conditions are satisfied: • if U, V ∈ Σ(H) then U ∩ V contains some W ∈ Σ(H); • for every U ∈ Σ(H), there exists V ∈ Σ(H) such that U ∈ Σ(K) for each K ∈ L(G), K ⊆ V; • ⋂_{U∈Σ(H)} U = H for each H ∈ L(G). Then the family {X ∈ L(G) : X ⊆ U}, U ∈ Σ(H), H ∈ L(G), is a base for the Σ-topology on L(G). Let τ denote the topology of G and let P_τ be the family of all subsets of τ. We assume that, for each H ∈ L(G), Θ(H) is some subset of P_τ such that the following conditions are satisfied: • for every α, β ∈ Θ(H), there is γ ∈ Θ(H) such that α < γ, β < γ (α < β means that, for every U ∈ α, there exists V ∈ β such that V ⊆ U); • for every α ∈ Θ(H), there exists β ∈ Θ(H) such that if K ∈ L(G) and K ∩ V ≠ ∅ for each V ∈ β, then α < γ for some γ ∈ Θ(K); • for each H ∈ L(G), each x ∈ H and every neighbourhood V of x, there exists α ∈ Θ(H) such that x ∈ U, U ⊆ V for some U ∈ α. Then the family {X ∈ L(G) : X ∩ U ≠ ∅ for each U ∈ α}, where α ∈ Θ(H), H ∈ L(G), is a base for the Θ-topology on L(G). The upper bound of the Σ- and Θ-topologies is called the (Σ, Θ)-topology. A net (H_α)_{α∈I} converges in the (Σ, Θ)-topology to H ∈ L(G) if and only if • for any U ∈ Σ(H), there exists β ∈ I such that H_α ⊆ U for each α > β; • for any α ∈ Θ(H), there exists γ ∈ I such that H_β ∩ V ≠ ∅ for every V ∈ α and each β > γ. In , one can find characterizations of G with compact and discrete L(G) in some concrete (Σ, Θ)-topologies. 3.6. Hyperballeans of groups. Let G be a discrete group. The family {Fg : g ∈ G, F ∈ [G]^{<ω}} is the family of balls in the finitary coarse structure on G. For coarse structures and balleans see BIB001 and [50] . The finitary coarse structure on G induces a coarse structure on L(G) in which the sets {X ∈ L(G) : X ⊆ FA, A ⊆ FX}, F ∈ [G]^{<ω}, form the family of balls centered at A ∈ L(G). The set L(G) endowed with this structure is called the hyperballean of G. Hyperballeans of groups, carefully studied in BIB002 , can be considered as asymptotic counterparts of the Bourbaki uniformities.
Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> A new catalyst component and its use with an organoaluminum compound, which component is a brown solid of high surface area and large pore volume comprising beta titanium trichloride and a small amount of an organic electron pair donor compound. This solid when used in conjunction with an organoaluminum compound to polymerize alpha-olefins produces product polymer at substantially increased rates and yields compared to present commercial, purple titanium trichloride while coproducing reduced amounts of low-molecular-weight and, particularly, amorphous polymer. Combinations of this new catalyst component and an organoaluminum compound can be further improved in their catalytic properties by addition of small amounts of modifiers, alone and in combination. Such combinations with or without modifiers show good sensitivity to hydrogen used as a molecular weight controlling agent. The combinations are useful for slurry, bulk and vapor phase polymerization of alpha-olefins such as propylene. <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> Introduction. The concept of isotopy, recently introduced(') by A. A. Albert in connection with the theory of linear non-associative algebras, appears to have its value in the theory of quasigroups. Conversely, the author has been able to use quasigroups(2) in the study of linear non-associative algebras. The present paper is primarily intended as an illustration of the usefulness of isotopy in quasigroup-theory and as groundwork for a later paper on algebras, but is bounded by neither of these aspects. The first two sections are devoted to the basic definitions of quasigroup and isotopy, along with some elementary remarks and two fundamental theorems due to Albert. Then there is initiated a study of special types of quasigroup, beginning with quasigroups with the inverse property (I. P. quasigroups). A system Q of elements a, b, * * * is called an I. P. quasigroup if it possesses a single-valued binary operation ab and there exist two one-to-one reversible mappings L and R of Q on itself such that <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> A waterproof pressure sensitive adhesive laminate is provided in which a flexible plastics backing sheet is coated with a bituminous adhesive composition containing a minor proportion of rubber or thermoplastic polymer. The backing sheet is reinforced with a mesh or a woven or non-woven fabric which is embedded in the sheet and provides substantial resistance to stretching. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> A compression device for exerting pressure on an arm, shoulder, and/or trunk of a patient in need thereof (for example, a patient with hyperalgia or recovering from surgery in which the lymphatic system is affected), including an arm compression hose, a shoulder part for exerting pressure on the shoulder and trunk area, and a band-shaped fastening means for positioning the shoulder part and exerting pressure on the shoulder part. The arm compression hose exerts a pressure that decreases from a maximum pressure at the wrist or hand to a minimum pressure near the shoulder end of the arm, where the minimum pressure is approximately 70% of the maximum pressure. 
One or more lining pockets can be constructed on the inner lining of the compression device, where each lining pocket can hold one or more compression pads to increase tissue pressure in one or more body areas in need thereof. The compression pads each can have a shape that approximately conforms to the shape of the body part to which it is applied. The shoulder part can also have a shape that approximately conforms to the contour of the shoulder/trunk area to which it is applied. In addition, compression pants can be prepared with lining pockets for receiving compression pads. In one embodiment, compression pants include one or more donut-shaped pads or equivalents thereof that are placed in one or more lining pockets, each of which surrounds one or more osteoma openings. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> for all a, x in 21. I t is clear that associative algebras are alternative. The most famous examples of alternative algebras which are not associative are the so-called Cayley-Dickson algebras of order 8 over $. Let S be an algebra of order 2 over % which is either a separable quadratic field over 5 or the direct sum 5 ©3There is one automorphism z—>z of S (over %) which is not the identity automorphism. The associative algebra O = 3~\~S with elements <s> BIB005 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> The Basic Framework. The Structure of Groups. Lie Groups. Representation of Groups--Principal Ideas. Representation of Groups--Developments. Group Theory in Quantum Mechanical Calculations. Crystallographic Space Groups. The Role of Lie Algebras. The Relationships Between Lie Groups and Lie Algebras Explored. The Three-Dimensional Rotation Groups. The Structure of Semi-Simple Lie Algebras. Representations of Semi-Simple Lie Algebras. Symmetry Schemes for the Elementary Particles. Appendices. References. Subject Index. <s> BIB006 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> This paper deals with the origins and early history of loop theory, summarizing the period from the 1920s through the 1960s. <s> BIB007 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> The aim of this paper is to offer an overview of the most important applications of Jordan structures inside mathematics and also to physics, up-dated references being included. For a more detailed treatment of this topic see - especially - the recent book Iordanescu [364w], where sugestions for further developments are given through many open problems, comments and remarks pointed out throughout the text. ::: Nowadays, mathematics becomes more and more nonassociative and my prediction is that in few years nonassociativity will govern mathematics and applied sciences. ::: Keywords: Jordan algebra, Jordan triple system, Jordan pair, JB-, JB*-, JBW-, JBW*-, JH*-algebra, Ricatti equation, Riemann space, symmetric space, R-space, octonion plane, projective plane, Barbilian space, Tzitzeica equation, quantum group, B\"acklund-Darboux transformation, Hopf algebra, Yang-Baxter equation, KP equation, Sato Grassmann manifold, genetic algebra, random quadratic form. <s> BIB008 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> In this paper, we study left ideals, left primary and weakly left primary ideals in LA-rings. Some characterizations of left primary and weakly left primary ideals are obtained. 
Moreover, we investigate relationships between left primary and weakly left primary ideals in LA-rings. Finally, we obtain necessary and sufficient conditions for a weakly left primary ideal to be a left primary ideal in LA-rings. <s> BIB009 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> The aim of this paper is to characterize left almost rings by congruences. We show that each homomorphism of left almost rings defines a congruence relation on left almost rings. We then discuss quotient left almost rings. At the end we prove analogues of the isomorphism theorem for left almost rings. <s> BIB010 </s> Literature Survey on Non-Associative Rings and Developments <s> Introduction <s> Introduction to Lie Algebras and Representation Theory. <s> BIB011
One of the endlessly alluring aspects of mathematics is that its thorniest paradoxes have a way of blooming into beautiful theories. Pure mathematics is, in its way, the poetry of logical ideas. Today mathematics, and especially pure mathematics, is not the same as it was a hundred years ago. Many revolutions have occurred, and it has taken new shapes in the due course of time. Until recently the theory of rings and algebras was regarded exclusively as the theory of associative rings and algebras. This was a result of the fact that the first rings encountered in the course of the development of mathematics were associative (and commutative) rings of numbers and rings of functions, and also associative rings of endomorphisms of abelian groups, in particular, rings of linear transformations of vector spaces. This is a survey of one part of the theory of rings: precisely, the theory of rings which, although non-associative, are more or less connected with the theory of associative rings. More precise connections will be mentioned during the discussion of particular classes of rings. A major change took place in the middle of the 19th century, when the concepts of non-associative rings and non-associative algebras were introduced. The theory of non-associative rings and algebras has evolved into an independent branch of algebra, exhibiting many points of contact with other fields of mathematics and also with physics, mechanics, biology, and other sciences. The central part of the theory is the theory of what are known as nearly-associative rings and algebras: Lie, alternative, Jordan, and loop rings and algebras, and some of their generalizations. We briefly describe the origins of the theory of non-associative rings. The oldest non-associative operation used by mankind was plain subtraction of natural numbers. The first ever example of a non-associative ring is the octonions, constructed by John T. Graves in 1843. On the other hand, the first example of an abstract non-associative system was the Cayley numbers, constructed by Arthur Cayley in 1845. Later they were generalized by Dickson to what we know as Cayley-Dickson algebras. Later, in 1870, a very important non-associative class, known as Lie theory, was introduced by the Norwegian mathematician Sophus Lie. He employed a novel approach, combining transformations that preserve a type of geometric structure (specifically, a contact structure) with group theory to arrive at a theory of continuous transformation groups . Since then, Lie theory has been found to have many applications in different areas of mathematics, including the study of special functions, differential and algebraic geometry, number theory, group and ring theory, and topology BIB011 . It has also become instrumental in parts of physics, for some Lie algebras arise naturally from symmetries in physical systems, and it is a powerful tool in such areas as quantum and classical mechanics, solid state physics, atomic spectroscopy, and elementary particle physics BIB006 . No doubt Lie theory is a fundamental part of mathematics. The areas it touches include classical, differential, and algebraic geometry, topology, ordinary and partial differential equations, complex analysis, and so on. It is also an essential chapter of contemporary mathematics. One development related to it is the Uniformization Theorem for Riemann surfaces; another is Einstein's special theory of relativity and the Lorentz transformations. The range of applications of Lie theory is astonishing.
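Since the octonions are the running example of non-associativity here, the following minimal numerical sketch (our own illustration, not drawn from the cited works) builds them by three Cayley-Dickson doublings of the reals, using one common doubling convention, and checks that the associator is generally nonzero while the left alternative law x(xy) = (xx)y holds.

```python
import random

# Octonions as nested pairs of floats: reals -> complexes -> quaternions -> octonions,
# via the Cayley-Dickson doubling (a,b)(c,d) = (ac - d b*, a* d + c b) (one common convention).

def conj(x):
    # Conjugation: identity on reals; (a, b)* = (a*, -b) on a doubled pair.
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def sub(x, y):
    return add(x, neg(y))

def mul(x, y):
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        return (sub(mul(a, c), mul(d, conj(b))),
                add(mul(conj(a), d), mul(c, b)))
    return x * y

def norm2(x):
    # Squared Euclidean norm of the 8 real coefficients.
    return norm2(x[0]) + norm2(x[1]) if isinstance(x, tuple) else x * x

def random_octonion():
    r = lambda: random.uniform(-1.0, 1.0)
    return (((r(), r()), (r(), r())), ((r(), r()), (r(), r())))

x, y, z = random_octonion(), random_octonion(), random_octonion()
associator = sub(mul(mul(x, y), z), mul(x, mul(y, z)))   # (xy)z - x(yz): generally nonzero
left_alt = sub(mul(x, mul(x, y)), mul(mul(x, x), y))     # x(xy) - (xx)y: ~0 up to roundoff
print("|(xy)z - x(yz)|^2 =", norm2(associator))
print("|x(xy) - (xx)y|^2 =", norm2(left_alt))
```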
Moreover, in the 1890s the concept of the hyperbolic quaternion was given by Alexander Macfarlane; it forms a non-associative ring and suggested a mathematical footing for the space-time theory that followed later. Furthermore, to the best of our knowledge, the first detailed discussion of alternative rings was started in 1930 by the German author Zorn BIB001 . For more on this non-associative structure we refer the reader to BIB004 BIB005 . Another important class of non-associative structures was introduced in 1932-1933 by the German physicist Pascual Jordan in his algebraic formulation of quantum mechanics. Jordan structures also appear in quantum group theory, and exceptional Jordan algebras play an important role in recent fundamental physical theories, namely, in the theory of super-strings BIB008 . The systematic study of general Jordan algebras was started by Albert in 1946 . In addition, the study of loops started in the 1920s, and they were formally introduced for the first time in the 1930s BIB007 . The theory of loops has its roots in geometry, algebra and combinatorics: in algebra these roots appear as non-associative products, in combinatorics they are presented by Latin squares of a particular form, and in geometry they are connected with the analysis of web structures . A detailed study of the theory of loops can be found in [3, 4] and in BIB002 . Historically, the concept of a non-associative loop ring was introduced in a paper by Bruck in 1944 BIB003 . Non-associative loop rings appear to have been little more than a curiosity until the 1980s, when the author found a class of non-associative Moufang loops whose loop rings satisfy the alternative laws. After the concept of loop rings (1944), a new class of non-associative rings was given by Yusuf in 2006 . Although the concept of the LA-ring was given in 2006, the systematic study and further developments were started in 2010 by Shah and Rehman in their paper . It is worth mentioning that this new class of non-associative rings, named left almost rings (LA-rings), was introduced after a huge gap of six decades since the introduction of loop rings. The left almost ring (LA-ring) is actually an offshoot of the LA-semigroup and the LA-group. It is a non-commutative and non-associative structure and, due to its peculiar characteristics, it has gradually been emerging as a useful non-associative class which should make a reasonable contribution to the development of non-associative ring theory. By an LA-ring we mean a non-empty set R with at least two elements such that (R, +) is an LA-group, (R, ·) is an LA-semigroup, and both the left and right distributive laws hold; a sketch of the defining identities is given below. In , the authors have discussed the LA-ring of finitely nonzero functions, which is in fact a generalization of a commutative semigroup ring. Along the way, the first ever definition of an LA-module over an LA-ring was given by Shah and Rehman in the same paper . Moreover, Shah and Rehman discussed some properties of LA-rings through their ideals, and intuitively ideal theory would be a gateway for investigating the application of fuzzy sets, intuitionistic fuzzy sets and soft sets in LA-rings. For example, Shah et al. have applied the concept of intuitionistic fuzzy sets and established some useful results. In , some computational work through Mace4 has been done and some interesting characteristics of LA-rings have been explored. Further, Shah et al. have promoted the concept of the LA-module and established some results on isomorphism theorems and the direct sum of LA-modules.
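For concreteness, the defining identities of an LA-ring can be written out as follows. This is only a sketch in the notation commonly used for LA-structures, where the key axiom is the left invertive law; individual authors may state the LA-group axioms slightly differently.

\[
(a + b) + c = (c + b) + a, \qquad (a \cdot b) \cdot c = (c \cdot b) \cdot a,
\]
\[
a \cdot (b + c) = a \cdot b + a \cdot c, \qquad (a + b) \cdot c = a \cdot c + b \cdot c, \qquad \text{for all } a, b, c \in R,
\]

together with the assumption that (R, +) has a left identity 0 and every element of R has an additive inverse. Neither operation is required to be commutative or associative.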
Recently, in 2014, Alghamdi and Sahraoui defined and constructed a tensor product of LA-modules, and they extended some simple results from the ordinary tensor product to the new setting. Also in 2014, Yiarayong BIB009 gave the new concepts of left primary and weakly left primary ideals in LA-rings, and some characterizations of left primary and weakly left primary ideals were obtained. Moreover, in 2015 Hussain and Khan BIB010 characterized LA-rings by congruence relations; they proved that each homomorphism of left almost rings defines a congruence relation on left almost rings. For some more study of LA-rings, we refer the reader to .
Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> We rephrase the classical theory of composition algebras over fields, particularly the Cayley-Dickson Doubling Process and Zorn's Vector Matrices, in the setting of locally ringed spaces. Fixing an arbitrary base field, we use these constructions to classify composition algebras over (complete smooth) curves of genus zero. Applications are given to composition algebras over function fields of genus zero and polynomial rings. <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> I. Underpinnings. II. Division Algebra Alone. III. Tensor Algebras. IV. Connecting to Physics. V. Spontaneous Symmetry Breaking. VI. 10 Dimensions. VII. Doorways. VIII. Corridors. Appendices. Bibliography. Index. <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> 1. Introduction 2. Non-associative algebras 3. Hurwitz theorems and octonions 4. Para-Hurwitz and pseudo-octonion algebras 5. Real division algebras and Clifford algebra 6. Clebsch-Gordon algebras 7. Algebra of physical observables 8. Triple products and ternary systems 9. Non-associative gauge theory 10. Concluding remarks. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> Oil from subsurface tar sand having an injection means in fluid communication with a production means is recovered by injecting a water-external micellar dispersion at a temperature above 100 DEG F., into the tar sands, displacing it toward the production means and recovering the oil through the production means. The micellar dispersion can be preceded by a slug of hot water which can optionally have a pH greater than about 7. Also, the micellar dispersion can have a pH of about 7-14 and preferably a temperature greater than about 150 DEG F. The micellar dispersion contains hydrocarbon, surfactant, aqueous medium, and optionally cosurfactant and/or electrolyte. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> This book investigates the geometry of quaternion and octonion algebras. Following a comprehensive historical introduction, the book illuminates the special properties of 3- and 4-dimensional Euclidean spaces using quaternions, leading to enumerations of the corresponding finite groups of symmetries. The second half of the book discusses the less familiar octonion algebra, concentrating on its remarkable "triality symmetry" after an appropriate study of Moufang loops. The authors also describe the arithmetics of the quaternions and octonions. The book concludes with a new theory of octonion factorization. Topics covered include the geometry of complex numbers, quaternions and 3-dimensional groups, quaternions and 4-dimensional groups, Hurwitz integral quaternions, composition algebras, Moufang loops, octonions and 8-dimensional geometry, integral octonions, and the octonion projective plane. <s> BIB005
To ground non-associative ring theory historically, its origin can be traced to the work of John T. Graves, who discovered the octonions in 1843; they are considered to be the first example of a non-associative ring. The octonions form an 8-dimensional algebra over R which is non-associative as well as non-commutative. They were rediscovered by Cayley in 1845 and are also sometimes known as the Cayley numbers. Each nonzero octonion still has an inverse, so that the octonions form a division ring, albeit a non-associative one. For a most comprehensive account of the octonions see [9] . The process of going from R to C, from C to H, and from H to O is in each case a kind of doubling process. At each stage something is lost: in passing from R to C the ordering of R is lost, from C to H commutativity is lost, and from H to O associativity is lost. This process has been generalized to algebras over fields, and indeed over rings; it is called Dickson doubling or Cayley-Dickson doubling, see BIB005 BIB001 . If we apply the Cayley-Dickson doubling process to the octonions we obtain a structure called the sedenions, which is a 16-dimensional non-associative algebra. In the physics community much work is currently focused on octonion models, see BIB002 BIB003 BIB004 . Historically speaking, the inventors or discoverers of the quaternions, octonions and related algebras (Hamilton, Cayley, Graves, Grassmann, Jordan, Clifford and others) were working from a physical point of view and wanted their abstractions to be helpful in solving natural problems .
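To make the doubling step described above concrete, the Cayley-Dickson construction forms a new algebra out of pairs of elements of a given algebra A equipped with a conjugation a ↦ ā. The following is only a sketch using one common sign convention; different authors use slightly different but equivalent conventions:

\[
(a, b)(c, d) = (ac - \bar{d}\,b,\; da + b\bar{c}), \qquad \overline{(a, b)} = (\bar{a}, -b).
\]

Starting from \(\mathbb{R}\) with the trivial conjugation and iterating this step produces \(\mathbb{C}\), \(\mathbb{H}\), \(\mathbb{O}\) and then the sedenions, losing in turn the ordering, commutativity and associativity, exactly as described above.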
Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> The object of this paper is to give a new proof of the theorem that every Lie algebra over a field K of characteristic zero, has a faithful representation. The first proof of this result, at least when K is algebraically closed, is due to Ado (1). Later Cartan (2) gave a simpler and entirely different proof for the case when K is the field of either real or complex numbers. Cartan's proof depends on the integration of the Maurer-Cartan equations and therefore is of a non-algebraic character.! The present proof is of course algebraic and seems to differ from the earlier ones in approaching the problem quite directly. Also the result established is slightly sharper than the usual one in so far as we assert the existence of a faithful representation in which every element of the maximal nilpotent ideal of the given Lie algebra is mapped on a nilpotent matrix. I am very much indebted to Professor C. Chevalley for his advice and help in improving the presentation of the proof. Also I should like to thank Dr. G. D. Mostow for many interesting and valuable discussions. All algebras (whether Lie algebras or associative algebras) and vector spaces appearing in this paper are to be understood over the basic field K. A linear Lie algebra 2 is a Lie algebra whose elements are endomorphisms of some given vector space, the bracket operation in 2 being defined by [X,Y] = XY YX. As far as possible we follow the notation and terminology of Chevalley's book (3) and his papers (4). In particular, if 2 is a Lie algebra and X e 2 we denote by ad X the derivation of 2 defined by (ad X)Y = [XY](Y ).2 The following notion of the semidirect sum of a Lie algebra and its algebra of derivations' is important for our purpose. DEFINITION. Let S be a Lie algebra and i) the algebra of its derivations. By the semidirect sum of V and Z is meant a Lie algebra 2 + ) defined as follows. Considered as a vector space 2 + Z is the direct sum of 2 and Z so that an element of 2 + i) is a pair (X, D) with X e and D e Z. The bracket operation in S + Z is defined by <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Let L be a Lie ring and denote the product of x and y in L by [ x , y ]. The ring L is said to satisfy the Engel condition (cf. (1)), if for every pair of elements x, y e L there is an integer k = k ( x , y )such that If k ( x , y ) can be taken equal to a fixed integer n for all x , y e L then L is said to satisfy the n-th Engel condition . <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Given any associative ring A we can form, using its operations and its elements, two new rings. These use the elements of A and the addition as defined in A, but new multiplications are introduced to render them rings, albeit not necessarily associative rings. The first of these, the Lie ring AL of A uses a multiplication defined by [a, b] =ab ba for any a, b e A where ab is the ordinary associative product of elements in A. The second of these, the Jordan ring of A, A', has its multiplication defined by ao bab + ba for any pair of elements a, b in A. Being defined in a manner so decidedly dependent on the associative product of A, it is natural to expect that an intimate relationship should exist between the structure of these two new rings and that of A. 
In this paper we study one phase of this relationship, namely the connection between the ideal structure of A as an associative ring with the ideal structure of AL and A' as Lie and Jordan rings respectively. To be more specific, we investigate how simplicity of A as an associative ring reflects into analogous properties of AL and A,. When we say that U is an ideal of Al, or, equivalently, when we say that U is a Jordan ideal of A, we mean that U is an additive subgroup of A and that for any xeU and any yeA, xoy=xy+yx is an element of U. We similarly define Lie ideals of A and ideals of AL. Although the main results of this paper deal with the case in which A is a simple ring, many of the other results do not require the assumption of simplicity in order to remain valid; so, unless otherwise stated, we make no assumption of simplicity for A. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Introduces the concepts and methods of the Lie theory in a form accesible to the non-specialist by keeping the mathematical prerequisites to a minimum. The book is directed towards the reader seeking a broad view of the subject rather than elaborate information about technical details. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> In this paper the Lie structure of prime rings of characteristic 2 is discussed. Results on Lie ideals are obtained. These results are then applied to the group of units of the ring, and also to Lie ideals of the symmetric elements when the ring has an involution. This work extends recent results of I. N. Herstein, C. Lanski and T. S. Erickson on prime rings whose characteristic is not 2, and results of S. Montgomery on simple rings of characteristi c 2. 1* Prime rings* We first extend the results of Herstein [5]. Unless otherwise specified, all rings will be associative. If R is a ring, R has a Lie structure given by the product [x, y] = xy — yx, for x,yeR. A Lie ideal of R is any additive subgroup U of R with [u, r]e U for all u e U and reR. By a commutative Lie ideal we mean a Lie ideal which generates a commutative subring of R. Denote the center of R by Z. We recall that if R is prime, then the nonzero elements of Z are not zero divisors in R. In this case, if Z Φ 0 and F is the quotient field of R, then R ®ZF is a prime ring, every element of which can be written in the form r (x) α"1 for ae Z, a Φ 0. Thus R®ZF is naturally isomorphic to RZ~~\ the localization of R at Z. We will consider R imbedded in RZ~ι in the usual way (see [2]). We begin with some easy lemmas. LEMMA 1. If R is semi prime and U is a Lie ideal of R with u2 = 0 for all ue U, then U = 0. <s> BIB005 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Let $R$ be an associative ring with centre $Z$. The aim of this paper is to study how the ideal structure of the Lie ring of derivations of $R$, denoted $D(R)$, is determined by the ideal structure of $R$. If $R$ is a simple (respectively semisimple) finite-dimensional $Z$-algebra and δ$(z)$ = 0 for all δ ∈ $D(R)$, then every derivation of $R$ is inner and $D(R)$ is known to be a simple (respectively semisimple) Lie algebra (see [7, 5]). Here we are interested in extending these results to the case where $R$ is a prime or semi-prime ring. 
<s> BIB006 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> In [2] we proved that ifG is a finite group containing an involution whose centralizer has order bounded by some numberm, thenG contains a nilpotent subgroup of class at most two and index bounded in terms ofm. One of the steps in the proof of that result was to show that ifG is soluble, then ¦G/F(G) ¦ is bounded by a function ofm, where F (G) is the Fitting subgroup ofG. We now show that, in this part of the argument, the involution can be replaced by an arbitrary element of prime order. <s> BIB007 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Abstract It is proved that if a locally nilpotent group G admits an almost regular automorphism of prime order p then G contains a nilpotent subgroup G 1 such that | G : G 1 |≤ƒ( p , m ) and the class of nilpotency of G 1 ƒ g ( p ), where ƒ is a function on p and the number of fixed elements m and g depends on p only. An analog is proved for Lie rings (not necessarily locally nilpotent). These give an affirmative answer to the questions raised by Khukhro. <s> BIB008 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> We consider locally nilpotent periodic groups admitting an almost regular automorphism of order 4. The following are results are proved: (1) If a locally nilpotent periodic group G admits an automorphism ϕ of order 4 having exactly m<∞ fixed points, then (a) the subgroup {ie176-1} contains a subgroup of m-bounded index in {ie176-2} which is nilpotent of m-bounded class, and (b) the group G contains a subgroup V of m-bounded index such that the subgroup {ie176-3} is nilpotent of m-bounded class (Theorem 1); (2) If a locally nilpotent periodic group G admits an automorphism ϕ of order 4 having exactly m<∞ fixed points, then it contains a subgroup V of m-bounded index such that, for some m-bounded number f(m), the subgroup {ie176-4}, generated by all f(m) th powers of elements in {ie176-5} is nilpotent of class ≤3 (Theorem 2). <s> BIB009 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Abstract In this paper we prove that there are functions f ( p , m , n ) and h ( m ) such that any finite p -group with an automorphism of order p n , whose centralizer has p m points, has a subgroup of derived length ⩽ h ( m ) and index ⩽ f ( p , m , n ). This result gives a positive answer to a problem raised by E. I. Khukhro (see also Problem 14.96 from the “Kourovka Notebook” (1999, E. I. Khukhro and V. D. Mazurov (Eds.), “The Kourovka Notebook: Unsolved Problems in Group Theory,” 14th ed., Novosibirsk)). <s> BIB010 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Lie algebra is an area of mathematics that is largely used by electrical engineer students, mainly at post-graduation level in the control area. The purpose of this paper is to illustrate the use of Lie algebra to control nonlinear systems, essentially in the framework of mobile robot control. The study of path following control of a mobile robot using an input-output feedback linearization controller is performed. The effectiveness of the nonlinear controller is illustrated with simulation examples. <s> BIB011 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Let L be a Lie ring or a Lie algebra of arbitrary, not necessarily finite, dimension. 
Let φ be an automorphism of L and let CL(φ) = {a ∈ L | φ(a) = a} denote the fixed-point subring. The automorphism φ is called regular if CL(φ)= 0, that is, φ has no non-trivial fixed points. By Kreknin’s theorem [20] if a Lie ring L admits a regular automorphism φ of finite order k, that is, such that φ = 1 and CL(φ)= 0, then L is soluble of derived length bounded by a function of k, actually, by 2 − 2. (Earlier Borel and Mostow [3] proved the solubility in the finite-dimensional case, without a bound for the derived length.) In the present paper we prove that if a Lie ring admits an automorphism of prime-power order that is “almost regular,” then L is “almost soluble.” <s> BIB012 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> The well-known theorem of Borel–Mostow–Kreknin on solubility of Lie algebras with regular automorphisms is generalized to the case of almost regular automorphisms. It is proved that if a Lie algebra L admits an automorphism ϕ of finite order n with finite-dimensional fixed-point subalgebra of dimension dimCL(ϕ)=m, then L has a soluble ideal of derived length bounded by a function of n whose codimension is bounded by a function of m and n (Theorem 1). A virtually equivalent formulation is in terms of a (Z/nZ)-graded Lie algebra L whose zero component L0 has finite dimension m. The functions of n and of m and n in Theorem 1 can be given explicit upper estimates. The proof is of combinatorial nature and uses the criterion for solubility of Lie rings with an automorphism obtained in [E.I. Khukhro, Siberian Math. J. 42 (2001) 996–1000]. The method of generalized, or graded, centralizers is developed, which was originally created in [E.I. Khukhro, Math. USSR Sbornik 71 (1992) 51–63] for almost regular automorphisms of prime order. As a corollary we prove a result analogous to Theorem 1 on locally nilpotent torsion-free groups admitting an automorphism of finite order with the fixed points subgroup of finite rank (Theorem 3). We also prove an analogous result for Lie rings with an automorphism of finite order having finitely many fixed points (Theorem 2). <s> BIB013 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Isomorphisms between finitary unitriangular groups and those of associated Lie rings are studied. In this paper we investigate exceptional cases. <s> BIB014 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> We improve the conclusion in Khukhro's theorem stating that a Lie ring (algebra) L admitting an automorphism of prime order p with finitely many m fixed points (with finite-dimensional fixed-point subalgebra of dimension m) has a subring (subalgebra) H of nilpotency class bounded by a function of p such that the index of the additive subgroup |L: H| (the codimension of H) is bounded by a function of m and p. We prove that there exists an ideal, rather than merely a subring (subalgebra), of nilpotency class bounded in terms of p and of index (codimension) bounded in terms of m and p. The proof is based on the method of generalized, or graded, centralizers which was originally suggested in [E. I. Khukhro, Math. USSR Sbornik 71 (1992) 51–63]. An important precursor is a joint theorem of the author and E. I. Khukhro on almost solubility of Lie rings (algebras) with almost regular automorphisms of finite order. 
<s> BIB015 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> A classical nilpotency result considers finite p-groups whose proper subgroups all have class bounded by a fixed number n. We consider the analogous property in nilpotent Lie algebras. In particular, we investigate whether this condition puts a bound on the class of the Lie algebra. Some p-group results and proofs carry over directly to the Lie algebra case, some carry over with modified proofs and some fail. For the final of these cases, a certain metabelian Lie algebra is constructed to show a case when the p-groups and Lie algebra cases differ. <s> BIB016 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> V.M. Kurochkin [1] has formulated the following theorem: Every Σ-operator Lie ring L has a faithful representation in an associative Σ-operator ring A, where Σ is an arbitrary domain of operators for the ring L. In a subsequent note [2], V.M. Kurochkin pointed out the insufficient rigor of the proof he proposed for this theorem. <s> BIB017 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> In this chapter we shall make a study of rings satisfying certain ascending chain conditions. In the non-commutative case-and this is really the only case with which we shall be concerned- the decisive and incisive results are three theorems due to Goldie. The main part of the chapter will be taken up with a presentation of these. <s> BIB018 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> In this paper, we study Lie and Jordan structures in simple Γ-rings of characteristic not equal to two. Some properties of these Γ-rings are developed. <s> BIB019 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> The object of this paper is to study Lie structure in simple gamma rings. We obtain some structural results of simple gamma rings with Lie ideals. <s> BIB020 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Suppose that a finite group $G$ admits a Frobenius group of automorphisms $FH$ with kernel $F$ and complement $H$ such that the fixed-point subgroup of $F$ is trivial: $C_G(F)=1$. In this situation various properties of $G$ are shown to be close to the corresponding properties of $C_G(H)$. By using Clifford's theorem it is proved that the order $|G|$ is bounded in terms of $|H|$ and $|C_G(H)|$, the rank of $G$ is bounded in terms of $|H|$ and the rank of $C_G(H)$, and that $G$ is nilpotent if $C_G(H)$ is nilpotent. Lie ring methods are used for bounding the exponent and the nilpotency class of $G$ in the case of metacyclic $FH$. The exponent of $G$ is bounded in terms of $|FH|$ and the exponent of $C_G(H)$ by using Lazard's Lie algebra associated with the Jennings--Zassenhaus filtration and its connection with powerful subgroups. The nilpotency class of $G$ is bounded in terms of $|H|$ and the nilpotency class of $C_G(H)$ by considering Lie rings with a finite cyclic grading satisfying a certain `selective nilpotency' condition. The latter technique also yields similar results bounding the nilpotency class of Lie rings and algebras with a metacyclic Frobenius group of automorphisms, with corollaries for connected Lie groups and torsion-free locally nilpotent groups with such groups of automorphisms. 
Examples show that such nilpotency results are no longer true for non-metacyclic Frobenius groups of automorphisms. <s> BIB021 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Suppose that a finite group $G$ admits a Frobenius group of automorphisms FH of coprime order with cyclic kernel F and complement H such that the fixed point subgroup $C_G(H)$ of the complement is nilpotent of class $c$. It is proved that $G$ has a nilpotent characteristic subgroup of index bounded in terms of $c$, $|C_G(F)|$, and $|FH|$ whose nilpotency class is bounded in terms of $c$ and $|H|$ only. This generalizes the previous theorem of the authors and P. Shumyatsky, where for the case of $C_G(F)=1$ the whole group was proved to be nilpotent of $(c,|H|)$-bounded class. Examples show that the condition of $F$ being cyclic is essential. B. Hartley's theorem based on the classification provides reduction to soluble groups. Then representation theory arguments are used to bound the index of the Fitting subgroup. Lie ring methods are used for nilpotent groups. A similar theorem on Lie rings with a metacyclic Frobenius group of automorphisms $FH$ is also proved. <s> BIB022 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> We exhibit an explicit construction for the second cohomology group $H^2(L, A)$ for a Lie ring $L$ and a trivial $L$-module $A$. We show how the elements of $H^2(L, A)$ correspond one-to-one to the equivalence classes of central extensions of $L$ by $A$, where $A$ now is considered as an abelian Lie ring. For a finite Lie ring $L$ we also show that $H^2(L, \mathbb{C}^*) \cong M(L)$, where $M(L)$ denotes the Schur multiplier of $L$. These results match precisely the analogue situation in group theory. <s> BIB023 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> We generalize the common notion of descending and ascending central series. The descending approach determines a naturally graded Lie ring and the ascending version determines a graded module for this ring. We also link derivations of these rings to the automorphisms of a group. This uncovers new structure in 4/5 of the approximately 11.8 million groups of size at most 1000 and beyond that point pertains to at least a positive logarithmic proportion of all finite groups. <s> BIB024 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Introduction to Lie Algebras and Representation Theory. <s> BIB025
In 1870 a very important non-associative class known as Lie theory was introduced by the Norwegian mathematician Sophus Lie. The theory of Lie algebras is an area of mathematics in which one can see a harmonious interplay between the methods of classical analysis and modern algebra. This theory, a direct outgrowth of a central problem in the calculus, has today become a synthesis of many separate disciplines, each of which has left its own mark. The importance of Lie algebras for applied mathematics and applied physics has also become increasingly evident in recent years. In applied mathematics, Lie theory remains a powerful tool for studying differential equations, special functions and perturbation theory. Lie theory finds applications not only in elementary particle physics and nuclear physics, but also in such diverse fields as continuum mechanics, solid-state physics, cosmology and control theory. Lie algebra is also used by electrical engineers, mainly in mobile robot control. For basic information on Lie algebras, the reader is referred to BIB004 BIB011 BIB025 . It is well known that a Lie algebra can be viewed as a Lie ring, so the theory of Lie rings can be used in the theory of Lie algebras. A Lie ring is defined as a non-associative ring whose multiplication is anti-commutative and satisfies the Jacobi identity, i.e. [a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0 for all a, b, c. Although Lie theory was introduced in 1870, the major developments were made in the 20th century, beginning with the paper of Hausdorff in 1906. In (1934-35), Ado proved that any finite-dimensional Lie algebra over the field of complex numbers can be represented in a finite-dimensional associative algebra. Moreover, in 1937, Birkhoff and Witt independently showed that every Lie algebra is isomorphic to a subalgebra of some algebra of the form A^(−), where A^(−) is the Lie ring obtained from an associative algebra A by defining x · y = xy − yx (a worked instance of this construction is sketched below). They also found a formula for computing the rank of the homogeneous modules in a free Lie algebra on a finite number of generators. Also in 1937, Magnus proved that the elements y_i = 1 + x_i of the ring H generate a free subgroup G of the multiplicative group of the ring H, and that every element of the subgroup G_n (the n-th commutator subgroup) has the form 1 + l_n + w, where l_n is some homogeneous Lie polynomial (with respect to the operations x · y and x + y) of degree n in the generators, and w is a formal power series in which all the terms have degree greater than n. In 1947, Dynkin gave a criterion for determining whether a given polynomial is a Lie polynomial. Later, Harish-Chandra BIB001 and Iwasawa proved that Ado's theorem holds for any finite-dimensional Lie algebra. Moreover, an important role in the theory of Lie rings is played by free Lie rings. In contrast to free alternative rings and free J-rings (free Jordan rings), free Lie rings have been thoroughly studied. In that context, in 1950, Hall pointed out a method for constructing a basis of a free Lie algebra. In addition, analogous theorems about the embedding of arbitrary algebras and of associative rings were proved respectively by Zhukov in 1950 and by Malcev in 1952. In (1953-54), Lazard and Witt [261] studied representations of Σ-operator Lie rings in Σ-operator associative rings. The existence of such a representation was proved by them in the case of Σ-principal ideal rings and, in particular, for Lie rings without operators.
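As a quick illustration of the A^(−) construction mentioned above, any associative ring A becomes a Lie ring under the commutator bracket. The following is a standard computational sketch of the two defining identities:

\[
[x, y] = xy - yx, \qquad [x, x] = xx - xx = 0,
\]
\[
[x,[y,z]] + [y,[z,x]] + [z,[x,y]]
= (xyz - xzy - yzx + zyx) + (yzx - yxz - zxy + xzy) + (zxy - zyx - xyz + yxz) = 0,
\]

where the expansion and cancellation use only the associativity of A, so the Jacobi identity holds automatically for the commutator bracket.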
Returning to representability, the example constructed by Shirshov in BIB017 shows that there exist non-representable Σ-operator Lie rings which have no elements of finite order in the additive group. Also in 1954, Higgins showed that solvable Lie rings satisfying the n-th Engel condition are nilpotent, and in continuation Lazard studied nilpotent groups using large parts of the apparatus of Lie ring theory. In 1955, Cohn BIB002 constructed an example of a solvable Lie ring, with additive p-group (in characteristic p) and satisfying the p-th Engel condition, which is not nilpotent; further attention was given to Lie rings with a finite number of generators and some restrictions on the additive group. Also in 1955, Malcev [175] considered the class of binary-Lie rings, which are related to Lie rings in a way analogous to the way alternative rings are related to associative rings. In (1955-56), Herstein BIB003 studied the rings A^(−) under different assumptions on the associative ring A. In 1956, Witt proved that any subalgebra of a free Lie algebra is again free; this theorem is analogous to the theorem of Kurosh for subalgebras of free algebras. In 1957 many authors worked on Lie algebras. For example, Higman proved the nilpotency of any Lie ring which has an automorphism of prime order without nonzero fixed points; this statement allowed him to prove the nilpotency of finite solvable groups which have an automorphism satisfying the analogous condition. Gainov showed that, in the case of a ring whose additive group has no elements of order two, certain identities are sufficient for the ring to be binary-Lie. In (1957-58), Kostrikin proved that the Engel condition implies nilpotency (the Engel condition is recalled below). This result is especially interesting because from it follows the positive solution of the group-theoretical restricted Burnside problem for groups of prime exponent . Herstein and Kleinfeld examined the statement that if a Lie ring L admits a regular automorphism φ of finite order k, that is, such that φ^k = 1 and C_L(φ) = 0, then L is soluble of derived length bounded by a function of k, actually by 2^k − 2. They also discussed the bounded solubility of a Lie ring with a fixed-point-free automorphism, although the existing Lie ring methods cannot be used for bounding the derived length in general. Moreover, Kreknin and Kostrikin in 1963 showed that a Lie ring with a fixed-point-free automorphism of prime order p is nilpotent of p-bounded class. In continuation, Kreknin and Kostrikin also established that a Lie ring (algebra) admitting a regular (i.e., without nontrivial fixed points) automorphism of prime order p is nilpotent of class bounded by a function h(p) depending only on p. Kreknin in 1967 proved that a Lie ring (algebra) admitting a regular automorphism of finite order n is soluble of derived length bounded by a function of n. In 1969, Herstein BIB018 focused his study on the structures of the Jordan and Lie rings of simple associative rings; in the latter case the approach is via the study of the structure of I(R), the Lie ring of inner derivations of R, or, equivalently, the Lie structure of R/Z. In 1970, Herstein studied the Lie structure of associative rings and proved some important results regarding the Lie structure of R/Z. In 1972, Lanski and Montgomery BIB005 studied the Lie structure of prime rings of characteristic 2. Results on Lie ideals were obtained; these results were then applied to the group of units of the ring, and also to Lie ideals of the symmetric elements when the ring has an involution. This work extends recent results of Herstein, Lanski and Erickson on prime rings whose characteristic is not 2, and results of S. Montgomery on simple rings of characteristic 2.
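For the reader's convenience, the Engel conditions referred to above can be stated explicitly. The following is a sketch in standard left-normed commutator notation, following the definition recalled in BIB002: a Lie ring L satisfies the Engel condition if for every pair x, y ∈ L there is an integer k = k(x, y) such that

\[
[\ldots[[y, x], x], \ldots, x] = 0 \qquad (x \text{ repeated } k \text{ times}),
\]

and L satisfies the n-th Engel condition if k(x, y) can be taken equal to a fixed integer n for all x, y ∈ L.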
In 1974, Kawamoto discussed prime and semiprime ideals of Lie rings and showed that in a Lie algebra satisfying the maximal condition for ideals, any semiprime ideal is an intersection of a finite number of prime ideals, and the unique maximal solvable ideal is equal to the intersection of all prime ideals. Jordan et al. BIB006 in 1978 studied how the ideal structure of the Lie ring of derivations of R is determined by the ideal structure of R; moreover, the authors were interested in extending these results to the case where R is a prime or semiprime ring. Hartley et al. BIB007 in 1981 and Khukhro in 1986 showed that the results on Lie rings with regular or almost regular automorphisms of prime order have consequences for nilpotent (or even finite, or residually locally nilpotent-by-finite, etc.) groups with such automorphisms. In 1992, Khukhro generalized the work of Kreknin and Kostrikin on regular automorphisms: (almost) regularity of an automorphism of prime order implies (almost) nilpotency of the Lie ring (algebra), with corresponding bounds for the nilpotency class and the index (codimension). He also showed that a Lie ring (algebra) L admitting an automorphism φ of prime order p with finite fixed-point subring of order m (with finite-dimensional fixed-point subalgebra of dimension m) has a nilpotent subring (subalgebra) K of class bounded by a function of p, with the index of the additive subgroup |L : K| (the codimension of K) bounded by a function of m and p. Moreover, Khukhro proved that if a periodic (locally) nilpotent group G admits an automorphism φ of prime order p with m = |C_G(φ)| fixed points, then G has a nilpotent subgroup of (m, p)-bounded index and of p-bounded class; this group result was likewise based on a similar theorem on Lie rings. The result given in was later extended by Medvedev BIB008 in 1994 to not necessarily periodic locally nilpotent groups. In 1996 and in 1998, the authors BIB009 developed the method of graded centralizers given in to study almost fixed-point-free automorphisms of Lie rings and nilpotent groups. Medvedev in 1999, Zapirain BIB010 in 2000 and Makarenko in 2001 settled the most successful case, concerning nilpotent (or finite) p-groups with an automorphism of order p^n, where theorems on regular automorphisms of Lie rings were used. Great progress has been made to date on Lie rings (algebras) with almost regular automorphisms; the history of this area of research started with the classical theorem of Kreknin. In 2003, Khukhro and Makarenko BIB012 proved that if a Lie ring L admits an automorphism of prime-power order that is almost regular, then L is almost soluble. Moreover, in 2003 and 2004, Makarenko and Khukhro BIB013 succeeded in investigating the most general case of a Lie ring (algebra) with an almost regular automorphism of arbitrary finite order. In 2004, Makarenko and Khukhro BIB013 established the almost solubility of Lie rings and algebras admitting an almost regular automorphism of finite order, with bounds for the derived length and the codimension of a soluble subalgebra; for groups, however, even the fixed-point-free case remains open.
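To fix the notation used in the results just described, following BIB012 the fixed-point subring of an automorphism φ of a Lie ring L and the regularity condition can be written as

\[
C_L(\varphi) = \{\, a \in L \mid \varphi(a) = a \,\}, \qquad \varphi^n = 1, \quad C_L(\varphi) = 0 \ \text{(regular)},
\]

so that Kreknin's theorem gives solubility of L with derived length at most 2^n − 2, while the "almost regular" results above bound the index (codimension) of a nilpotent or soluble subring (subalgebra) in terms of |C_L(φ)| (respectively its dimension) and the order of φ.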
In 2005, Kuzucuoglu BIB014 studied isomorphisms between finitary unitriangular groups and those of the associated Lie rings, and also investigated the exceptional cases. Makarenko BIB015 in 2005 improved the conclusion in Khukhro's theorem stating that a Lie ring (algebra) L admitting an automorphism of prime order p with finitely many (m) fixed points (with finite-dimensional fixed-point subalgebra of dimension m) has a subring (subalgebra) H of nilpotency class bounded by a function of p such that the index of the additive subgroup |L : H| (the codimension of H) is bounded by a function of m and p. It was proved that there exists an ideal, rather than merely a subring (subalgebra), of nilpotency class bounded in terms of p and of index (codimension) bounded in terms of m and p. In 2008, Suanmali BIB016 used an analogous idea from the theory of group varieties to investigate varieties of Lie algebras. She considered the exponent bound problem for some varieties of nilpotent Lie algebras and extended Macdonald's results to finite-dimensional Lie algebras over a field of characteristic not 2 and 3. Paul and Sabur Uddin BIB019 in 2010 worked on the Lie and Jordan structure of simple gamma rings and obtained some remarkable results concerning Lie and Jordan structure. In 2010, Paul and Sabur Uddin BIB020 also focused on the study of Lie structure in simple gamma rings, giving some structural results on simple gamma rings with Lie ideals. In 2011, Khukhro, Makarenko and Shumyatsky BIB021 developed a Lie ring theory which is used for studying groups G and Lie rings L with a metacyclic Frobenius group of automorphisms FH. Wilson in 2013 introduced three families of characteristic subgroups that refine the traditional verbal subgroup filters, such as the lower central series, to an arbitrary length. It was proved that a positive logarithmic proportion of finite p-groups admit at least five such proper nontrivial characteristic subgroups, whereas verbal and marginal methods explain only one. The placement of these subgroups in the lattice of subgroups is naturally recorded by a filter over an arbitrary commutative monoid M and induces an M-graded Lie ring. These Lie rings permit an efficient specialization of the nilpotent quotient algorithm to construct automorphisms and decide isomorphism of finite p-groups. In 2013, Khukhro and Makarenko BIB022 showed how representation theory arguments are used to bound the index of the Fitting subgroup, while Lie ring methods are used for nilpotent groups; a similar theorem on Lie rings with a metacyclic Frobenius group FH of automorphisms was also proved. In 2014, the aim of Horn and Zandi BIB023 was to give an explicit description of the cohomology group H^2(L, A) and to show how its elements correspond one-to-one to the equivalence classes of central extensions of the Lie ring L by the module A, where A is regarded as an abelian Lie ring. More recently, in 2015, Wilson BIB024 generalized the common notion of descending and ascending central series: the descending approach determines a naturally graded Lie ring and the ascending version determines a graded module for this ring, and he linked derivations of these rings to the automorphisms of a group.
Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> A new catalyst component and its use with an organoaluminum compound, which component is a brown solid of high surface area and large pore volume comprising beta titanium trichloride and a small amount of an organic electron pair donor compound. This solid when used in conjunction with an organoaluminum compound to polymerize alpha-olefins produces product polymer at substantially increased rates and yields compared to present commercial, purple titanium trichloride while coproducing reduced amounts of low-molecular-weight and, particularly, amorphous polymer. Combinations of this new catalyst component and an organoaluminum compound can be further improved in their catalytic properties by addition of small amounts of modifiers, alone and in combination. Such combinations with or without modifiers show good sensitivity to hydrogen used as a molecular weight controlling agent. The combinations are useful for slurry, bulk and vapor phase polymerization of alpha-olefins such as propylene. <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> One finds in the literature a thorough-going discussion of rings without radical and with minimal condition for left ideals (semi-simple rings). For the structure of a ring whose quotient-ring with respect to the radical is semi-simple one can refer to the investigations of K6the (see K below). In this paper we shall examine the structure of a ring A with radical R D 0 and with minimal condition for left ideals ("general" MLI ring). The key-stone of our investigations is the fact that the radical of A is nilpotent, and this result we shall establish in ?1. In ?2 we shall prove that the sum of all minimal non-zero left ideals is a completely-reducible left ideal 9A, and in ?3 we shall examine the distribution of idempotent and nilpotent left ideals in 9)1. In ??4-6 we shall discuss the two "extreme cases": (1) when A is nilpotent, and (2) when A is idempotent. For a non-nilpotent A we shall prove that the existence of either a right-hand or a left-hand identity is sufficient for the existence of a composition series of left ideals of A. If A is any MLI ring, one can find a smallest exponent k such that Ak = Ak+l = ... . In ?7 we show that A is the sum of Ak (which is idempotent) and a nilpotent MLI ring. We wish to emphasize the fact that A is to be regarded throughout as a ring without operators. In ?8, however, we shall see that some of our most interesting results are valid for operator domains of a certain type. We conclude the Introduction with an explanation of our notation and terminology. Rings and subrings will usually be denoted by roman capitals; we shall use gothic letters when it is desirable to emphasize the fact that a subring is an ideal. By the statement "a is a left (right) ideal of A" we shall mean that a is an additive abelian group which admits the elements of A as left-hand (right-hand) operators. Observe that our definition does not' imply that a is a subring of A. The term "left ideal," with no qualifying phrase, will always mean "left ideal of the basic ring." A ring with minimal condition for left (right) ideals which are contained in itself will be called an MLI (MRI) ring. Finally we point out that if a and b are subrings of A, then [a, b] denotes the cross-cut of a and b, while (a, b) represents the compound (join) of a and b-i.e. 
<s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> An ink jet printer includes apparatus for preventing ink clogs from interfering with the flow of ink from a printing nozzle during a printing operation. The ink-declogging apparatus includes a vacuum pump having a vacuum chamber and a member movable with respect to such chamber to adjust the pressure therein. A printing nozzle is operably coupled to the chamber when the printing nozzle is not being used in a print operation. A stepper motor controls the position of the movable member relative to the chamber to selectively provide at least two different preset levels of vacuum (suction) to the printing nozzle. Having the capability of controlling the level of vacuum applied to the printing nozzle, a high vacuum need only be applied in situations warranting its use (e.g. to remove ink clogs), and the waste of ink (by an unnecessarily high vacuum) can be avoided. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> An energy efficient general purpose injection molding employs a hydraulic drive for injection and clamp machine functions and an electric drive for screw recovery. Both drives use AC "squirrel cage" induction motors under vector control. Speed command signals for the vector controls are generated by the machine's controller utilizing state transition and predictive signal techniques to account for motor and motor/pump response latencies. Hydraulic drive efficiency is improved by varying motor speed/pump output to match cycle requirements to retain hydraulic drive advantages for mold clamp and injection functions while improving the electric drive performance for screw recovery. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> In this paper we shall show that N. Jacobson's definition of the radical of an (associative) ring [J1, B] applies to alternative rings [Z1, M], and we shall develop some of the elementary properties of this radical. The radical of an alternative ring was first discussed by M. Zorn [Z2] under certain finiteness assumptions. Dubisch and Perlis [D P] have more recently studied the radical of an alternative algebra (of finite order). We make no finiteness assumptions. We do, however, prove that the chain conditions employed by Zorn [Z2, (4.2.1)(4.2.3)] ensure that the radical defined by him coincides with that defined in this paper. Our discussion applies equally well to algebras of possibly infinite order [JI, ?6] so that our results essentially contain some of the results of Dubisch and Perlis. We have been unable to discover the relation between the radical and maximal right ideals, nor have we developed any parallel for Jacobson's structure theory of associative rings. The enlarged radical of Brown and McCoy [B-McC] will yield a type of structure theory for arbitrary non-associative rings, but because of the generality involved we prefer to leave a discussion of this interesting fact to a subsequent publication. 
<s> BIB005 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> A compression device for exerting pressure on an arm, shoulder, and/or trunk of a patient in need thereof (for example, a patient with hyperalgia or recovering from surgery in which the lymphatic system is affected), including an arm compression hose, a shoulder part for exerting pressure on the shoulder and trunk area, and a band-shaped fastening means for positioning the shoulder part and exerting pressure on the shoulder part. The arm compression hose exerts a pressure that decreases from a maximum pressure at the wrist or hand to a minimum pressure near the shoulder end of the arm, where the minimum pressure is approximately 70% of the maximum pressure. One or more lining pockets can be constructed on the inner lining of the compression device, where each lining pocket can hold one or more compression pads to increase tissue pressure in one or more body areas in need thereof. The compression pads each can have a shape that approximately conforms to the shape of the body part to which it is applied. The shoulder part can also have a shape that approximately conforms to the contour of the shoulder/trunk area to which it is applied. In addition, compression pants can be prepared with lining pockets for receiving compression pads. In one embodiment, compression pants include one or more donut-shaped pads or equivalents thereof that are placed in one or more lining pockets, each of which surrounds one or more osteoma openings. <s> BIB006 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> for all a, x in 21. I t is clear that associative algebras are alternative. The most famous examples of alternative algebras which are not associative are the so-called Cayley-Dickson algebras of order 8 over $. Let S be an algebra of order 2 over % which is either a separable quadratic field over 5 or the direct sum 5 ©3There is one automorphism z—>z of S (over %) which is not the identity automorphism. The associative algebra O = 3~\~S with elements <s> BIB007 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> A small reed switch having a glass tubularly shaped envelope containing a pair of reed contacts is modified to ensure its being biased to either an open or a closed position. A donut-shaped piece of a rubber-bonded, barium-ferrite, magnetic material is circumferentially mounted on the reed switch at a position where its magnetic field influences the contacts to an open position or a closed position. Thusly arranged, an actuating magnet, having a sufficient field at a single pre-established distance from the contacts, actuates the switch from all radial directions from the switch. Potting the modified switch in an epoxy resin further ensures a greater reliability and makes it ideal for implantations in laboratory animals. <s> BIB008 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> One of the ways of gaining an insight into the nature of a class of rings is to determine all the simple ones. In the case of associative rings some restriction, such as the existence of maximal or minimal right ideals is usually made in order to characterize the simple ones, for otherwise one encounters seemingly pathological examples. The theory of simple alternative rings is therefore incongruous in that it presents no additional difficulties. 
More precisely, if one defines a ring to be simple provided it has no proper two-sided ideals and is not a nil ring then the main result of this paper may be stated as follows: A simple alternative ring is either a Cayley-Dickson algebra or associative. Thus it would seem that the distinction between alternative and associative rings is really insignificant. Besides the machinery developed in [3], a new identity plays a vital part in the argument. This identity asserts that in any alternative ring fourth powers of commutators associate with any pair of elements of the ring. In the original version of the author's argument this identity was proved with the additional assumption of simplicity. Thanks to R. H. Bruck, who modified that argument, the hypothesis of simplicity is now superfluous. Such an identity is likely to be a useful device in the study of general alternative rings. The next stepping stone is an adaptation of A. A. Albert's result [21, in the form of Theorem 2.8 of this paper. It is used to reduce the proof of the main theorem to a consideration of simple alternative rings in which the fourth power of every commutator is zero. Such rings have no nilpotent elements, from which one infers that they are fields. <s> BIB009 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> for all w, x, y and showed by example that (1.1) can fail to hold. Prior to this, Kleinfeld [1 ] generalized the Skornyakov theorem in another direction by assuming only the absence of one sort of nilpotent element. We now specify Kleinfeld's result in detail. Let F be the free nonassociative ring generated by xi and x2 and suppose that R is any right alternative ring. Kleinfeld calls t, u, v in R an alternative triple if (i) there exist elements a [xl, X2], 3 [Xl, x2], 7Y [xl, X2] in F and elements ri, r2 in R such that t =a [ri, r2 ], u =-1 [ri, r2], v = y [ri, r2] and (ii) if si and S2 are elements from an arbitrary alternative ring, and if t'=a [sl, S2], u'=3[sl, S2], v'-=Y[sl, S2], then (t', u', v') =0. The ring R is said to have property (P) if t, u, v an alternative triple in R and (t, u, V)2=0 imply (t, u, v) =0. By the definition of an alternative triple, an alternative ring has property (P). Kleinfeld's result is the converse, assuming characteristic not two; that is, a right alternative ring of characteristic not two is alternative if (and only if) it has property (P). We herein extend this line of investigation by proving that a right alternative ring of characteristic two, satisfying (1.1), is alternative if (and only if) it has property (P). The methods are mainly those used in [2], coupled with two essential lemmas (numbered 4 and 5 in our paper) due to Kleinfeld. Following [2], we say that R is strongly right alternative if R is a right alternative ring satisfying (1.1) . Throughout the paper, R will always denote such a ring, with the additional hypothesis that R have characteristic two. <s> BIB010 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> for every a, bER. R. L. San Soucie [4] calls a nonassociative ring R strongly right alternative in case its right multiplications a': x-*xa satisfy (I) and (II). (Every right alternative ring in which 2a=0 implies a=0 is strongly right alternative just as (I) implies (II) under the same assumption in the associative case.) 
In a recent paper [5] we developed some identities for Jordan homomorphisms in the associative case which were extensions of those previously given. (See, especially, I. N. Herstein [1].) It turns out that the nonassociative analogues of these identities are useful in reducing strongly right alternative rings to alternative rings. To do this one invokes the property (P0) If x, y, zER and (x, y, Z)2=0, then (x, y, z) =0 provided x has one of the forms y, [y, z], [y, z]y, (y, y, z) or provided z=wy and x= (y, y, w). (Property (P0) is the operational form of E. Kleinfeld's Property (P) [3].) We give in this note a proof that (I), (II) and (P0) imply that R is an alternative ring. This result subsumes those of Kleinfeld [3] and of San Soucie [4] in this connection. Our main interest is not in the slightly greater generality of this result but rather in the method of proof. The reader will find, we hope, that our proof is straightforward and relatively brief. We have made our presentation self-contained but we have relegated some simple computations, most of which are strictly analogous to those given by us in [5], to an Appendix. <s> BIB011 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> THEOREM. A simple right alternative ring R which is not associative is alternative if and only if R has an idempotent e sub that there are no nilpotent elements in R,(e) and R,(e). Because simple alternative rings that are not associative are known to be Cayley-Dickson algebras [5] one easily sees that the condition is necessary. Most of the paper is taken up with establishing that it is also sufficient. In the process we prove the following more general result. <s> BIB012 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> An example is presented that ansevers in the negative the question of whether the square of every commutator need always lie in the nucleus. Also, we show the existence of specific nilpotent elements in the free alternative ring on four or more generators, and prove abstractly the existence of an ideal I≠0, and I2=0. <s> BIB013 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> for all elements x and a. Throughout this paper, all rings will be assumed to have characteristic prime to 2 and 3. If a right alternative ring A has an idempotent, it has an Albert [l] decomposition A = A,(e) + A,,,(e) + Ao(e), where A,(e) and A,(e) are closed under Jordan product. We show that if A has a decomposition where A,(e)+ and A,(e)+ are simple Jordan algebras and A satisfies a few other conditions, then A is alternative or A = A,(e) @ A,,(e) (direct sum). The Albert decomposition with respect to an idempotent e is useful in right alternative rings provided it can be stretched to a Peirce decomposition. This means it is necessary for (e, e, A) = 0. We simply assume our idempotent has this property. Our theorem is: <s> BIB014 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> A number of papers have appeared with the purpose of generalizing Albert’s long-standing result [l, 21 that a simple right alternative algebra of finite dimension over a field of characteristic f2 that has a unit element e and an idempotent c # e is necessarily alternative. 
Until now results depended on rather strong additional assumptions such as other identities [S, 14, 17, 36, 371 or internal conditions on the algebra [9, 10, 12, 15, 27, 28, 301. Essential progress has been achieved by Micheev who showed that the identity (x, x, Y)~ = 0 holds in 2-torsion free right alternative algebras [27]. This paper starts collecting information on two natural concepts in a right alternative algebra R, the submodule M generated by all alternators (x, x, y), and a new nucleus N, . The later sections deal mainly with results on simple right alternative algebras. A simple 2-torsion free right alternative algebra is either alternative, hence associative or a Cayley algebra over its center, or the following statements hold: <s> BIB015 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> The characterization by J. Levitzki of the prime radical of an associative ring R as the set of strongly nilpotent elements of R is adapted here to apply to a wide class of nonassociative rings. As a consequence it is shown that the prime radical is a hereditary radical for the class of alternative rings and that the prime radical of an alternative ring coincides with the prime radical of its attached Jordan ring. In 1951 J. Levitzki characterized the prime radical of an associative ring R as the set of all elements r E R such that every m-sequence beginning with r ends in zero [3]. An m-sequence was defined to be a sequence {ao, a1,.... an ,... } such that ai C aij1 Rai_1 for i = 1, 2 .... Recently, C. Tsai [9] has given a similar characterization for the prime radical of a Jordan ring. Here we extend this characterization to the class of all s-rings and, as a consequence, are able to show that the prime radical is a hereditary radical on the class of alternative rings (i.e., if A is an ideal of an alternative ring R, then P(A) = A n P(R)) and that P(R) = P(R+) for all 2 and 3-torsion free alternative rings R. Although it is not known whether the prime radical is hereditary on the class of all s-rings, a partial result in this direction is obtained. Recall that a not necessarily associative ring R is called an s-ring for a positive integer s if As is an ideal of R whenever A is an ideal of R (As denotes the set of all sums of products a, a2 ... aS for ai E A under all possible associations). An ideal P of R is called a prime ideal if whenever A1 A2 ... As 5 P then Ai 5 P for some i. Here A1A2 ...As denotes the product of the ideals under all possible associations. The prime radical, P(R), of R is the intersection of all prime ideals of R and can be characterized as the set of all elements r E R such that every complementary system M of R which contains r also contains 0. A set M in R is a complementary system if whenever A1,A2, ...As ares ideals of R such thatAi n M# 0for i = 1, 2, ...,s, then (A1A2 ... As) n M # 0 [6], [8], [10]. To make this article self-contained we mention the following three properties of P(R) which hold for any s-ring R. Proofs can be found in [6], [8] and [10]. (a) P(R) = 0 if and only if R contains no nonzero nilpotent ideals. (b) P(R/P(R)) = 0. Received by the editors May 29, 1975. AMS (MOS) subject classifications (1970). Primary 17D05, 17E05. 
<s> BIB016 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> It is known that the socle of a semiprime Goldie ring is generated by a central idempotent and that a prime Goldie ring with a nonzero socle is a simple artinian ring. We prove the extension of these results to alternative rings. We also give an analogue of Goldie's theorem for alternative rings. A Goldielike theorem was obtained earlier by the authors for noetherian alternative rings by a quite different method. <s> BIB017 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> We determine the nilpotent right alternative rings of prime power oirder pn n ≥ 4, which are not left alternative. Those which are strongly right alternative become Bol loops under the circle operation. The smallest Bol circle loop has order 16. There are six such loops, all of which appear to be new. <s> BIB018 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> ABSTRACT We study the notion of a (general) left quotient ring of an alternative ring and show the existence of a maximal left quotient ring for every alternative ring that is a left quotient ring of itself. <s> BIB019 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> In this paper we develop a Fountain–Gould-like Goldie theory for alternative rings. We characterize alternative rings which are Fountain–Gould left orders in semiprime alternative rings coinciding with their socle, and those which are Fountain–Gould left orders in semiprime artinian alternative rings. <s> BIB020 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> I n this paper we prove that if R is a semiprime and purely non-associative right alternative ring, then N = C. Also we show that the right nucleus N r = C if R is purely non-associative provided that either R has no locally nilpotent ideals or R is semi prime and finitely generated mod N r <s> BIB021 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> We introduce a notion of left nonsingularity for alternative rings and prove that an alternative ring is left nonsingular if and only if every essential left ideal is dense, if and only if its maximal left quotient ring is von Neumann regular (a Johnson-like Theorem). Finally, we obtain a Gabriel-like Theorem for alternative rings. <s> BIB022 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> Let D be a mapping from an alternative ring R into itself satisfying D(ab) = D(a)b + aD(b) for all a; b 2 R. Under some conditions on R, we show that D is additive. <s> BIB023 </s> Literature Survey on Non-Associative Rings and Developments <s> Alternative Rings (1930-2015) <s> Some properties of the right nucleus in generalized right alternative rings have been presented in this paper. In a generalized right alternative ring R which is finitely generated or free of locally nilpotent ideals, the right nucleus Nr equals the center C. Also, if R is prime and Nr  C, then the associator ideal of R is locally nilpotent. Seong Nam [5] studied the properties of the right nucleus in right alternative algebra. He showed that if R is a prime right alternative algebra of char. ≠ 2 and Right nucleus Nr is not equal to the center C, then the associator ideal of R is locally nilpotent. 
But the problem arises when it comes to the study of generalized right alternative rings, as the ring does not absorb the right alternative identity. In this paper we consider our ring to be a generalized right alternative ring and try to prove the results of Seong Nam [5]. At the end of this paper we give an example to show that the generalized right alternative ring is not right alternative. <s> BIB024
To the best of our knowledge, the first detailed discussion of alternative rings was started in 1930 by the German mathematician Zorn. An alternative ring R is defined by the system of identities (ab)b = a(bb) (right alternativeness) and (aa)b = a(ab) (left alternativeness) for all a, b ∈ R. In 1930, Zorn BIB001 mentioned the theorem of Artin, which states that every two elements of an alternative ring generate an associative sub-ring. By a result of Zorn BIB001 , it was observed that the only non-associative summands permitted are the finite Cayley-Dickson algebras (the first examples of alternative rings) with divisors of zero. In 1933, Zorn also discussed the finite-dimensional case of alternative rings. In 1935, Moufang proved a generalization for alternative division rings: if (a, b, c) = 0, then a, b, c generate a division sub-ring which is associative. For more details regarding the finite-dimensional case the reader is referred to the contributions of Jacobson , Albert , Schafer BIB006 BIB007 and Dubisch and Perlis . In 1943, Schafer studied the alternative division algebras of degree two, independently of Zorn's results. In 1946, Forsythe and McCoy BIB003 observed that the approach showing that an associative regular ring without nonzero nilpotent elements is a sub-direct sum of associative division rings is easily extendable to alternative rings. In 1947, Smiley BIB004 studied alternative regular rings without nilpotent elements and showed that every alternative algebraic algebra which has no nilpotent elements is a sub-direct sum of alternative division algebras. Kaplansky in 1947 presented many of the preliminary results, which were valid at least for special alternative rings. Smiley BIB005 in 1948 studied the concept of the radical of an alternative ring, discussed the radicals of algebras of infinite order, and was also able to show that Jacobson's definition of the radical of an associative ring applies to alternative rings. In 1948, Kaplansky [125] also obtained the Cayley numbers as the only non-associative alternative division ring which is both connected and locally connected, and he conjectured that a similar result holds in the totally disconnected, locally compact case. A ring is defined to be right alternative in case ab·b − a·bb = 0 is an identical relation in the ring. Right alternative algebras were first studied by Albert in 1949, who showed that a semi-simple right alternative algebra over a field of characteristic 0 is alternative. In 1950, Brown and McCoy showed that every alternative ring has a greatest regular ideal. Also in 1950, the work of Skornyakov provided a full description of alternative but not associative division rings: he showed that each such division ring is an algebra of dimension 8 over some field. Later, in 1951, Bruck and Kleinfeld BIB008 proved the result of Skornyakov independently. In 1951, Skornyakov noted that the study of alternative rings in general began with the study of alternative division rings, which in the theory of projective planes play the role of the so-called natural division rings of alternative planes. Another result concerns right alternative division rings, which are of geometrical interest since they arise as coordinate systems of certain projective planes in which a configuration weaker than Desargues' is assumed to hold. In this connection Skornyakov showed in 1951 that a right alternative division ring of characteristic not 2 is alternative.
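Since the Cayley-Dickson algebras recur throughout this section as the prototypical alternative but non-associative rings, a small computational illustration may help. The following Python sketch is our own illustration (not taken from the cited papers): it builds the Cayley-Dickson doubling numerically, with the reals at level 0 and the octonions (Cayley numbers) at level 3, and spot-checks the left and right alternative laws defined above together with the generic failure of full associativity.

    import random

    def conj(x):
        # conjugation in the Cayley-Dickson doubling: (a, b)* = (a*, -b); reals are fixed
        if isinstance(x, tuple):
            a, b = x
            return (conj(a), neg(b))
        return x

    def neg(x):
        return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

    def add(x, y):
        return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

    def sub(x, y):
        return add(x, neg(y))

    def mul(x, y):
        # Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)
        if isinstance(x, tuple):
            a, b = x
            c, d = y
            return (sub(mul(a, c), mul(conj(d), b)),
                    add(mul(d, a), mul(b, conj(c))))
        return x * y

    def rand_elem(level):
        # level 0 = reals, 1 = complexes, 2 = quaternions, 3 = octonions
        if level == 0:
            return random.uniform(-1.0, 1.0)
        return (rand_elem(level - 1), rand_elem(level - 1))

    def flatten(x):
        return flatten(x[0]) + flatten(x[1]) if isinstance(x, tuple) else [x]

    def close(x, y, tol=1e-9):
        return all(abs(p - q) < tol for p, q in zip(flatten(x), flatten(y)))

    random.seed(0)
    x, y, z = rand_elem(3), rand_elem(3), rand_elem(3)   # three random octonions
    print(close(mul(mul(x, x), y), mul(x, mul(x, y))))   # left alternative law (xx)y = x(xy): True
    print(close(mul(mul(y, x), x), mul(y, mul(x, x))))   # right alternative law (yx)x = y(xx): True
    print(close(mul(mul(x, y), z), mul(x, mul(y, z))))   # full associativity: False in general

Running the script prints True, True, False, which is exactly the behaviour one expects from an alternative ring that is not associative.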
Some attention had been given to right alternative rings when Skornyakov in 1951 established the result that every right alternative division ring is alternative. In 1952, Albert proved results for simple alternative rings, based on the properties given by Zorn BIB001 . Kleinfeld in 1953 proved that for the alternativity of a right alternative ring it is sufficient that [x, y, z]² = 0 implies [x, y, z] = 0. Kleinfeld BIB009 in 1953 also proved that even simplicity (that is, having no proper two-sided ideals) of an alternative but not associative ring implies that the ring is a Cayley-Dickson algebra. In 1953, Kleinfeld further proved that, since right alternative rings without nilpotent elements are known to be alternative, it follows that free right alternative rings with two or more generators have non-zero nilpotent elements. In 1955, Kleinfeld strengthened his results by proving that any alternative but not associative ring, in which the intersection of all the two-sided ideals is not a nil ideal, is a Cayley-Dickson algebra over some field. Hence the class of alternative rings is much larger than the class of associative rings. San Soucie BIB010 in 1955 studied alternative and right alternative rings of characteristic 2 (2x = 0) and proved that if R is a right alternative division ring of characteristic two, then R is alternative if and only if R satisfies w(xy·x) = (wx·y)x. In 1957, Kleinfeld proved the very interesting identity [(ab − ba)², c, d](ab − ba) = 0, and he also showed that the free alternative ring has zero divisors. Smiley BIB011 in 1957 analyzed the proof of Kleinfeld and noticed that it is sufficient to check only these cases: x = y, x = yz − zy, x = (yz − zy)y, x = [y, y, z], or z = wy and x = [y, y, w] for some w. He regarded the study of free right alternative rings as one of the main tasks of the theory of alternative rings. In 1960, Hashimoto introduced the notion of *-modularity of right ideals of an alternative ring and showed a connection between the intersection of all the *-modular maximal right ideals and the radical SR(A) of an alternative ring A. In 1963, it was shown by Kleinfeld that in an arbitrary alternative ring the fourth power of every commutator lies in the nucleus. Also Dorofeev in 1963 proved that in a free alternative ring with six or more generators there exist elements a, b, c, d, r, s such that ((a, b)(c, d) + (c, d)(a, b), r, s) ≠ 0. In 1965, Slater asserted that a prime alternative ring R of characteristic not 3 that is not associative can be embedded in a Cayley-Dickson algebra over the quotient field of the center of R. In 1967, Humm BIB012 discussed a necessary and sufficient condition for a simple right alternative ring to be alternative, assuming throughout that the characteristic is not 2 or 3. The treatment required an idempotent e in R and used the subspaces R1(e) and R0(e) of the Albert decomposition . In 1967, Humm and Kleinfeld BIB013 showed by example that the square of a commutator need not always lie in the nucleus. They also showed the existence of specific nilpotent elements in the free alternative ring on four or more generators, and proved abstractly the existence of an ideal I ≠ 0 with I² = 0. Slater in 1967, in his paper on the nucleus and center in alternative rings, considered an arbitrary alternative ring R with nucleus N and center Z. Moreover, he investigated natural conditions on R, the weakest possible, that ensure the desired relationship between N and Z.
He also applied the results to amplify comments by Humm and Kleinfeld on free alternative rings, and gave examples of alternative rings. Slater in 1968 discussed ideals in semiprime alternative rings; the results of that paper, so far as they concern a given right ideal A, did not require semiprimeness of R. In 1969, Kleinfeld worked on right alternative rings without proper right ideals; he showed that a right alternative ring R without proper right ideals, of characteristic not two, containing idempotents e and 1 with e ≠ 1, such that ex = e(ex) for all x ∈ R, must be alternative and hence a Cayley vector-matrix algebra of dimension 8 over its center. Moreover, Slater in 1969 proved the natural extension to arbitrary rings of the classical Wedderburn-Artin theorem for associative ones. He also considered the special case where R is in addition purely alternative, that is, has no nonzero nuclear ideals. He also listed virtually all the radicals that have been proposed for (alternative) rings in the literature, and showed that on the class of rings with D.C.C. they all coincide. In addition, he discussed analogues, for arbitrary rings with D.C.C., of the classical results concerning idempotents in associative rings with D.C.C. In 1970, Slater discussed the class of admissible models. Since a prime ring need not be an algebra over a field, he intended to extend the class of admissible models at least slightly. For example, the Cayley integers are a prime ring that is not a Cayley-Dickson algebra, much as an integral domain is prime but need not be a field. Moreover, he defined a Cayley-Dickson ring (CD ring) R to be a ring that can be imbedded in a certain natural way in a CD algebra over the quotient field of the (nonzero) center Z of R. He then showed that if R is cancellative alternative but not associative (and of characteristic not 2) then R is a CD ring whose associated CD algebra is a CD division algebra. The added generality in the paper comes from the fact that a prime ring may have zero divisors: if R is prime with zero divisors (and not associative, and 3R ≠ (0)), then the associated CD algebra will be a split CD algebra instead of a CD division algebra. Again in 1970, Slater discussed localization results on ideals and right ideals of prime and weakly prime rings. He also showed that if some exceptional weakly prime ring exists, then there exists an exceptional prime ring possessing a particular collection of properties. Finally, he gave examples to show that if some exceptional ring exists, then the restrictions on characteristic imposed in most of the results were not excessive. Slater in 1970 proved the natural extension to alternative rings of the classical Wedderburn-Artin theorem for semiprime associative rings, and considered the extension to arbitrary alternative rings of the classical methods, as well as the secondary results of the classical associative theory. He also discussed parallels in the alternative theory to the classical connection between primitive idempotents and minimal right ideals, and examined the relation between these results and the classical structure theory established by Zorn. Also in 1970, Slater noted that the main facts about the minimal ideals and minimal right ideals of an associative ring are well known, and proved corresponding results for an alternative ring R. He made no restriction on the characteristic of R, but often imposed restrictions of semiprimeness type.
Slater in 1971 was concerned mainly with the extension to arbitrary (alternative) rings of Hopkins' theorem BIB002 that in an associative ring with D.C.C. on right ideals the (say, nil) radical is nilpotent. He also reworked and modified Zhevlakov's arguments to obtain nilpotence of S(R) without restriction on characteristic. It turned out that much of the work could be done more simply by working with two-sided ideals, as opposed to the right ideals used by Zhevlakov. As a consequence, a substantial part of the work was done with the assumption of D.C.C. only on two-sided ideals, and the result on S(R) appeared as an easy corollary of this work. Along the way he also improved the result that in a ring R with D.C.C. on two-sided ideals any solvable ideal is nilpotent, by allowing Baer-radical ideals in place of solvable ideals. In 1971, Hentzel BIB014 discussed right alternative rings with idempotents, assuming all rings to have characteristic prime to 2 and 3; in his paper he used the Albert decomposition with respect to an idempotent for right alternative rings. In 1971, Kleinfeld noted that alternative as well as Lie rings satisfy all of the following four identities: (i) (x², y, z) = x(x, y, z) + (x, y, z)x, (ii) (x, y², z) = y(x, y, z) + (x, y, z)y, (iii) (x, y, z²) = z(x, y, z) + (x, y, z)z, (iv) (x, x, x) = 0, where the associator (a, b, c) is defined by (a, b, c) = (ab)c − a(bc). He also proved that if R is a ring of characteristic different from two satisfying (iv) and any two of the first three identities, then a necessary and sufficient condition for R to be alternative is that whenever a, b, c are contained in a sub-ring S of R which can be generated by two elements and (a, b, c)² = 0, then (a, b, c) = 0. Also, all such division rings must be alternative and hence either Cayley-Dickson division algebras or associative. Also Kleinfeld in 1971 investigated rings R of characteristic different from two; the main results concerned either rings which have an idempotent e ≠ 1, or those which have no nilpotent elements. He also proved that whenever R is simple and contains an idempotent e ≠ 1, then R must be alternative and hence either a Cayley vector-matrix algebra or associative. In 1975, Thedy in his paper BIB015 analyzed two natural concepts in a right alternative algebra R: the sub-module M generated by all alternators (x, x, y), and a new nucleus N. The later sections of his study dealt mainly with results on simple right alternative algebras: a simple 2-torsion free right alternative algebra is either alternative, hence associative or a Cayley algebra over its center. Also in 1975, the work of Hentzel dealt with a GRA (generalized right alternative) ring R. It was shown that a certain ideal I of R is commutative and is the sum of ideals of R whose cube is zero. This means that if R is simple, or even nil-semisimple, then R is right alternative. Since all the hypotheses on R are consequences of the right alternative law, showing that R is right alternative is as strong a result as possible. He also showed that the ideal generated by each associator of the form (a, b, b) is a nilpotent ideal of index at most three. Miheev in 1975 constructed a finite-dimensional, prime, right alternative nil algebra with nilpotent heart. Thus a prime right alternative ring need not be s-prime.
In 1976, Rich BIB016 adapted the characterization by Levitzki in 1951 of the prime radical of an associative ring R as the set of strongly nilpotent elements of R so as to apply to a wide class of non-associative rings. As a consequence it was shown that the prime radical is a hereditary radical for the class of alternative rings and that the prime radical of an alternative ring coincides with the prime radical of its attached Jordan ring. In 1978, Rose first gave a brief introduction to Cayley-Dickson algebras. He then axiomatized split Cayley-Dickson algebras over algebraically closed fields and showed that this theory is ℵ1-categorical, model complete, and the model completion of the theory of Cayley-Dickson algebras, and he studied stability in alternative rings. He also generalized ℵ0-categoricity in associative rings to ℵ0-categoricity in alternative rings. In 1980, Wene characterized those associative rings with involution in which each symmetric element is nilpotent or invertible. Analogous results were obtained for alternative rings. The restriction was further relaxed to require only that each symmetric element is nilpotent or some multiple of it is a symmetric idempotent. Widiger in 1983 considered the class of all alternative rings in which every proper right ideal is maximal, using the theory of artinian rings for his study. Kleinfeld in 1983 showed that a semiprime alternative ring can have no nonzero anti-commutative elements; however, this is not so for prime right alternative rings in general. In 1988, Essannouni and Kaidi proved the natural extension to alternative rings of the classical Goldie theorem for semiprime associative rings. In 1994, Essannouni and Kaidi BIB017 noted that the socle of a semiprime Goldie ring is generated by a central idempotent and that a prime Goldie ring with a nonzero socle is a simple artinian ring, and they extended these results to alternative rings. They also gave an analogue of Goldie's theorem for alternative rings; a Goldie-like theorem had been obtained earlier by the authors for noetherian alternative rings by a quite different method. Also in 1994, Kleinfeld and Smith called a ring s-prime if the two-sided annihilator of a nonzero ideal must be zero; in particular, any simple ring or prime (−1, 1) ring is s-prime. They showed that a nonzero s-prime right alternative ring of characteristic ≠ 2 cannot be right nilpotent. In 2000, Goodaire BIB018 showed that for a right alternative ring R, the magma (R, •) under the circle operation is right alternative, that is, (x • y) • y = x • (y • y), and that if R is strongly right alternative, then (R, •) is a Bol magma with neutral element 0. Moreover, in 2001, Goodaire showed that in a strongly right alternative ring with unity, if U(R) is closed under multiplication, then U(R) is a Bol loop. Kunen and Phillips in 2005 partially answered two questions of Goodaire by showing that in a finite, strongly right alternative ring, the set of units (if the ring has a unity) is a Bol loop under ring multiplication, and the set of quasi-regular elements is a Bol loop under circle multiplication. Also in 2005, Cárdenas et al. BIB019 studied the notion of a (general) left quotient ring of an alternative ring and showed the existence of a maximal left quotient ring for every alternative ring that is a left quotient ring of itself. In 2007, Lozano and Molina BIB020 developed a Fountain–Gould-like Goldie theory for alternative rings.
They characterized alternative rings which are Fountain–Gould left orders in semiprime alternative rings coinciding with their socle, and those which are Fountain–Gould left orders in semiprime artinian alternative rings. Furthermore, Bharathi et al. BIB021 in 2013 proved that if R is a semiprime and purely non-associative right alternative ring, then N = C. They also showed that the right nucleus Nr = C if R is purely non-associative, provided that either R has no locally nilpotent ideals or R is semiprime and finitely generated mod Nr. In 2014, Cárdenas et al. BIB022 introduced a notion of left non-singularity for alternative rings and proved that an alternative ring is left non-singular if and only if every essential left ideal is dense, if and only if its maximal left quotient ring is von Neumann regular. Finally, they obtained a Gabriel-like theorem for alternative rings. Ferreira and Nascimento BIB023 in 2014 noted that the relationship between the multiplicative and the additive structures of a ring has become an interesting and active topic in ring theory. Focusing on the special case of an alternative ring, they investigated the problem of when a derivable map must be an additive map for the class of alternative rings. Recently, in 2015, Satyanarayana et al. proved a peculiar property of the nucleus N in an alternative ring R: the nucleus contracts to the centre C when the alternative ring is the octonions, and expands to the whole algebra when the alternative ring is associative. Also in 2015, Jayalakshmi and Latha BIB024 presented some properties of the right nucleus in generalized right alternative rings, showing that in a generalized right alternative ring R which is finitely generated or free of locally nilpotent ideals, the right nucleus Nr equals the center C. Taking the ring to be a generalized right alternative ring, they tried to prove the results of Ng Seong-Nam in this more general setting, and along the way gave an example to show that a generalized right alternative ring need not be right alternative.
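Goodaire's and Kunen-Phillips' results earlier in this section attach loops to rings via the circle operation. Assuming that the circle operation meant there is the standard one, x • y = x + y + xy (this specific formula is our assumption, used only for illustration), the short Python check below shows the prototype behaviour in the associative case: • is associative with neutral element 0. This is why the quasi-regular elements of an associative ring form a group, and why only a Bol loop, rather than a group, is expected in the strongly right alternative case.

    import numpy as np

    def circle(x, y):
        # circle operation x • y = x + y + xy, with matrix multiplication playing the role of xy
        return x + y + x @ y

    rng = np.random.default_rng(0)
    x, y, z = rng.standard_normal((3, 3, 3))          # three random 3x3 matrices
    zero = np.zeros((3, 3))
    print(np.allclose(circle(circle(x, y), z),
                      circle(x, circle(y, z))))       # associativity of •: True
    print(np.allclose(circle(x, zero), x),
          np.allclose(circle(zero, x), x))            # 0 is the neutral element: True True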
Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> The primary aim of this paper is to study mappings J of rings that are additive and that satisfy the conditions ::: ::: $$ {\left( {{a^2}} \right)^J} = {\left( {{a^J}} \right)^2},\;{\left( {aba} \right)^J} = {a^J}{b^J}{a^J} $$ ::: ::: (1) ::: ::: Such mappings will be called Jordan homomorphisms. If the additive groups admit the operator 1/2 in the sense that 2x = a has a unique solution (1/2)a for every a, then conditions (1) are equivalent to the simpler condition ::: ::: $$ {\left( {ab} \right)^J} + {\left( {ba} \right)^J} = {a^J}{b^J} + {b^J}{a^J} $$ ::: ::: (2) ::: ::: Mappings satisfying (2) were first considered by Ancochea [1], [2](1). The modification to (1) is essentially due to Kaplansky [13]. Its purpose is to obviate the necessity of imposing any restriction on the additive groups of the rings under consideration. <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> In a previous paper [4](1) we have defined a special Jordan ring to be a a subset of an associative ring which is a subgroup of the additive group and which is closed under the compositions a→a 2and (a, b)→aba. Such systems are also closed under the compositions (a, b) → ab+ba= {a, b} and (a, b, c) → abc+cba. The simplest instances of special Jordan rings are the associative rings themselves. In our previous paper we studied the (Jordan) homomorphisms of these rings. These are the mappings J of associative rings such that ::: ::: $$ {\left( {a + b} \right)^J} = {a^J} + {b^J},\;{\left( {{a^2}} \right)^J} = {\left( {{a^J}} \right)^2},\;{\left( {aba} \right)^J} = {a^J}{b^J}{a^J} $$ ::: ::: (1) ::: ::: A second important class of special Jordan rings is obtained as follows. Let \( H \) be an associative ring with an involution a → a *, that is, a mapping a→a * such that ::: ::: $$ {\left( {a + b} \right)^*} = {a^*} + {b^*},\;{\left( {ab} \right)^*} = {b^*}{a^*},\;{a^{**}} = a $$ ::: ::: (2) ::: ::: Let \( H \) denote the set of self-adjoint elements h = h *. Then is a special Jordan ring. In this paper we shall study the homomorphisms of the rings of this type. It is noteworthy that the Jordan rings of this type include those of our former paper(2). <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> Herbicidal cyanoalkanapyrazoles of the formula: (I) where N IS 3, 4, OR 5; Q is -O-CH3 or -S(O)m-CH3; where m is 0, 1 or 2; and R1 is hydrogen or methyl; V is hydrogen, fluorine or chlorine; X is fluorine, chlorine, bromine, iodine, cyano or methoxy; Y is hydrogen, fluorine, or chlorine; and Z is hydrogen or fluorine; PROVIDED THAT: <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> A conveyor mechanism for automatically removing articles, such as cheeses, floating in a liquid includes a horizontal section adapted to be positioned below the surface of the liquid and an integral upwardly inclined section. Articles that float upon the horizontal section are moved to the inclined section which lifts them out of the liquid and delivers them to an elevated point. The articles are prevented from jamming the conveyor by moving stepped walls located on either side of the horizontal section at the location where jamming is likely to occur. The stepped walls move longitudinally back and forth 180 DEG out of phase with one another to gently bump and align the articles. 
In the preferred embodiment, a single motor drives the conveyor and rotates cam wheels which move the stepped walls. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> The main purpose of this paper is to give an external characterization of the Levitzki radical of a Jordan ring 2f as the intersection of a family of prime ideals W. This characterization coincides with that of associative rings which was given by Babic in [1I]. Applying this characterization, it is easy to see that the Levitzki radical of a Jordan ring contains the prime radical of the same ring. For associative rings the same statement is well known, since the prime radical in associative rings is called the Baer radical. If the minimal condition on ideals holds on Jordan ring 2, then the Levitzki radical, L(2f), and the prime radical, R(2f) of 2f coincide. Throughout this paper, any Jordan ring 2f, that is a (nonassociative) ring satisfying (1) ab = ba, and (2) a2(ab) =a(a2b) for all a, b in 2t, and any of its subrings satisfy the conditions, (3) 2a = 0 implies a = 0 and (4) if a is in a subring C of 2 then there exists a unique element x in C such that 2x = a. In a Jordan ring, the following identity (*) is well known. One can find the proof in [3 ]. <s> BIB005 </s> Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> The object of this paper is to examine some radical properties of quadratic Jordan algebras and to show that under certain conditions, R(3) = 3 nR(3) where Q3 is an ideal of a quadratic Jordan algebra 3, R(3) is the radical of Q3, and R(3) is the radical of 3. 1. Preliminaries. We adopt the notation and terminology of an earlier paper [2] concerning quadratic Jordan algebras (defined by the quadratic operators UJ) as opposed to linear Jordan algebras (defined by the linear operators Lx). Thus we have a product U,y linear in y and quadratic in x satisfying the following axioms as well as their linearizations: (UQJ I) U1=I (1 the unit); (UQJ II) U (u)y = Ux U U x; (UQJ III) UXV',X =VX ,UX (VXgz={xyz}1=UX,y). Throughout this paper 3 will denote a quadratic Jordan algebra over an arbitrary ring of scalars (D. Define a property R of a class of rings (e.g. associative rings or Jordan rings) to be a radical property it it satisfies the following three conditions [1]: (a) Every homomorphic image of an R ring is again an R ring. (b) Every ring 3 contains an R ideal R(13) which contains every other R ideal of 3. The maximal R ideal R(Q3) is called the R radical of 3. (c) For Q3 an ideal of 3, if Q3 and 3/Q3 are R rings, then so is 3. An immediate consequence of this definition is R(Q/R(Q))=O. If R(f=O, 0Q3 is said to be R semisimple. Many well-known radical properties, but not all, satisfy a further condition: (d) Every ideal of an R ring is again an R ring (i.e. property R is inherited by ideals of an R ring). If a radical property satisfies condition (d) that property is called hereditary. Received by the editors August 2, 1971. AMS 1969 subject classifications. Primary 1740. <s> BIB006 </s> Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> This volume contains the proceedings of the Third International Conference on Non-Associative Algebra and Its Applications, held in Oviedo, Spain, July 12-17, 1993. The conference brought together specialists from all over the world who work in this field. All aspects of non-associative algebra are covered. 
Topics range from purely mathematical subjects to a wide spectrum of applications, and from state-of-the-art articles to overview papers. This collection should point the way for further research. The volume should be of interest to researchers in mathematics as well as those whose work involves the application of non-associative algebra in such areas as physics, biology and genetics. <s> BIB007 </s> Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> In this book, Kevin McCrimmon describes the history of Jordan Algebras and he describes in full mathematical detail the recent structure theory for Jordan algebras of arbitrary dimension due to Efim Zel'manov. To keep the exposition elementary, the structure theory is developed for linear Jordan algebras, though the modern quadratic methods are used throughout. Both the quadratic methods and the Zelmanov results go beyond the previous textbooks on Jordan theory, written in the 1960's and 1980's before the theory reached its final form. ::: ::: This book is intended for graduate students and for individuals wishing to learn more about Jordan algebras. No previous knowledge is required beyond the standard first-year graduate algebra course. General students of algebra can profit from exposure to nonassociative algebras, and students or professional mathematicians working in areas such as Lie algebras, differential geometry, functional analysis, or exceptional groups and geometry can also profit from acquaintance with the material. Jordan algebras crop up in many surprising settings and can be applied to a variety of mathematical areas. ::: ::: Kevin McCrimmon introduced the concept of a quadratic Jordan algebra and developed a structure theory of Jordan algebras over an arbitrary ring of scalars. He is a Professor of Mathematics at the University of Virginia and the author of more than 100 research papers. <s> BIB008 </s> Literature Survey on Non-Associative Rings and Developments <s> Jordan Rings(1933-2011) <s> The aim of this paper is to offer an overview of the most important applications of Jordan structures inside mathematics and also to physics, up-dated references being included. For a more detailed treatment of this topic see - especially - the recent book Iordanescu [364w], where sugestions for further developments are given through many open problems, comments and remarks pointed out throughout the text. ::: Nowadays, mathematics becomes more and more nonassociative and my prediction is that in few years nonassociativity will govern mathematics and applied sciences. ::: Keywords: Jordan algebra, Jordan triple system, Jordan pair, JB-, JB*-, JBW-, JBW*-, JH*-algebra, Ricatti equation, Riemann space, symmetric space, R-space, octonion plane, projective plane, Barbilian space, Tzitzeica equation, quantum group, B\"acklund-Darboux transformation, Hopf algebra, Yang-Baxter equation, KP equation, Sato Grassmann manifold, genetic algebra, random quadratic form. <s> BIB009
In modern mathematics, an important notion is that of a non-associative structure. This kind of structure is characterized by the fact that the product of elements satisfies a more general law than the associativity law. Jordan structures were introduced in 1932-1933 by the German physicist Pascual Jordan (1902-1980) in his algebraic formulation of quantum mechanics. The study of Jordan structures and their applications is at present a wide-ranging field of mathematical research. The systematic study and further development of general Jordan algebras was started by Albert in 1946. One can define a Jordan ring as a commutative non-associative ring satisfying the Jordan identity, i.e. (xy)(xx) = x(y(xx)). In 1948, Jacobson observed that semi-isomorphisms were nothing more or less than ordinary isomorphisms of the non-associative Jordan ring determined by the given associative ring. In his paper he introduced the Jordan multiplication a.b = 1/2(ab + ba), observing that if ordinary multiplication is replaced by this product, one obtains the Jordan ring determined by the associative ring. He also determined the isomorphisms between any two simple Jordan rings. Jacobson in 1948 also discussed the centre of a non-associative ring: for any non-associative ring one can define the center to be the totality of elements c that commute with every element a, c.a = a.c. It was also observed that if a ring contains a nilpotent element in its center then it contains a nilpotent two-sided ideal. In 1950, Jacobson and Rickart BIB001 defined a special Jordan ring to be a subset of an associative ring which is a subgroup of the additive group and which is closed under the compositions a → a² and (a, b) → aba. Such systems are also closed under the compositions (a, b) → ab + ba = {a, b} and (a, b, c) → abc + cba. The simplest instances of special Jordan rings are the associative rings themselves. The authors also studied the (Jordan) homomorphisms of these rings. Jacobson and Rickart BIB002 in 1952 considered the set H of self-adjoint elements h = h* of an associative ring with involution; such a set H is a special Jordan ring, and they studied the homomorphisms of rings of this type. They also obtained an analogue of the matrix method for the rings H, and proved that any Jordan homomorphism of H can be extended to an associative homomorphism of U. They also showed that this result can be extended to locally matrix rings, and in this form it is applicable to involutorial simple rings with minimal one-sided ideals. On the way they obtained the Jordan isomorphisms of the Jordan ring of self-adjoint elements of an involutorial primitive ring with minimal one-sided ideals onto a second Jordan ring of the same type. Schafer in 1955, by comparison, began the study of the class of so-called non-commutative J-rings (Jordan rings); the study of this class of rings is contained in the theory of algebras of finite dimension, and for more details the reader is referred to . In 1956, Hall, Jr. established the identity {aba}² = {a{ba²b}a}, which holds in abstract Jordan rings; this was immediate for special Jordan rings. The identity was proved by finding a partial basis for the free Jordan ring with two generators, the basis being found for all elements of degree at most 5 and for elements of degree 4 in a and degree 2 in b. Herstein in 1957 introduced the idea of a derivation of a Jordan ring.
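As a quick numerical sanity check of the two definitions just quoted (our own illustration, not part of the surveyed work), the following Python lines verify that the Jordan multiplication a.b = 1/2(ab + ba) on square matrices is commutative and satisfies the Jordan identity (xy)(xx) = x(y(xx)), even though matrix multiplication itself is not commutative.

    import numpy as np

    def jp(a, b):
        # Jordan multiplication a.b = (ab + ba)/2 built from the associative matrix product
        return (a @ b + b @ a) / 2

    rng = np.random.default_rng(1)
    x, y = rng.standard_normal((2, 4, 4))             # two random 4x4 matrices
    xx = jp(x, x)
    print(np.allclose(jp(x, y), jp(y, x)))            # commutativity: True
    print(np.allclose(jp(jp(x, y), xx),
                      jp(x, jp(y, xx))))              # Jordan identity (xy)(xx) = x(y(xx)): True
    print(np.allclose(x @ y, y @ x))                  # the underlying matrix product: False in general

This is exactly the "special Jordan ring" construction of Jacobson and Rickart described above, carried out over the associative ring of 4x4 real matrices.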
Herstein mentioned that for any associative ring A, a new ring, the Jordan ring of A, can be obtained from its operations and elements by defining the product to be a ∘ b = ab + ba for all a, b ∈ A. In 1958, Shirshov made a detailed study of non-associative structures, including Jordan rings, and constructed some special Jordan rings. In 1963, Brown pointed out a problem of interest in non-associative algebras, regarding the study of generalized Cayley algebras and exceptional simple Jordan algebras, which are closely related to the exceptional simple Lie algebras. In his work, he defined a new class of simple non-associative algebras of dimension 56 over their centers possessing nondegenerate trace forms, such that the derivations and left multiplications of elements of trace zero generate Lie algebras of type E7. Moreover, in 1964, Kleinfeld considered the middle nucleus and center of a simple Jordan ring, establishing that in a simple Jordan ring of characteristic ≠ 2 the middle nucleus and center coincide. McCrimmon BIB003 in 1966 discussed the structure, characteristics and general theory of Jordan rings. A Jordan ring (i.e., an algebra over the ring of integers) is called non-degenerate if it has no proper absolute zero divisors. He also described a Jacobson ring as a Jordan ring such that the descending chain condition holds for Peirce quadratic ideals and each nonzero Peirce quadratic ideal contains a minimal quadratic ideal; these rings play a role in the Jordan theory analogous to that played by the artinian rings in the associative theory. In 1968, Tsai pointed out that several definitions of radicals for general non-associative rings had been given in the literature. The u-prime radical of Brown and McCoy, given in , is similar to the prime radical of an associative ring; however, it depends on the particular chosen element u. The purpose of Tsai's paper was to propose a definition of the Brown-McCoy type prime radical for Jordan rings so that the radical is independent of the element chosen. Tsai in 1969 proved that in any Jordan ring J there exists a maximal von Neumann regular ideal M. The existence of such an ideal in an associative ring A is well known; in fact, M can be characterized as the set of all elements a in A such that every element of the principal ideal of A generated by a is a regular element. He showed that the same characterization holds for Jordan rings. Also, in 1969, McCrimmon gave a self-contained proof which does not depend on the classification of simple rings. He took motivation for this proof from the work of Jacobson BIB004 , whose proof used the structure theory to reduce the problem to the case of simple rings and then checked the result for each of the various types of simple rings that can occur. Furthermore, in 1970, Meyberg established a proof of the Fundamental Formula, which is considered very important in Jordan rings, giving a comparatively short proof compared with the one first given by Jacobson BIB004 . Osborn in 1970 presented three related theorems, one on the structure of Jordan rings in which every element is either nilpotent or invertible, and two on the structure of associative rings with involution in which every symmetric element is either nilpotent or invertible.
The first of these theorems was a generalization of a well-known result on the structure of Jordan algebras which stated that if each element of a Jordan algebra J can be expressed as the sum of a nilpotent element and a scalar multiple of 1, then the nilpotent elements of J form an ideal. Also, Tsai BIB005 in 1970 gave an external characterization of the Levitzki radical of a Jordan ring as the intersection of a family of prime ideals. He noted that, applying this characterization, it is easy to see that the Levitzki radical of a Jordan ring contains the prime radical of the same ring. For associative rings the same statement was well known, the prime radical in associative rings being called the Baer radical. If the minimal condition on ideals holds in a Jordan ring U, then the Levitzki radical L(U) and the prime radical R(U) of U coincide. In 1971, McCrimmon derived a general structure theory for noncommutative Jordan rings. He defined a Jacobson radical and showed that it coincides with the nil radical for rings with descending chain condition on inner ideals; semisimple rings with D.C.C. were shown to be direct sums of simple rings, and the simple rings to be essentially the familiar ones. In addition, he obtained results, which seem to be new even in characteristic ≠ 2, concerning algebras without finiteness conditions. He also showed that an arbitrary simple non-commutative Jordan ring containing two nonzero idempotents whose sum is not 1 is either commutative or quasi-associative. Erickson and Montgomery in 1971 considered the special Jordan ring R+ of an associative ring R and, when R has an involution, the special Jordan ring S of symmetric elements. They first showed that the prime radical of R equals the prime radical of R+, and that the prime radical of R intersected with S is the prime radical of S. They also gave an elementary characterization, in terms of the associative structure of R, of primeness of S. Finally, they proved that a prime ideal of R intersected with S is a prime Jordan ideal of S. Also, in 1971, Shestakov considered the class of non-commutative Jordan rings. This class generalized the class of rings introduced by Block and Thedy . He also demonstrated, for rings of the given class, a theorem on nilpotency of null rings with a maximality condition for sub-rings and for anti-commutative rings satisfying the third Engel condition . Moreover, he generalized the nilpotency of finite-dimensional null algebras of the corresponding classes, and showed that in two sufficiently broad subclasses of the class of rings considered there exists a locally nilpotent radical. He also considered finite-dimensional non-commutative Jordan algebras. In 1972, Lewand BIB006 examined some radical properties of quadratic Jordan algebras and showed that under certain conditions R(B) = B ∩ R(J), where B is an ideal of a quadratic Jordan algebra J and R denotes the radical. In 1973, Britten restricted his attention to the Jordan ring of symmetric elements of an associative ring with involution; although he considered the problem of integral domains in this restricted case, his main result was more general. He used the approach via Goldie's theorem for associative rings, i.e., T has a ring of quotients which is semi-simple artinian if and only if T is semi-prime, contains no infinite direct sum of left ideals and satisfies A.C.C. on left annihilator ideals. He observed that if one replaces semi-prime by prime, then semi-simple may be replaced by simple.
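Both the Fundamental Formula mentioned above in connection with Meyberg and Jacobson and the quadratic Jordan algebras examined by Lewand revolve around the quadratic operators U_x. The sketch below is an illustration under the assumption that the Jordan product is the special one, (ab + ba)/2 on matrices; it checks numerically that the operator U_x(y) = 2x(xy) − (xx)y then reduces to the associative sandwich xyx, and that the Fundamental Formula U_{U_x(y)} = U_x U_y U_x holds.

    import numpy as np

    def jp(a, b):
        return (a @ b + b @ a) / 2                    # special Jordan product

    def U(x, y):
        # quadratic operator U_x(y) = 2 x(xy) - (xx)y of a (linear) Jordan ring
        return 2 * jp(x, jp(x, y)) - jp(jp(x, x), y)

    rng = np.random.default_rng(2)
    x, y, z = rng.standard_normal((3, 4, 4))
    print(np.allclose(U(x, y), x @ y @ x))            # in a special Jordan ring U_x(y) = xyx: True
    print(np.allclose(U(U(x, y), z),
                      U(x, U(y, U(x, z)))))           # Fundamental Formula U_{U_x(y)} = U_x U_y U_x: True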
When T has an involution, it can then be shown that the conditions put on left ideals are implied by A.C.C. or D.C.C. on left ideals. In 1974, Britten obtained a Jordan ring of quotients for H(R), where R is a 2-torsion free semiprime associative ring with involution: conditions are put on the Jordan ring H(R) of symmetric elements which imply the existence of a ring of quotients that is a direct sum of involution-simple artinian rings. Montgomery in 1974 studied the concept of quotient rings in a special class of Jordan rings, a concept that had not been developed for Jordan algebras before. This work showed that if R is an associative ring with involution and J is a Jordan sub-ring of the symmetric elements containing the norms and traces of R, and if J is a Jordan domain with the common multiple property, then J has a ring of quotients which is a Jordan division algebra. Also, Ng Seong-Nam [211] in 1974 generalized the result of Osborn , which was originally proved for associative rings with involution, to non-associative Jordan rings with involution. In addition, Loustau in 1974 established some results regarding radical extensions of Jordan rings. Along the way, he proved analogues for Jordan rings of commutativity results for associative rings found in . Further, he extended commutativity results from to associative division algebras with involution whose symmetric elements are a radical extension of a commutative sub-algebra. In 1979, Petersson completed the solution of the classification problem for locally compact Jordan division rings initiated in . He examined locally compact non-discrete Jordan division rings and finite-dimensional Jordan division algebras over locally compact non-discrete fields, and also considered the centroid in this setting. Moreover, in 1986, Slinko described the structure of a connected component of a locally compact alternative or Jordan ring. It was shown that each locally compact semiprime alternative or Jordan ring is a topological direct sum of its connected component of zero, which is a semisimple finite-dimensional algebra over R, and a totally disconnected locally compact semiprime ring. This result can be viewed as a far-reaching generalization of the classical Pontryagin theorem on connected associative locally compact skew fields. Furthermore, it was also proved that a connected locally compact alternative or Jordan ring having no nonzero idempotents is nilpotent, and that the quasi-regular radical of a locally compact alternative or Jordan ring is closed. In 1986, Gonzalez et al. introduced an order relation in Jordan rings, proving that the relation ≤ defined by x ≤ y if and only if xy = x², x²y = xy² = x³ is an order relation for a class of Jordan rings, and that a Jordan ring R is isomorphic to a direct product of Jordan division rings if and only if ≤ is a partial order on R such that R is hyperatomic and orthogonally complete. Later, in 1987, Garijo discussed the Jordan regular ring associated with a finite JBW-algebra, showing that every finite JBW-algebra A is contained in a von Neumann regular Jordan ring with no new idempotents. Moreover, he proved that every finite JBW-algebra has the common multiple property (the non-associative analogue of the Ore condition) and that this regular ring is the (unique) total ring of quotients of A. Hentzel and Peresi [84] in 1988 introduced almost Jordan rings.
They proved that any Jordan ring of characteristic ≠ 2, 3 satisfies the identity 2((ax)x)x + a((xx)x) = 3(a(xx))x, and that this identity together with commutativity implies the Jordan identity in any semiprime ring. In 1988, Slinko generalized the result of Petersson that any continuous Jordan division ring is finite-dimensional over its centroid. Secondly, he treated the conditions for the solvability of the equations xUa = b for a ≠ 0; these conditions are required for the definition of a Jordan division ring. In 1993, Chuvakov proved that in the class of non-commutative Jordan rings satisfying the identity ([x, y], z, z) = 0, for an arbitrary radical r any ideal of an r-semisimple ring is r-semisimple. Thus the problem of the heredity of a radical r in this class is equivalent to the problem of the r-radicality of any ideal of an r-radical ring. He also proved that in the class M of non-commutative Jordan rings a locally nilpotent radical is hereditary. For a more thorough study the reader is referred to the excellent books on Jordan algebras by Braun and Koecher [14] in 1966, Jacobson BIB004 in 1968 and McCrimmon BIB008 in 2004, which also contain substantial material on general non-associative algebras. Related research can also be found in the proceedings of the international conferences on non-associative algebra and its applications BIB007 . In 2011, Iordanescu BIB009 gave an overview of the most important applications of Jordan structures inside mathematics and also to physics. Nowadays mathematics becomes more and more non-associative, and the author predicts in his paper that in a few years non-associativity will govern mathematics and applied sciences.
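Returning to the Hentzel-Peresi identity quoted at the start of this paragraph, a quick numerical illustration (again using the special Jordan product (ab + ba)/2 on matrices, which is only a convenient model and our own choice) confirms that Jordan rings obtained this way do satisfy 2((ax)x)x + a((xx)x) = 3(a(xx))x.

    import numpy as np

    def jp(a, b):
        return (a @ b + b @ a) / 2                    # special Jordan product on matrices

    rng = np.random.default_rng(3)
    a, x = rng.standard_normal((2, 4, 4))
    lhs = 2 * jp(jp(jp(a, x), x), x) + jp(a, jp(jp(x, x), x))
    rhs = 3 * jp(jp(a, jp(x, x)), x)
    print(np.allclose(lhs, rhs))                      # 2((ax)x)x + a((xx)x) = 3(a(xx))x: True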
Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> A shaped charge liner is made by forming a pair of hollow conical sub-liners, without any uncontrolled residual tangential shear stress (e.g. by a deep drawing method) so that they mate together to form a single conical liner. One sub-liner is inserted in the other and then one pair of subliner ends are locked together. Thereafter, the sub-liners are counter-rotated about their attachment point and locked together at their other pair of ends to retain the counter-rotation. This counter rotation generates opposite residual tangential shear stresses in the two sub-liners which may or may not be equal to each other. <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> A waterproof pressure sensitive adhesive laminate is provided in which a flexible plastics backing sheet is coated with a bituminous adhesive composition containing a minor proportion of rubber or thermoplastic polymer. The backing sheet is reinforced with a mesh or a woven or non-woven fabric which is embedded in the sheet and provides substantial resistance to stretching. <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Let L be a loop, written multiplicatively, and F an arbitrary field. Define multiplication in the vector space A, of all formal sums of a finite number of elements in L with coefficients in F, by the use of both distributive laws and the definition of multiplication in L. The resulting loop algebra A(L) over F is a linear nonassociative algebra (associative, if and only if L is a group). An algebra A is said to be power associative if the subalgebra FI[x] generated by an element x is an associative algebra for every x of A. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> In this paper, we show that certain well known theorems concerning units in integral group rings hold more generally for integral loop rings which are alternative. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> This paper first settles the “isomorphism problem” for alternative loop rings; namely, it is shown that a Moufang loop whose integral loop ring is alternative is determined up to isomorphism by that loop ring. Secondly, it is shown that every normalized automorphism of an alternative loop ringZ L is the product of an inner automorphism ofQ L and an authomorphism ofL. <s> BIB005 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Let ZL denote the integral alternative loop ring of a finite loop L. If L is an abelian group, a well-known result of G. Higman says that ±g,g € L are the only torsion units (invertible elements of finite order) in ZL . When L is not abelian, another obvious source of units is the set ±y~l gy of conjugates of elements of L by invertible elements in the rational loop algebra QL . H. Zassenhaus has conjectured that all the torsion units in an integral group ring are of this form. In the alternative but not associative case, one can form potentially more torsion units by considering conjugates of conjugates V^\y7(g7l)V\ and so forth. In this paper we prove that every torsion unit in an alternative loop ring over Z is ± a conjugate of a conjugate of a loop element. 
<s> BIB006 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> In this paper, the authors continue their investigation of loops which give rise to alternative loop rings. If the coefficient ring has characteristic 2, these loops turn out to form a surprisingly wide class, in contrast to the situation of characteristic ≠ 2. This paper describes many properties of this class, includes diverse examples of Moufang loops which are united by the fact that they have loop rings which are alternative, and discusses analogues in loop theory of a number of important group theoretic constructions. <s> BIB007 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> The purpose of this paper is to exhibit a class of loops which have strongly right alternative loop rings that are not alternative.. <s> BIB008 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Contents. Preface. Introduction. I. Alternative Rings. Fundamentals. The real quaternions and the Cayley numbers. Generalized quaternion and Cayley-Dickson algebras. Composition algebras. Tensor products. II. An Introduction to Loop Theory and to Moufang Loops. What is a loop? Inverse property loops. Moufang loops. Hamiltonian loops. Examples of Moufang loops. III. Nonassociative Loop Rings. Loop rings. Alternative loop rings. The LC property. The nucleus and centre. The norm and trace. IV. RA Loops. Basic properties of RA loops. RA loops have LC. A description of an RA loop. V. The Classification of Finite RA loops. Reduction to indecomposables. Finite indecomposable groups. Finite indecomposable RA loops. Finite RA loops of small order. VI. The Jacobson and Prime Radicals. Augmentation ideals. Radicals of abelian group rings. Radicals of loop rings. The structure of a semisimple alternative algebra. VII. Loop Algebras of Finite Indecomposable RA Loops. Primitive idempotents of commutative rational group algebras. Rational loop algebras of finite RA loops. VIII. Units in Integral Loop Rings. Trivial torsion units. Bicyclic and Bass cyclic units. Trivial units. Trivial central units. Free subgroups. IX. Isomorphisms of Integral Alternative Loop Rings. The isomorphism theorem. Inner automorphisms of alternative algebras. Automorphisms of alternative loop algebras. Some conjectures of H.J. Zassenhaus. X. Isomorphisms of Commutative Group Algebras. Some results on tensor products of fields. Semisimple abelian group algebras. Modular group algebras of abelian groups. The equivalence problem. XI. Isomorphisms of Loop Algebras of Finite RA Loops. Semisimple loop algebras. Rational loop algebras. The equivalence problem. XII. Loops of Units. Reduction to torsion loops. Group identities. The centre of the unit loop. Describing large subgroups. Examples. XIII. Idempotents and Finite Conjugacy. Central idempotents. Nilpotent elements. Finite conjugacy. Bibliography. Index. Notation. <s> BIB009 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> An RA loop is a loop whose loop rings, in characteristic different from 2, are alternative but not associative. In this paper, we show that every finite subloop H of normalized units in the integral loop ring of an RA loop L is isomorphic to a subloop of L. Moreover, we show that there exist units -yi in the rational loop algebra QL such that y ( ( 1 (.. 1 'H-l)'Y2) ... )-y-Yk C L. 
Thus, a conjecture of Zassenhaus which is open for group rings holds for alternative loop rings (which are not associative). <s> BIB010 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> In this note, the authors offer a specific construction of loops whose loop rings are right, but not left, alternative. <s> BIB011 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Abstract G. Higman has proved a classical result giving necessary and sufficient conditions for the units of an integral group ring to be trivial. In this paper we extend this result to loop rings of some diassociative loops. <s> BIB012 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> We prove the isomorphism problem for integral loop rings of finitely generated RA loops using a decomposition of the loop of units. Also we describe the finitely generated RA loops whose loops of units satisfy a certain property. <s> BIB013 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> The right alternative law implies the left alternative law in loop rings of characteristic other than 2. We also exhibit a loop which fails to be a right Bol loop, even though its characteristic 2 loop rings are right alternative. <s> BIB014 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Given a loopL and a commutative associative ringR with 1, one forms the loop ring RL just as one would form a group ring if L were a group. The theory of group rings has a long and rich history. In this paper, we sketch the history of loop rings which are not associative from early results of R. H. Bruck and L. J. Paige through the more recent discovery of alternative and right alternative rings and the work of O. Chein, D. A. Robinson and the author. 1. Origins Denition 1.1. A loop is an algebraic structureNL; O with a two-sided identity element such <s> BIB015 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> It is observed that the additive as well as multiplicative Jordan decompositions hold in alternative loop algebras of finiteRA loops and theRA loops for which the additive Jordan decomposition holds in the integral loop ring are characterized. Multiplicative Jordan decomposition (MJD) inZL, whereL is a finiteRA loop with cyclic centre is analysed, besides settling MJD for integral loop rings of allRA loops of order ≤32. It is also shown that for any finiteRA loopL,U (ZL) is an almost splittable Moufang loop. <s> BIB016 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Let L be an RA loop, that is, a loop whose loop ring in any characteristic is an alternative, but not associative, ring. We find necessary and sufficient conditions for the (Moufang) unit loop of RL to be solvable when R is the ring of rational integers or an arbitrary field. 
<s> BIB017 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Disclosed herein is an exhaust system for a two-cycle internal combustion engine including a rotatable crankshaft, first and second cylinders firing 180 DEG apart, and first and second exhaust ports communicating with the first and second cylinders respectively, the exhaust system comprising a substantially Y-shaped hollow exhaust pipe having first, second and third branches each having an open end and each being substantially equal in length to the distance an acoustical wave will travel through the exhaust pipe during an interval over which the crankshaft rotates through substantially ten to twenty degrees of rotation at a predetermined engine speed, the open ends of the first and second branches being adapted to be coupled to the exhaust ports of the first and second cylinders respectively. <s> BIB018 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Let L be an RA loop, that is, a loop whose loop ring in any characteristic is an alternative, but not associative, ring. Let f : L → {±1} be a homomorphism and for α = ∑αll in the integral loop ring ZL, define αf = ∑αlf(l)l-1. A unit u ∈ ZL is said to be f-unitary if uf = ±u-1. The set of all f-unitary units is a subloop of , the loop of all units in ZL. In this paper, we find necessary and sufficient conditions for to be normal in . <s> BIB019 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> Abstract Possession of a unique nonidentity commutator/associator is a property that dominates the theory of loops whose loop rings, while not associative, nevertheless satisfy an “interesting” identity. For instance, until now, all loops with loop rings satisfying the right Bol identity (such loops are called SRAR) have been known to have this property. In this paper, we present various constructions of other kinds of SRAR loops. <s> BIB020 </s> Literature Survey on Non-Associative Rings and Developments <s> Loop Rings (1944-2015) <s> The existence of loop rings that are not associative but which satisfy the Moufang or Bol identities is well known. Here we complete work started 25 years ago by establishing the existence of loop rings that satisfy any identity of "Bol–Moufang" type (without being associative). As it turns out, with one exception, loop rings satisfying an identity of Bol–Moufang type all satisfy a Moufang or Bol identity. We also highlight some similarities and differences in the consequences of several Bol–Moufang identities as they apply to loops and rings. <s> BIB021
Historically, the concept of a non-associative loop ring was, to our knowledge, first introduced in a paper by Bruck in 1944 BIB002. Non-associative loop rings appeared to be little more than a curiosity until the 1980s, when Goodaire found a class of non-associative Moufang loops whose loop rings satisfy the alternative laws. A loop ring can be defined as follows: given a loop L and a commutative associative ring R with 1, one forms the loop ring RL just as one would form a group ring if L were a group. In the construction of RL, the binary operations of addition "+" and multiplication "·" are defined, for α = ∑_{l∈L} α_l l and β = ∑_{l∈L} β_l l with α_l, β_l ∈ R, by α + β = ∑_{l∈L} (α_l + β_l) l and α · β = ∑_{l,m∈L} α_l β_m (lm). In 1946, Bruck revealed that the group ring result about the centre has a natural extension, establishing that the centre of a loop algebra is spanned by the conjugacy class sums. He also proved that a loop ring RL is associative (commutative) if and only if L is associative (commutative). In 1955, Paige BIB003 gave a striking example of the phenomenon that the associative and commutative identities are very special: in general, an identity in L does not lift to RL, and an identity on RL imposes much more than simply the same identity on L. He also proved that if R is a ring of characteristic relatively prime to 30 and L is a loop such that RL is commutative and power associative, then L is a group. In 1959, Hall did excellent work on right Moufang loops. A Moufang loop is a loop which satisfies the right Moufang identity ((xy)z)y = x(y(zy)). The Moufang identity is named for Ruth Moufang, who discovered it in geometrical investigations in the first half of the twentieth century. Later, in 1974, Chein showed that any group is a Moufang loop and exhibited a family of Moufang loops which are not associative. In 1983, Goodaire BIB009 proved that if the Moufang identity on L extends to a loop ring RL, then RL must be an alternative ring. In 1985, Chein and Goodaire presented a method of constructing all RA loops, one which begins with the class of abelian groups possessing 2-torsion. They further determined when two RA loops constructed by this method are isomorphic. In particular, they determined when two non-isomorphic groups with property LC can both be embedded as index-two sub-loops in the same RA loop. Subsequently, in 1986, Goodaire and Chein, working in collaboration, obtained more satisfying information about RA loops. Soon after, Goodaire and Parmenter BIB004 in 1986 demonstrated that certain well-known theorems concerning units in integral group rings hold more generally for integral loop rings which are alternative. Afterwards, in 1987, Goodaire and Parmenter endeavoured to establish conditions which guarantee the semi-simplicity of alternative loop rings with respect to any nil radical and with respect to the Jacobson radical. In 1988, Goodaire and Milies BIB005 settled the isomorphism problem for alternative loop rings: first, it was shown that a Moufang loop whose integral loop ring is alternative is determined up to isomorphism by that loop ring; secondly, it was shown that every normalized automorphism of an alternative loop ring ZL is the product of an inner automorphism of QL and an automorphism of L. Additionally, in 1989, Goodaire and Milies BIB006 established that every torsion unit in an alternative loop ring over Z is ± a conjugate of a conjugate of a loop element; here ZL denotes the integral alternative loop ring of a finite loop L.
It is a well-known result of Higman BIB001 that if L is an abelian group then ±g, g ∈ L, are the only torsion units (invertible elements of finite order) in ZL. When L is not abelian, another obvious source of units is the set ±γ⁻¹gγ of conjugates of elements of L by invertible elements in the rational loop algebra QL. In the alternative but not associative case, one can form potentially more torsion units by considering conjugates of conjugates γ₁⁻¹(γ₂⁻¹gγ₂)γ₁, and so forth. Furthermore, Chein and Goodaire BIB007 in 1990 continued their investigation of loops which give rise to alternative loop rings. If the coefficient ring has characteristic 2, these loops turn out to form a surprisingly wide class, in contrast to the situation of characteristic ≠ 2. Their paper described many properties of this class, included diverse examples of Moufang loops united by the fact that they have alternative loop rings, and discussed analogues in loop theory of a number of important group-theoretic constructions. In 1992, Vasantha Kandasamy introduced a new notion in loop rings KL, that of normal elements of the loop ring KL: an element α ∈ KL is called a normal element of KL if αKL = KLα; if every element of KL is normal, KL is called a normal loop ring, and normal sub-loop rings were also defined. Vasantha Kandasamy in 1994 investigated a notion called the strict loop ring: for a loop L and a commutative ring R with 1, the loop ring RL is called a strict loop ring if the set of all ideals of RL is ordered by inclusion. He also gave a class of loop rings which are not strict loop rings. Moreover, Goodaire and Robinson BIB008 in 1994 exhibited a class of loops which have strongly right alternative loop rings that are not alternative, and proved fundamental propositions which generalized the necessary and sufficient conditions for a loop to have a strongly right alternative loop ring. Besides this, in 1995, Vasantha Kandasamy studied the mod p envelope of associative structures, discussing the case of non-associative groups which are loops; that is, in his study he replaced groups by loops. Again in 1995, Goodaire and Milies further generalized and discussed a few examples of Moufang loops whose loop rings are alternative but not associative BIB009. Since that time, there has been a great deal of work devoted to the study of such loops and to their loop rings; in their paper the authors gave a brief discussion of those loops whose loop rings are alternative. In 1996, Goodaire and Milies BIB010 considered RA loops, an RA loop being a loop whose loop rings, in characteristic different from 2, are alternative but not associative. The authors showed that every finite sub-loop H of normalized units in the integral loop ring of an RA loop L is isomorphic to a sub-loop of L. They also showed that there exist units γ₁, …, γₖ in the rational loop algebra QL such that the iterated conjugate of H by these units is contained in L. Thus, a conjecture of Zassenhaus which is open for group rings holds for alternative loop rings (which are not associative). In addition to this, Goodaire and Robinson BIB011 in 1996 proposed a construction of loops L which have right alternative loop rings RL that are not left alternative. The construction produces loop rings RL which are Bol and hence right alternative, since one need merely set z equal to the identity element in the Bol identity (xy·z)y = x(yz·y). Such loop rings are called strongly right alternative, as they satisfy the more stringent condition.
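To make the remark about setting z equal to the identity explicit, the following short display (a standard observation, written out here in LaTeX notation for convenience) shows how the right Bol identity specialises to the right alternative law:

% Right Bol identity and its specialisation at z = 1
\[
  (xy \cdot z)\,y \;=\; x\,(yz \cdot y) \qquad\text{(right Bol identity)},
\]
\[
  (xy)\,y \;=\; x\,(yy) \qquad\text{(putting } z = 1 \text{: the right alternative law)}.
\]

This is why a strongly right alternative (Bol) loop ring is automatically right alternative, even when it fails to be left alternative.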
Barros and Juriaans BIB012 in 1996 noted that Higman had proved a classical result giving necessary and sufficient conditions for the units of an integral group ring to be trivial, and they extended this result to a bigger class of diassociative loops which includes abelian groups, groups with a unique non-identity commutator, RA loops, and other classes of loops. Again in 1997, Barros and Juriaans BIB013 proved the isomorphism problem for integral loop rings of finitely generated RA loops using a decomposition of the loop of units, and also described the finitely generated RA loops whose loops of units satisfy a certain property. In 1998, Kunen BIB014 showed that the right alternative law implies the left alternative law in loop rings of characteristic other than 2. He also exhibited a loop which fails to be a right Bol loop, even though its characteristic 2 loop rings are right alternative. Also in 1999, Goodaire BIB015 sketched the history of loop rings which are not associative, from the early results of Bruck and Paige through the more recent discovery of alternative and right alternative rings and the work of Chein, Robinson and Goodaire himself. In 2001, Bhandari and Kaila BIB016 observed that both the additive and the multiplicative Jordan decompositions hold in alternative loop algebras of finite RA loops, and characterized the RA loops for which the additive Jordan decomposition holds in the integral loop ring. The multiplicative Jordan decomposition (MJD) in ZL, where L is a finite RA loop with cyclic centre, was analysed, besides settling MJD for the integral loop rings of all RA loops of order ≤ 32. It was also shown that for any finite RA loop L, U(ZL) is an almost splittable Moufang loop. Again in 2001, Goodaire and Milies BIB017 considered an RA loop L, that is, a loop whose loop ring in any characteristic is an alternative but not associative ring, and investigated necessary and sufficient conditions for the (Moufang) unit loop of RL to be solvable when R is the ring of rational integers or an arbitrary field. Along the way, Goodaire and Milies BIB018 in 2001 observed that an RA loop has a torsion-free normal complement in the loop of normalized units of its integral loop ring, and examined whether an RA loop can be normal in its unit loop. Furthermore, in 2002, Nagy showed that the fundamental ideal of the loop ring FL is nilpotent if and only if the multiplication group is a p-group, where p is prime, L is a finite loop of p-power order and F is a field of characteristic p. The authors of BIB019 discussed the normality of f-unitary units in alternative loop rings; they found necessary and sufficient conditions for U_f(ZL), the set of all f-unitary units, to be normal in U(ZL), the loop of all units in ZL. Goodaire in 2007 described some of the advances in the theory of loops whose loop rings satisfy interesting identities; he wrote this paper in memory of his friend Robinson, with whom he did research. Again in 2007, Goodaire discussed advances in the theory of loops whose loop rings satisfy interesting identities that had taken place primarily since 1998. The major emphasis was on Bol loops that have strongly right alternative loop rings and on Jordan loops, a hitherto largely ignored class of commutative loops some of whose loop rings satisfy the Jordan identity (x²y)x = x²(yx). He raised a number of open questions and included several suggestions for further research.
Doostie and Pourfaraj [40] in 2007 studied a pair of finite rings (defined in terms of odd primes p, p₁, p₂ and positive integers i, m, n) and proved that the first is commuting regular while the second contains commuting regular elements as well as idempotents, where m < n, (m, n) = 1 and (m − 1, n) = 1. They also defined the commuting regular semigroup ring, the commuting regular loop ring and the commuting regular groupoid ring. In 2008, Chein et al. established some connections between loops whose loop rings, in characteristic 2, satisfy the Moufang identities and loops whose loop rings, in characteristic 2, satisfy the right Bol identities. Again in 2008, Chein and Goodaire BIB020 discussed how the possession of a unique non-identity commutator or associator is a property that dominates the theory of loops whose loop rings, while not associative, nevertheless satisfy an interesting identity. Until then, all loops with loop rings satisfying the right Bol identity (such loops are called SRAR) had been known to have this property. They presented various constructions of other kinds of SRAR loops, considered Bol loops whose left nucleus is an abelian group of index 2, showed that the loop rings of some such loops are strongly right alternative, and exhibited various SRAR loops with more than two commutators. In 2009, Dart and Goodaire BIB021 established the existence of loop rings that are not associative but satisfy identities of Bol-Moufang type. It turned out that, with one exception, loop rings satisfying an identity of Bol-Moufang type all satisfy a Moufang or Bol identity. They also highlighted some similarities and differences in the consequences of several Bol-Moufang identities as they apply to loops and rings. Moreover, in 2012, Giraldo Vergara discussed in detail the development of the theory of loop rings, which has intrigued mathematicians from different areas. He also mentioned that in recent years the theory has developed considerably; as an example, a complete description of the loop of invertible elements of the Zorn algebra is now known. Recently, in 2014, Jayalakshmi and Manjula investigated the case where the ring has characteristic 2 and extended results to alternative loop rings by proving that the augmentation ideal of a loop of order 2n in characteristic 2 is a nilpotent ideal (of dimension 2n − 1). This, of course, means that virtually all the familiar radicals of alternative rings coincide with the augmentation ideal. Also in 2014, Jayalakshmi and Manjula discussed the fact that the right alternative law implies the left alternative law in loop rings of characteristic other than 2, and showed that there exists a loop which fails to be an extra loop, even though its characteristic 2 loop rings are right alternative.
Literature Survey on Non-Associative Rings and Developments <s> LA-Ring (2006-2016) <s> In this paper we give the notion of near left almost ring (ab- breviated as nLA-ring) (R, +, ·), i.e. (R, +) is an LA-group, (R, ·) is an LA- semigroup and one distributive property of '·' over '+' holds, where both the binary operations "+" and "·" are non-associative. An nLA-ring is a general- ization of an LA-ring and footed parallel to the near ring. Mathematics Subject Classification: 16A76, 20M25, 20N02 <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> LA-Ring (2006-2016) <s> S.J. Choi, P. Dheena and S. Manivasan studied property of quasi- ideals of P-regular nearring. In this page we study property of quasi-ideals of P-regular nLA-ring. <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> LA-Ring (2006-2016) <s> In this paper, we study left ideals, left primary and weakly left primary ideals in LA-rings. Some characterizations of left primary and weakly left primary ideals are obtained. Moreover, we investigate relationships left primary and weakly left primary ideals in LA-rings. Finally, we obtain necessary and sufficient conditions of a weakly left primary ideal to be a left primary ideals in LA- rings. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> LA-Ring (2006-2016) <s> The aim of this paper is to characterize left almost rings by congrunces. We show that each homomophism of left amost rings defines a congrucne relation on left almost rings. We then discuss quotient left almiost rings. At the end we prove analogues of the ismorphism theorem for left almost rings. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> LA-Ring (2006-2016) <s> Molodtsov developed the theory of soft sets which can be seen as an effective tool to deal with uncertainties. Since the introduction of this concept, the application of soft sets has been restricted to associative algebraic structures (groups, semigroups, associative rings, semi-rings etc). Acceptably, though the study of soft sets, where the base set of parameters is a commutative structure, has attracted the attention of many researchers for more than one decade. But on the other hand there are many sets which are naturally endowed by two compatible binary operations forming a non-associative ring and we may dig out examples which investigate a non-associative structure in the context of soft sets. Thus it seems natural to apply the concept of soft sets to non-commutative and non-associative structures. In present paper, we make a new approach to apply Molodtsov’s notion of soft sets to LA-ring (a class of non-associative ring). We extend the study of soft commutative rings from theoretical aspect. <s> BIB005
After the concept of loop rings (1944), a new class of non-associative rings was introduced by Yusuf in 2006. Although the concept of an LA-ring was given in 2006, its systematic study and further development started in 2010 with the paper of Shah and Rehman. It is worth mentioning that this new class of non-associative rings, named left almost rings (LA-rings), was introduced after a gap of six decades since the introduction of loop rings. The LA-ring is actually an offshoot of the LA-semigroup and the LA-group. It is a non-commutative and non-associative structure which, owing to its peculiar characteristics, has gradually been emerging as a useful non-associative class that can reasonably be expected to enhance non-associative ring theory. By an LA-ring we mean a non-empty set R with at least two elements such that (R, +) is an LA-group, (R, ·) is an LA-semigroup, and both left and right distributive laws hold (the defining identities are collected, for convenience, in the display after this paragraph). In the same paper, the authors discussed the LA-ring of finitely nonzero functions, which is in fact a generalization of a commutative semigroup ring. They generalized the structure of a commutative semigroup ring (the ring of a semigroup S over a ring R, represented as R[X; S]) to a non-associative LA-ring of a commutative semigroup S over an LA-ring R, represented as R[X_s; s ∈ S], consisting of finitely nonzero functions; nevertheless, it also possesses associative ring structures. Furthermore, they also discussed LA-ring homomorphisms. Along the way, the first ever definition of an LA-module over an LA-ring was given by Shah and Rehman in the same paper. Later in 2010, Shah et al. introduced the notion of topological LA-groups and topological LA-rings, which are generalizations of topological groups and topological rings respectively, and extended some characterizations of topological groups and topological rings to this setting. In 2011, Shah and Shah established some basic and structural facts about LA-rings which will be useful for future research on LA-rings. They proved basic results such as: if R is an LA-ring then R cannot be idempotent, and (a + b)² = (b + a)² for all a, b ∈ R; if an LA-ring R has a left identity e then e + e = e, e + 0 = e and e = (e + 0)²; and if R is a cancellative LA-ring with left identity e then e + e = 0 and thus a + a = 0 for all a ∈ R. An interesting result is that if R is an LA-ring with left identity e, then right distributivity implies left distributivity. Also in 2011, Shah et al. promoted the notion of an LA-module over an LA-ring, defined in the aforementioned paper, and further established the substructures, operations on substructures and the quotient of an LA-module by an LA-submodule. They also indicated the non-similarity of an LA-module to the usual notion of a module over a commutative ring. Moreover, in 2011, Shah, Rehman and Raees BIB001 generalized the concept of an LA-ring by introducing the notion of a near left almost ring (abbreviated as nLA-ring) (R, +, ·), in which (R, +) is an LA-group, (R, ·) is an LA-semigroup and one distributive property of "·" over "+" holds, where both binary operations "+" and "·" are non-associative. In continuation of BIB001, Shah, Ali and Rehman in 2011 characterized nLA-rings through their ideals. They showed that the sum of ideals is again an ideal, and established a necessary and sufficient condition for an nLA-ring to be a direct sum of its ideals. Furthermore, they observed that the product of ideals is just a left ideal.
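For convenience, the defining identities referred to in the paragraph above can be collected in one display (a sketch in LaTeX notation, assuming the left invertive law that is standardly used to define LA-semigroups and LA-groups; the notation is ours rather than the survey's):

% Defining identities of an LA-ring (R, +, ·), for all a, b, c in R
\[
  (a + b) + c = (c + b) + a, \qquad (a \cdot b) \cdot c = (c \cdot b) \cdot a,
\]
\[
  a \cdot (b + c) = a \cdot b + a \cdot c, \qquad (a + b) \cdot c = a \cdot c + b \cdot c,
\]

where, in addition, (R, +) possesses a left identity 0 and left inverses so that it forms an LA-group; neither operation is assumed to be commutative or associative.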
In 2012, Shah and Rehman explored some notions of ideals and M-systems in LA-rings and characterized LA-rings through some properties of their ideals. Moreover, they established that every subtractive subset of an LA-ring R is semi-subtractive, and that every quasi-prime ideal of an LA-ring R with left identity e is semi-subtractive. Also in 2012, Shah et al. investigated intuitionistic fuzzy normal subrings in non-associative rings, extending the notions to a class of non-associative rings, i.e., LA-rings. They established the notion of intuitionistic fuzzy normal LA-subrings of LA-rings. Specifically, they proved that an IFS A = (µ_A, γ_A) is an intuitionistic fuzzy normal LA-subring of an LA-ring R if and only if the fuzzy sets µ_A and γ̄_A are fuzzy normal LA-subrings of R, and also that A = (µ_A, γ_A) is an intuitionistic fuzzy normal LA-subring of an LA-ring R if and only if the fuzzy sets µ̄_A and γ_A are anti-fuzzy normal LA-subrings of R. In 2013, a notable development was made by Rehman et al., who established the existence of non-trivial LA-rings by producing explicit examples with the mathematical program Mace4. With the existence of non-trivial LA-rings, the authors were ultimately able to remove the ambiguity about the associative multiplication, because the first example of an LA-ring given by Yusuf was trivial. Also in 2013, Gaketem BIB002 studied the properties of quasi-ideals of P-regular nLA-rings, the nLA-ring being in fact a generalization of the LA-ring. In 2014, Alghamdi and Sahraoui broadened the concept of an LA-module given in the earlier work by constructing a tensor product of LA-modules. Although LA-groups and LA-modules need not be abelian, the new construction behaves like the standard definition of the tensor product of usual modules over a ring; they also extended some simple results from the ordinary tensor product to the new setting. In addition, Yiarayong BIB003 in 2014 studied left ideals, left primary and weakly left primary ideals in LA-rings. Some characterizations of left primary and weakly left primary ideals were obtained. Moreover, the author investigated the relationships between left primary and weakly left primary ideals in LA-rings and, finally, obtained necessary and sufficient conditions for a weakly left primary ideal to be a left primary ideal in LA-rings. Recently, in 2015, Hussain and W. Khan BIB004 characterized LA-rings by congruence relations. They showed that each homomorphism of LA-rings defines a congruence relation on LA-rings, then discussed quotient LA-rings and, at the end, proved analogues of the isomorphism theorems for LA-rings. Also, Shah and Asima Razzaque in their paper discussed soft non-associative rings and explored some of their algebraic properties. The notions of soft M-systems, soft P-systems, soft I-systems, soft quasi-prime ideals, soft quasi-semiprime ideals, soft irreducible and soft strongly irreducible ideals were introduced and several related properties were investigated. Moreover, in 2016, Shah et al. BIB005 took a step forward in applying the concepts of soft set theory to LA-rings by introducing soft LA-rings, soft ideals, soft prime ideals, idealistic soft LA-rings and soft LA-homomorphisms, and provided a number of examples to illustrate these concepts.
Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> INTRODUCTION <s> Despite the ongoing discussion of the recent years, there is no agreed definition of a ‘smart city’, while strategic planning in this field is still largely unexplored. Inspired by this, the purpose of this paper was to identify the forces shaping the smart city conception and, by doing so, to begin replacing the currently abstract image of what it means to be one. The paper commences by dividing the recent history of smart cities into two large sections – urban futures and the knowledge and innovation economy. The urban futures strand shows that technology has always played an important role in forward-looking visions about the city of the future. The knowledge and innovation economy strand shows that recent technological advancements have introduced a whole new level of knowledge management and innovation capabilities in the urban context. The paper proceeds to explicate the current technology push and demand pull for smart city solutions. On one hand, technology advances rapidly and creates a booming market of <s> BIB001 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> INTRODUCTION <s> Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems’ architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., Reo coordination language. With rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system assuring, by design scalability and the interoperability, correctness of component cooperation. <s> BIB002
Some urban researchers use the term "global city region" to refer to "a new metropolitan form characterised by sprawling polycentric networks of urban centres …" Such networks are becoming identified with both the potential and the reality of 'smart' city infrastructures of connected transportation, financial, energy, health, information and cultural systems. There are numerous definitions of a "smart city" across the literature with little consensus BIB001, and there are many technical issues involved BIB002. For the purposes of this paper, the definition provided by ISO/IEC (2014) is considered an appropriately inclusive one, that is: The "smartness" of a city describes its ability to bring together all its resources, to effectively and seamlessly achieve the goals and fulfil the purposes it has set itself… [It] enables the integration and interoperability of city systems in order to provide value, both to the city as a whole, and to the individual citizen. This integration further enables potential synergies to be exploited and the city to function holistically, and to facilitate innovation and growth. In the context of such values and goals, there is a global movement in the implementation of smart cities, which was catalysed by the Global Forum World Foundation for Smart Communities in 1997. In particular, coordinated strategies and standards for smart city implementation are increasingly pervasive and are being adopted at national and industry levels. For example, the UK Department of Business, Innovation and Skills commissioned BSI in 2012 to develop a standards strategy for smart cities in order to accelerate, and minimise the risks in, the implementation of smart cities in the UK. In 2011, the European Commission initiated the European Innovation Partnership on Smart Cities and Communities (EC 2015). In China, comparable initiatives have been established, such as the China Strategic Alliance of Smart City Industrial Technology Innovation. In the United States, the Federal Smart Cities and Communities Task Force is seeking to embed new digital technologies into city and community infrastructures and services. The Australian government similarly launched a national Smart Cities Plan in 2016 aimed at positioning Australian cities to succeed in the digital economy (Australian Government 2017). Among individual cities themselves, there are examples of smart city plans being developed at local and municipal government level. One example is the GrowSmarter (2015) initiative, a collaborative EU-funded smart city project focusing on sustainable solutions to economic, social and environmental issues. The project involves the so-called "Lighthouse Cities" of Stockholm, Cologne and Barcelona. It aims to integrate and demonstrate twelve smart solutions to energy, mobility and infrastructure in collaboration with twenty industrial partners and, importantly, is intended to create a platform for sharing knowledge and experience. Industry involvement in smart city developments is especially key to such partnerships, and to supporting the technological enablers and connected platforms that underpin smart city infrastructures. Multinational communications and IT companies such as Cisco and Nokia are among the industry players developing strategic White Papers about the platform components of a successful smart city and partnering with cities on pilot implementations.
Across these standards and strategies is a shared vision of positioning communities at all scales to have equitable access to connected smart services that can enhance sustainability and quality of life, improve health and safety, and support economic prosperity. Smart cities can help enable the virtual collaboration of communities. In the context of this paper, the citizens of a smart city are potential participants in its governance and in the evolving development of smarter services, including those related to accessing and preserving cultural heritage and the arts. At present, however, there are few visible examples of smart cultural initiatives integrated with smart city developments at either a pilot or a conceptual level. There is consequently a need to understand how populations can be supported by local capacities and smarter cultural cities and regions, using advanced information systems, visualisation, and applications.
Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> SMART CULTURAL HERITAGE <s> This paper presents the ARCHEOGUIDE project (Augmented Reality-based Cultural Heritage On-site GUIDE). ARCHEOGUIDE is an IST project, funded by the EU, aiming at providing a personalized electronic guide and tour assistant to cultural site visitors. The system provides on-site help and Augmented Reality reconstructions of ancient ruins, based on user's position and orientation in the cultural site, and realtime image rendering. It incorporates a multimedia database of cultural material for on-line access to cultural data, virtual visits, and restoration information. It uses multi-modal user interfaces and personalizes the flow of information to its user's profile in order to cater for both professional and recreational users, and for applications ranging from archaeological research, to education, multimedia publishing, and cultural tourism. This paper presents the ARCHEOGUIDE system and the experiences gained from the evaluation of an initial prototype by representative user groups at the archeological site of Olympia, Greece. <s> BIB001 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> SMART CULTURAL HERITAGE <s> Cultural Heritage Areas together Context-Aware Systems present a great opportunity where the Ambient Intelligence (AmI) paradigm can be successfully applied. This paper deals with the design of an AmI-based Information Systems, based on NFC (Near Field Communication) technology, developed to access Cultural Heritage Areas of particular interest, in which different objects of artistic interest can be interfaced in a proper virtual way without affecting the historical environment. The application of non-invasive technology NFC improves the context-awareness of the implemented system and allows users to receive customized information in a transparent way, through the most suitable device, allowing a realistic experience. The proposed AmI-based Information System is particular related to mobile and safe cultural access in the context of Villa Mondragone, an ancient Renaissance Villa. We outline a real system, called SMART VILLA, based on a set of mobile applets, each interfaced with a NFC based subsystem, related to particular sites (SMART BIBLIO for ancient books, SMART ROOM for particular rooms and SMART GARDEN for surrounding historical gardens). <s> BIB002 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> SMART CULTURAL HERITAGE <s> Abstract In this paper, we present an ongoing project, named Talking Museum and developed within DATABENC - a high technology district for Cultural Heritage management. The project exploits the Internet of Things technologies in order to make objects of a museum exhibition able to “talk” during users’ visit and capable of automatically telling their story using multimedia facilities. In particular, we have deployed in the museum a particular Wireless Sensor Network that, using Bluetooth technology, is able to sense the surrounding area for detecting user devices’ presence. Once a device has been detected, the related MAC address is retrieved and a multimedia story of the closest museum objects is delivered to the related user. Eventually, proper multimedia recommendation techniques drive users towards other objects of possible interest to facilitate and make more stimulating the visit. 
As case of study, we show an example of Talking museum as a smart guide of sculptures’ art exhibition within the Maschio Angioino castle, in Naples (Italy). <s> BIB003 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> SMART CULTURAL HERITAGE <s> The relationship between cultural heritage domain and new technologies has always been complex, dialectical and often inspired by the human desire to induce these spaces not created for that purpose, to pursue technological trends, eventually offering to the end-users devices and innovative technologies that could become a ‘dead weight’ during their cultural experiences. However, by means of innovative technological applications and location-based services it is possible to shorten the distance between cultural spaces and their visitors, nowadays determined by the purely aesthetic and essentially passive fruition of cultural objects. This paper presents the design and implementation of a novel multipurpose system for creating single smart spaces , a new concept of intelligent environment, that relies on innovative sensors board named smart crickets and an ad hoc proximity strategy; by following the Internet of Things paradigm the proposed system is able to transform a cultural space in a smart cultural environment to enhance the enjoyment and satisfaction of the involved people. To assess the effectiveness of our solution, we have experienced two real case studies, the first one situated within an art exhibition indoor, and the second one concerning an historical building outdoor. In this way, technology can become a mediator between visitors and fruition, an instrument of connection between people, objects and spaces to create new social, economic and cultural opportunities. <s> BIB004
The concept of "smart cultural heritage," according to researchers of the EU-funded DATABENC (Distretto ad Alta Tecnologia per i Beni Culturali) initiative, is about digitally connecting institutions, visitors, and objects in dialogue. Smart heritage focuses on adopting more participatory and collaborative approaches, making cultural data freely available (open), and consequently increasing the opportunities for interpretation, digital curation, and innovation. This offers potentially unprecedented access to cultural artefacts and experiences across distances, in which cultural consumers are no longer passive recipients BIB003 BIB002 BIB004 (Garcia-Crespo 2016). As described in the Europeana White Paper on smart cities, "cultural heritage defines our identity and our communities. Sharing our past in smart city initiatives has the potential to promote social cohesion and increase innovation and tourism." In this way, smart cultural heritage is strongly associated with the identity of place and communities through smart technologies, knowledge and participation. It is not surprising that the cultural heritage sector has been working within smart requirements for many years, due to its inseparable association with location and identity (Chianese et al. 2013). Projects such as ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site Guide), the prototype multimedia guide developed for the archaeological site of ancient Olympia in Greece, provided augmented reconstructions of the ancient ruins together with audio information BIB001. ARCHEOGUIDE supported a context-aware visitor application, i.e., a location-based application in which a user's location is identified through a sensing device and the user is provided with information bound to that specific location and the physical objects in the surroundings. The development of context-aware services has been pervasive in demonstrator applications in the cultural heritage area, not least those focused on forms of digital data and user-defined interactions BIB004. With the socio-technical rise of the mobile phone, museums and galleries worldwide developed mobile apps that visitors could download onto their own devices to create self-guided tours. The National Gallery in London was one of the first museums to develop such an app, LoveArt, an iPhone app launched in 2009.
Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Smart Cities and Cultural Heritage -A Review of Developments and Future Opportunities <s> Cultural Heritage Areas together Context-Aware Systems present a great opportunity where the Ambient Intelligence (AmI) paradigm can be successfully applied. This paper deals with the design of an AmI-based Information Systems, based on NFC (Near Field Communication) technology, developed to access Cultural Heritage Areas of particular interest, in which different objects of artistic interest can be interfaced in a proper virtual way without affecting the historical environment. The application of non-invasive technology NFC improves the context-awareness of the implemented system and allows users to receive customized information in a transparent way, through the most suitable device, allowing a realistic experience. The proposed AmI-based Information System is particular related to mobile and safe cultural access in the context of Villa Mondragone, an ancient Renaissance Villa. We outline a real system, called SMART VILLA, based on a set of mobile applets, each interfaced with a NFC based subsystem, related to particular sites (SMART BIBLIO for ancient books, SMART ROOM for particular rooms and SMART GARDEN for surrounding historical gardens). <s> BIB001 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Smart Cities and Cultural Heritage -A Review of Developments and Future Opportunities <s> Cultural Heritage represents a world wide resource of inestimable value, attracting millions of visitors every year to monuments, museums and art exhibitions. A fundamental aspect of this resource is represented by its fruition and promotion. Indeed, to achieve a fruition of a cultural space that is sustainable, it is necessary to realize smart solutions for visitors' interaction to enrich their visiting experience. In this paper we present a service-oriented framework aimed to transform indoor Cultural Heritage sites in smart environments, which enforces a set of multimedia and communication services to support the changing of these spaces in an indispensable dynamic instrument for knowledge, fruition and growth for all the people. Following the Internet of Things paradigm, the proposed framework relies on the integration of a Wireless Sensor Network (WSN) with Wi-Fi and Bluetooth technologies to identify, locate and support visitors equipped with their own mobile devices. <s> BIB002 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Smart Cities and Cultural Heritage -A Review of Developments and Future Opportunities <s> The areas of application for augmented reality technology are heterogeneous ::: but the content creation tools available are usually single-user desktop ::: applications. Moreover, there is no online development tool that enables the ::: creation of such digital content. This paper presents a framework for the ::: creation of Cultural Entertainment Systems and Augmented Reality, employing ::: cloud-based technologies and the interaction of heterogeneous mobile ::: technology in real time in the field of mobile tourism. The proposed system ::: allows players to carry out a series of games and challenges that will ::: improve their tourism experience. The system has been evaluated in a real ::: scenario, obtaining promising results. <s> BIB003
Such applications allow users to view different layers of a map at various scales and across thematic layers, and to change the visual appearance of the map, e.g., Google Earth applications. Absent among the enabling technologies was evidence of the use of cloud computing platforms, although there are proposed smart cultural frameworks in the literature that include cloud platforms BIB001 BIB003. Across the sample studies, it was difficult to determine the use of cloud infrastructure due to the lack of available technical literature on the architecture of the systems. In the cultural heritage domain, the Europeana Cloud is one of the larger cloud-based infrastructure projects in operation, hosting several million digital items and supporting data services arising from Europeana Open Data and associated programs. IoT, a term nearly synonymous with smart cities, remains an evolving technology and has not reached an operational level of integration in smart cultural heritage, although there is potential for IoT to underpin various smart cultural services BIB002. The EU funded DATABENC (2014)
Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Enabling technologies <s> Invisible, attentive and adaptive technologies that provide tourists with relevant services and information anytime and anywhere may no longer be a vision from the future. The new display paradigm, stemming from the synergy of new mobile devices, context-awareness and AR, has the potential to enhance tourists’ experiences and make them exceptional. However, effective and usable design is still in its infancy. In this publication we present an overview of current smartphone AR applications outlining tourism-related domain-specific design challenges. This study is part of an ongoing research project aiming at developing a better understanding of the design space for smartphone context-aware AR applications for tourists. <s> BIB001 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Enabling technologies <s> Augmented Reality (AR) Mobile Apps are a usefull technology for Cultural Heritage Communication. The interdisciplinary field between Computation, Interaction Design and Heritage Interpretation is allowing the development of innovative case studies. Through an online and offline observation we present a state of the art review of Augmented Reality Apps in Cultural Historical Heritage Communication placing AR as another tool in the broader context of Heritage Interpretation. <s> BIB002
At the time of this paper, there are no published standards specific to smart cultural heritage projects as there are for smart cities, such as those developed by ISO/IEC or the IEEE Smart Cities Initiative (IEEE 2017). However, there are some advances towards developing platforms for smart cultural heritage utilising the enabling technologies that underpin smart city implementation. Among the enabling technologies, mobile broadband is pervasive across the case study examples, both in use and in access. The cultural heritage sector has been an early adopter of mobile technologies for user engagement and the visitor experience through the development of mobile apps. Mobile broadband is also the most accessible and available of the technologies to the broadest spectrum of users, irrespective of their location BIB002 BIB001. Wireless Sensor Networks (WSNs) are another layer of infrastructure that is increasingly common in supporting different smart scenarios. Smartphone tours and context-aware devices figure in most of the examples, such as the indoor digital trails of the O-Device and Journey of Inspiration, and the city trails of Paisatge and StreetMuseum. The application of NFC technology provides a more fine-grained context-awareness that allows users to receive customised information and a more realistic experience in close proximity, e.g., users can read or listen to comprehensive guides about landmarks they discover, while watching animations or playing games. The Pen at Cooper Hewitt builds on NFC reading technology to enable personalised and individual interaction. The use of BLE-enabled beacons across the Canadian Museum for Human Rights supports a digital trail that is layered with narrative and augmented reality. The museum's 120 universal access points also provide improved visitor navigation and accessibility for sight- and hearing-impaired visitors. Sign language, for example, is available through a dedicated app.
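As a purely illustrative sketch of the proximity-based content pattern on which the NFC and BLE beacon deployments described above rely (it is not the implementation of any of the museums mentioned; the beacon identifiers, content records and signal threshold below are hypothetical), a client application might select location-bound content from the strongest nearby beacon roughly as follows:

from dataclasses import dataclass

@dataclass
class BeaconSighting:
    beacon_id: str  # identifier broadcast by a BLE beacon
    rssi: int       # received signal strength in dBm (higher means closer)

# Hypothetical mapping from beacon identifiers to location-bound content.
CONTENT_BY_BEACON = {
    "gallery-entrance": {"title": "Welcome", "media": "intro_audio.mp3"},
    "exhibit-042": {"title": "Exhibit 42 story", "media": "exhibit_042_ar.json"},
}

RSSI_THRESHOLD = -70  # ignore beacons that do not appear reasonably close

def select_content(sightings):
    """Return the content bound to the strongest (closest) beacon above the threshold."""
    nearby = [s for s in sightings if s.rssi >= RSSI_THRESHOLD]
    if not nearby:
        return None
    closest = max(nearby, key=lambda s: s.rssi)
    return CONTENT_BY_BEACON.get(closest.beacon_id)

if __name__ == "__main__":
    scan = [BeaconSighting("exhibit-042", -58), BeaconSighting("gallery-entrance", -81)]
    print(select_content(scan))  # content bound to "exhibit-042"

In a production deployment the beacon-to-content mapping, signal smoothing and accessibility variants (such as sign-language media) would typically be managed by a back-end service, but the core selection logic remains essentially this simple.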
Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Visualisation technologies <s> Abstract Museums are interested in the digitizing of their collections not only for the sake of preserving the cultural heritage, but to also make the information content accessible to the wider public in a manner that is attractive. Emerging technologies, such as VR, AR and Web3D are widely used to create virtual museum exhibitions both in a museum environment through informative kiosks and on the World Wide Web. This paper surveys the field, and while it explores the various kinds of virtual museums in existence, it discusses the advantages and limitation involved with a presentation of old and new methods and of the tools used for their creation. <s> BIB001 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Visualisation technologies <s> Invisible, attentive and adaptive technologies that provide tourists with relevant services and information anytime and anywhere may no longer be a vision from the future. The new display paradigm, stemming from the synergy of new mobile devices, context-awareness and AR, has the potential to enhance tourists’ experiences and make them exceptional. However, effective and usable design is still in its infancy. In this publication we present an overview of current smartphone AR applications outlining tourism-related domain-specific design challenges. This study is part of an ongoing research project aiming at developing a better understanding of the design space for smartphone context-aware AR applications for tourists. <s> BIB002 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Visualisation technologies <s> It is of paramount importance that cultural heritage professionals are directly involved in the design of digitally augmented experiences in their museum spaces. We propose an approach based on a catalogue of reusable narrative and interaction strategies with step-by-step instructions on how to adapt and instantiate them for a specific museum and type of visitors. This work is conducted in the context of the European-funded project meSch. <s> BIB003
Forms of geovisualisation, from floor guides to location points and thematic maps, are pervasive and essential features of the applications and services across the selected case studies. This underpins a primary characteristic of smart environments, that of location-awareness relating to the user, place, and surrounding objects at any one time. Geovisualisation also reinforces other visualisation technologies such as AR, which is bound to a location point and to wayfinding activities. 3D visualisation, including computer-generated objects, figured prominently in those examples with AR applications and immersive environments, such as PureLand 360 and Ai WeiWei 360, offering rich and layered forms of information. The digital 3D models in the preservation and reconstruction examples, Rekrei and Zamani, highlight that the protection of heritage and culture must remain a high priority for all cultures. These online collections of 3D reconstructions representing endangered or destroyed artefacts, cultural landmarks and monuments bring new resonance to the role that "virtual museums" can play in terms of knowledge and the wider accessibility of cultural heritage BIB001. The worldwide engagement of thousands of users supporting Rekrei's mission, in particular, also profiles the potential role of citizens in collectively protecting global cultural heritage, and shows that we do not need to be physically in the same place to participate in this goal. The pervasiveness of AR and/or AR elements in the selected projects reflects the growing adoption of this technology in the cultural heritage sector as a popular visualisation paradigm, extending from tourism applications BIB002 to educational and exhibition spaces (Cassella & Coelho 2013, Garcia-Crespo 2016) BIB003. Museums, galleries and other cultural organisations have been trialling AR systems for several years, as in the examples of ARCHEOGUIDE and the National Science Museum in Tokyo, in which AR technology was used to overlay "flesh" onto the dinosaur skeletons on display. The Skin & Bones AR app at the National Museum of Natural History has advanced this use, bringing dinosaur skeletons and fossils alive through a mix of AR, animation and gamification. The Museum has also provided the opportunity for children to use the AR app at home with a downloadable resource that simulates the museum experience. The potential of AR in outdoor settings is exemplified by the Museum of London's highly successful StreetMuseum app, which has been available as a downloadable app for over five years. The ROM Ultimate Dinosaur exhibition brought dinosaurs to life in the city of Toronto at bus shelters and public spaces, with signposted instructions on how to activate the AR experience. RecoVR Mosul and Ai WeiWei both use AR elements in that they use the real environment as a background with overlaid information on top. The applications are themselves accessible through web browsers as photorealistic 360° panoramas, but can alternatively be experienced as 3D immersions in virtual reality (VR). The potential intersections of VR and smart environments are yet to be explored further.
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> This paper presents a system-level power management technique for energy savings of event-driven application. We present a new predictive system-shutdown method to exploit sleep mode operations for energy saving. We use an exponential-average approach to predict the upcoming idle period. We introduce two mechanisms, prediction-miss correction and prewake-up, to improve the hit ratio and to reduce the delay overhead. Experiments on four different event-driven applications show that our proposed method achieves high hit ratios in a wide range of delay overheads, which results in a high degree of energy with low delay penaties. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> Corporate energy usage policy is typically difficult to design and impossible to enforce. The problem stems from the fact that there are several complexities in this enforcement and passive tools such as Energy star are naive; they do not cater for corporate policies. The result of this is an uncontrolled usage of computers in the corporate culture resulting in significant effects on the environment. This is in addition to an effect on the economy due to an increase in the corporate electricity bills. In this paper, we propose the use of a multiagent-based approach comprising of an intelligent self-organizing system managing the energy usage policy. For validation, using an agent-based model we simulate the proposed intelligent self-organizing architecture for monitoring corporate energy utilization. Extensive simulation experiments demonstrate the effectiveness of the proposed approach. <s> BIB002 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> Energy consumption of the Information and Communication Technology (ICT) sector has grown exponentially in recent years. A major component of the today’s ICT is constituted by the data centers which have experienced an unprecedented growth in their size and population, recently. The Internet giants like Google, IBM and Microsoft house large data centers for cloud computing and application hosting. Many studies, on energy consumption of data centers, point out to the need to evolve strategies for energy efficiency. Due to large-scale carbon dioxide (\(\mathrm{CO}_2\)) emissions, in the process of electricity production, the ICT facilities are indirectly responsible for considerable amounts of green house gas emissions. Heat generated by these densely populated data centers needs large cooling units to keep temperatures within the operational range. These cooling units, obviously, escalate the total energy consumption and have their own carbon footprint. In this survey, we discuss various aspects of the energy efficiency in data centers with the added emphasis on its motivation for data centers. In addition, we discuss various research ideas, industry adopted techniques and the issues that need our immediate attention in the context of energy efficiency in data centers. <s> BIB003 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> Interest has been growing in powering datacenters (at least partially) with renewable or "green" sources of energy, such as solar or wind. 
However, it is challenging to use these sources because, unlike the "brown" (carbon-intensive) energy drawn from the electrical grid, they are not always available. This means that energy demand and supply must be matched, if we are to take full advantage of the green energy to minimize brown energy consumption. In this paper, we investigate how to manage a datacenter's computational workload to match the green energy supply. In particular, we consider data-processing frameworks, in which many background computations can be delayed by a bounded amount of time. We propose GreenHadoop, a MapReduce framework for a datacenter powered by a photovoltaic solar array and the electrical grid (as a backup). GreenHadoop predicts the amount of solar energy that will be available in the near future, and schedules the MapReduce jobs to maximize the green energy consumption within the jobs' time bounds. If brown energy must be used to avoid time bound violations, GreenHadoop selects times when brown energy is cheap, while also managing the cost of peak brown power consumption. Our experimental results demonstrate that GreenHadoop can significantly increase green energy consumption and decrease electricity cost, compared to Hadoop. <s> BIB004 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> OpenStack is a massively scalable open source cloud operating system that is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. OpenStack provides series of interrelated projects delivering various components for a cloud infrastructure solution as well as controls large pools of storage, compute and networking resources throughout a datacenter that all managed through a dashboard(Horizon) that gives administrators control while empowering their users to provision resources through a web interface. In this paper, we present a comparative study of Cloud Computing Platform such as Eucalyptus, Openstack, CloudStack and Opennebula which is open source software, cloud computing layered model, components of OpenStack, architecture of OpenStack. Further discussing about how to install Openstack as well as how to build virtual machine (VM) in Openstack cloud using CLI on RHEL 6.4 and at last covering latest OpenStack releases Icehouse, which is used for building public, private, and hybrid clouds and introduce what new features added in Icehouse. The aim of this paper is to show mainly importance of OpenStack as a Cloud provider and give the best solution for service providers as well as enterprises. <s> BIB005
the concept of our society, i.e., the immune system, the human brain (its neuronal structure), ecosystems and human societies. Big data provides numerous services and infrastructures to companies and has opened new research directions in computer science. Most current cloud computing applications use distributed computing with varying degrees of connectivity and interaction. Big data delivers computation and efficient processing to millions of users and exhibits a level of complexity comparable to a CAS BIB001 . Beyond this complexity, achieving energy efficiency in cloud computing and big data is a global challenge. Researchers have proposed many methods to reduce power consumption in cloud and big data infrastructures: most solutions propose powering off unused components, while others focus on the optimal distribution of data among components BIB002 . Cloud computing provides numerous services to users but poses challenges because of its complex nature. The devices used in the cloud are so numerous that the resulting system is arguably more complex than the structure of the human brain. Besides complexity, clouds also face challenges such as the security and privacy of data. Big data allows users to host, access and process data at any time. The volume of data is growing enormously day by day, and the era of big data has undoubtedly arrived. Big data requires dedicated management techniques to help communities (e.g., users) perform their tasks quickly and efficiently. CAS concepts help model user behaviour, which in turn helps cloud providers manage users efficiently. To build an energy efficient cloud infrastructure, we must understand the interactions between the power-consuming components of these complex systems and estimate the resulting power-performance trade-off. The volume of data is increasing at an astonishing rate: about 90 % of the data available today was created in just the last two years BIB001 . Facebook alone processes nearly 500 terabytes of data daily BIB005 . The Large Hadron Collider (LHC) computing grid also generates vast amounts of data; dozens of petabytes are produced, and their dissemination, transmission and processing consume huge amounts of energy BIB003 . However, these data generators do not address how energy can be saved and used wisely to meet this ever increasing demand. GreenHadoop contributes toward energy efficiency by exploiting solar energy (Menon 2012), but it hits a bottleneck when the weather remains cloudy for many days. Hadoop also employs techniques such as MapReduce, which addresses how effectively a query is answered but has no concern for energy efficiency. Big data helps companies solve business problems with ease; it combines hardware, software, algorithms and related techniques, and relies on standardized approaches to help users perform their tasks. Big data has always assured users that the desired data is accessible. However, the systems, servers, components and subsystems that serve users consume an enormous amount of energy. Big data thus serves users with its unique features while, at the same time, facing a variety of challenges.
When we consider the volume of data, data storage is a first issue, and the privacy and integrity of the data is a second major concern: users may be affected by viruses, Trojan horses and hackers. Another promised feature of big data is that information is always accessible to the user; in practice, however, users can face situations where data is unreachable due to a poor network connection. Alongside all of these issues, energy is another important concern that needs to be addressed. Because of technology trends and the growth of wired, wireless and mobile device networks, energy consumption has risen sharply, creating a strong demand for tools and techniques that can manage this growing need for energy. As the volume of data increases, more resources are required to hold the data and, likewise, more energy is required to keep those resources running. However, no single technique efficiently addresses all energy consumption issues, so researchers and scientists are developing a range of techniques that aim to minimize energy consumption in big data. Energy consumption is of special concern in cloud computing data centers, where thousands of computers, servers, routers, switches and bridges operate and consume thousands of kilowatts of energy. Cloud computing stakeholders are therefore looking for energy efficient algorithms that reduce the cost of energy BIB004 . Although many surveys on energy efficiency in big data exist, the existing research does not provide a thorough insight into energy efficiency in the context of big data and CASs. Our unique contribution is to survey energy utilization methods, techniques and algorithms for CAS. In this paper, we provide a comprehensive evaluation of existing techniques in the form of tables (i.e., Tables 2, 3, 4, 5, 6, 7) and an extension and expansion of the existing taxonomy of hardware based energy efficiency techniques, as expressed in Fig. 5. We estimate energy consumption per server class for the year 2007 and onward in Table 2, and provide a component based taxonomy of energy efficient techniques in Table 1. We examine big data in the context of complex adaptive systems and give an overview of the variety of services provided by cloud providers and the challenges they face. We further identify the hardware and software based techniques and approaches used to meet the energy demands of the cloud. Finally, we present our findings about one of the best techniques for energy efficiency, which has some limitations but is comparatively better than the alternatives. The remainder of the paper is organized as follows: "Background" describes the background of big data services, the key challenges of big data and an overview of energy efficient techniques. "Critical analysis of existing surveys" provides a critical analysis of existing surveys. "Energy efficient techniques" details the different techniques used in big data; we also evaluate each technique against certain parameters in the context of big data in this section. We provide our summary and findings in "Summary and findings". Some open issues with DVFS are elaborated in "Open issues". The paper is concluded in "Conclusion and future work", where future directions are also elaborated.
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Background <s> The frenetic development of the current architectures places a strain on the current state-of-the-art programming environments. Harnessing the full potential of such architectures has been a tremendous task for the whole scientific computing community. We present DAGuE a generic framework for architecture aware scheduling and management of micro-tasks on distributed many-core heterogeneous architectures. Applications we consider can be represented as a Direct Acyclic Graph of tasks with labeled edges designating data dependencies. DAGs are represented in a compact, problem-size independent format that can be queried on-demand to discover data dependencies, in a totally distributed fashion. DAGuE assigns computation threads to the cores, overlaps communications and computations and uses a dynamic, fully-distributed scheduler based on cache awareness, data-locality and task priority. We demonstrate the efficiency of our approach, using several micro-benchmarks to analyze the performance of different components of the framework, and a Linear Algebra factorization as a use case. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Background <s> MapReduce is a powerful paradigm that enables rapid implementation of a wide range of distributed data-intensive applications. The Hadoop project, its main open source implementation, has recently been widely adopted by the Cloud computing community. This paper aims to evaluate the cost of moving MapReduce applications to the Cloud, in order to find a proper trade-off between cost and performance for this class of applications. We provide a cost evaluation of running MapReduce applications in the Cloud, by looking into two aspects: the overhead implied by the execution of MapReduce jobs in the Cloud, compared to an execution on a Grid, and the actual costs of renting the corresponding Cloud resources. For our evaluation, we compared the runtime of 3 MapReduce applications executed with the Hadoop framework, in two environments: 1)on clusters belonging to the Grid’5000 experimental grid testbed and 2)in a Nimbus Cloud deployed on top of Grid’5000 nodes. <s> BIB002 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Background <s> In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions. <s> BIB003
Efficient energy consumption has remained a concern for researchers and practitioners because excessive energy consumption depletes natural resources, which in turn increases pollution and causes health hazards. According to one survey (Goiri 2012), CO2 emissions from the information technology (IT) sector have increased by 6 %, which is also a serious hazard to human health. In recent years, organizations such as IBM, Google and Microsoft have built data centers in which thousands of machines run and consume large amounts of energy. To cope with this energy challenge, different techniques have been developed to minimize energy consumption in data centers. Addressing energy efficiency is necessary; otherwise, within a few years the cost of energy will exceed the cost of the hardware itself. To deal with this issue, various software and hardware based techniques have been proposed and deployed in data centers BIB001 . The energy consumed in a big data center is computed by determining how much energy each device consumes while it is operating. Efficient utilization of energy has therefore drawn increasing attention from both cost and environmental perspectives BIB003 . When many machines operate in a cloud infrastructure, the result is substantial CO2 emission. The use of the Internet, the exchange of data over it, and the growing processing and analytics demand all translate into large energy consumption. Therefore, a power consumption methodology and the control, checks and balances of power resources are necessary alongside the expandability and accessibility of big data. Different models have been proposed for energy efficiency, but each runs into different bottlenecks caused by service level and configuration changes (Krauth 2006). Modern service providers have resolved this issue to a large extent, yet every algorithm has its pitfalls: resource usage, control of carbon emissions and domain-specific policies make it genuinely challenging to build one common solution for all. Newer techniques such as virtualization and sampling also contribute toward energy efficiency, as do MapReduce and the intelligent power saving architecture (ISPA) BIB002 . Big data services are numerous and support companies in functioning rigorously; big data helps users perform their tasks through its distinctive quality of service. Big data also supports networking services, which have helped companies develop customer relationship management (CRM) systems and extend services to users through remote access without time constraints. In the following sections, we briefly and precisely present an overview of big data services and challenges and a critical review of different energy efficiency techniques in the context of CAS.
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> In this paper we address power conservation for clusters of workstations or PCs. Our approach is to develop systems that dynamically turn cluster nodes on – to be able to handle the load imposed on the system efficiently – and off – to save power under lighter load. The key component of our systems is an algorithm that makes load balancing and unbalancing decisions by considering both the total load imposed on the cluster and the power and performance implications of turning nodes off. The algorithm is implemented in two different ways: (1) at the application level for a cluster-based, localityconscious network server; and (2) at the operating system level for an operating system for clustered cycle servers. Our experimental results are very favorable, showing that our systems conserve both power and energy in comparison to traditional systems. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> The declining costs of commodity disk drives is rapidly changing the economics of deploying large amounts of online or near-line storage. Conventional mass storage systems use either high performance RAID clusters, automated tape libraries or a combination of tape and disk. In this paper, we analyze an alternative design using massive arrays of idle disks, or MAID. We argue that this storage organization provides storage densities matching or exceeding those of tape libraries with performance similar to disk arrays. Moreover, we show that with effective power management of individual drives, this performance can be achieved using a very small power budget. In particular, we show that our power management strategy can result in the performance comparable to an always-on RAID system while using 1/15th the power of such a RAID system. <s> BIB002 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment. <s> BIB003 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> Power management has become increasingly necessary in large-scale datacenters to address costs and limitations in cooling or power delivery. This paper explores how to integrate power management mechanisms and policies with the virtualization technologies being actively deployed in these environments. 
The goals of the proposed VirtualPower approach to online power management are (i) to support the isolated and independent operation assumed by guest virtual machines (VMs) running on virtualized platforms and (ii) to make it possible to control and globally coordinate the effects of the diverse power management policies applied by these VMs to virtualized resources. To attain these goals, VirtualPower extends to guest VMs `soft' versions of the hardware power states for which their policies are designed. The resulting technical challenge is to appropriately map VM-level updates made to soft power states to actual changes in the states or in the allocation of underlying virtualized hardware. An implementation of VirtualPower Management (VPM) for the Xen hypervisor addresses this challenge by provision of multiple system-level abstractions including VPM states, channels, mechanisms, and rules. Experimental evaluations on modern multicore platforms highlight resulting improvements in online power management capabilities, including minimization of power consumption with little or no performance penalties and the ability to throttle power consumption while still meeting application requirements. Finally, coordination of online methods for server consolidation with VPM management techniques in heterogeneous server systems is shown to provide up to 34% improvements in power consumption. <s> BIB004 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> Hadoop Distributed File System (HDFS) presents unique challenges to the existing energy-conservation techniques and makes it hard to scale-down servers. We propose an energy-conserving, hybrid, logical multi-zoned variant of HDFS for managing data-processing intensive, commodity Hadoop cluster. Green HDFS's data-classification-driven data placement allows scale-down by guaranteeing substantially long periods (several days) of idleness in a subset of servers in the datacenter designated as the Cold Zone. These servers are then transitioned to high-energy-saving, inactive power modes. This is done without impacting the performance of the Hot zone as studies have shown that the servers in the data-intensive compute clusters are under-utilized and, hence, opportunities exist for better consolidation of the workload on the Hot Zone. Analysis of the traces of a Yahoo! Hadoop cluster showed significant heterogeneity in the data's access patterns which can be used to guide energy-aware data placement policies. The trace-driven simulation results with three-month-long real-life HDFS traces from a Hadoop cluster at Yahoo! show a 26% energy consumption reduction by doing only Cold zone power management. Analytical cost model projects savings of $14.6 million in 3-year total cost of ownership (TCO) and simulation results extrapolate savings of $2.4 million annually when Green-HDFS technique is applied across all Hadoop clusters (amounting to 38000 servers) at Yahoo. <s> BIB005 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 
3 Energy efficiency techniques <s> Energy efficiency is increasingly important for future information and communication technologies (ICT), because the increased usage of ICT, together with increasing energy costs and the need to reduce green house gas emissions call for energy-efficient technologies that decrease the overall energy consumption of computation, storage and communications. Cloud computing has recently received considerable attention, as a promising approach for delivering ICT services by improving the utilization of data centre resources. In principle, cloud computing can be an inherently energy-efficient technology for ICT provided that its potential for significant energy savings that have so far focused on hardware aspects, can be fully explored with respect to system operation and networking aspects. Thus this paper, in the context of cloud computing, reviews the usage of methods and technologies currently used for energy-efficient operation of computer hardware and network infrastructure. After surveying some of the current best practice and relevant literature in this area, this paper identifies some of the remaining key research challenges that arise when such energy-saving techniques are extended for use in cloud computing environments. <s> BIB006 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> Energy saving has become a crucial concern in datacenters as several reports predict that the anticipated energy costs over a three year period will exceed hardware acquisition. In particular, saving energy for storage is of major importance as storage devices (and cooling them off) may contribute over 25 percent of the total energy consumed in a datacenter. Recent work introduced the concept of energy proportionality and argued that it is a more relevant metric than just energy saving as it takes into account the tradeoff between energy consumption and performance. In this paper, we present a novel approach, called FREP (Fractional Replication for Energy Proportionality), for energy management in large datacenters. FREP includes a replication strategy and basic functions to enable flexible energy management. Specifically, our method provides performance guarantees by adaptively controlling the power states of a group of disks based on observed and predicted workloads. Our experiments using a set of real and synthetic traces show that FREP dramatically reduces energy requirements with a minimal response time penalty. <s> BIB007
makes the deployment somewhat technical. Most data centers deploy a variety of software techniques in combination with hardware techniques to achieve energy efficiency. The energy consumption surveyed in previous years per server class (W/unit), from 2000 to 2006, is summarized in Table 2 (Valentini et al. 2011b). With the increasing amount of data, energy consumption grows every day, and it increases across server classes as depicted in Table 2; the use of suitable techniques and approaches can reduce this power consumption. Beyond the services provided by the cloud, certain approaches still need significant improvement. A blunt power-performance trade-off is not suitable in the big data environment. The available tools are not yet up to the mark: they cannot simulate and model the behavior of users over a specific period of time efficiently and accurately. Such tools must be able to model self-organization and other complex phenomena related to human life. Some cloud infrastructures are unstructured (e.g., P2P systems), which requires dedicated applications and tools to cater to growing energy needs; in such systems, algorithms like the self-organized power consumption approximation algorithm (SOPCA) are used to monitor the power consumption of the different devices. Modern complex systems not only need to adjust ranges and other parameters but also need to model and simulate the behavior of their entities. Some tools have been developed for this task, but they are very limited in scope. To gain better understanding and more accurate results, tools such as NetLogo and agent-based toolkits have been proposed and used by researchers to model the complexity of CAS. One of the earlier works applying power management at the data center level was done by BIB001 . The authors proposed a technique for energy efficiency in a heterogeneous cluster of nodes serving web applications; its main contribution was concentrating the workload on a subset of nodes and switching idle nodes off. However, the load balancing and the weak implementation of SLAs result in performance degradation. BIB004 studied power management techniques in the context of virtualized data centers and introduced a power management technique named "soft resource scaling". However, its adoption did not achieve the required results because guest operating systems were often legacy or power unaware. BIB003 suggested putting network interfaces, links, switches and routers into sleep mode when they are idle in order to save the energy consumed by the Internet backbone and by consumers. However, such a technique can cause communication loss if necessary components are asleep, and devices consume extra power when waking up. Disk design also contributes to energy efficiency: BIB002 presented the concept of MAID (massive arrays of idle disks), a technique that powers off disks when they are not in use, while recently used data is written to a small set of cache disks. However, the cache disks always remain spun up while the regular disks sit idle, which in turn adds to the energy consumption.
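To make the spin-down idea described above more concrete, the following is a minimal, illustrative sketch (in Python, not taken from MAID or any of the cited systems) of a fixed-timeout power-down policy: a device is moved to a low-power state once it has been idle longer than a threshold, and every later access pays a wake-up energy penalty. All power figures, the wake-up cost and the access trace are invented assumptions.

```python
# Minimal, illustrative simulation of a fixed-timeout spin-down policy
# (in the spirit of MAID-style disk management / cluster node shutdown).
# All numbers below are made-up assumptions, not measurements.

def simulate_timeout_policy(access_times, timeout=30.0,
                            p_idle=5.0, p_off=0.5, wakeup_energy=20.0):
    """Return total energy (joules) spent between accesses for a device
    serving the given sorted access timestamps (seconds) under a
    spin-down timeout policy."""
    energy = 0.0
    last_access = access_times[0]
    for t in access_times[1:]:
        gap = t - last_access
        if gap > timeout:
            # idle until the timeout fires, then off, then pay wake-up cost
            energy += timeout * p_idle + (gap - timeout) * p_off + wakeup_energy
        else:
            energy += gap * p_idle
        last_access = t
    return energy

accesses = [0, 5, 12, 200, 205, 900, 1800]  # hypothetical access trace
for timeout in (10, 30, 120):
    print(timeout, round(simulate_timeout_policy(accesses, timeout), 1))
```

Sweeping the timeout in this way exposes the trade-off discussed above: a short timeout saves idle power but pays frequent wake-up penalties, while a very long timeout behaves like an always-on device.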
BIB007 presented a novel approach, called FREP (Fractional Replication for Energy Proportionality), for energy management in big data. FREP includes a replication strategy and basic functions that enable flexible energy management according to the cloud's needs, including load distribution and update consistency. However, the impact of replication on the overall storage cost of the system is not presented. BIB005 proposed an energy conserving, hybrid, multi-zone variant of HDFS for data-processing intensive, commodity Hadoop clusters. In trace-driven simulations with three-month-long HDFS traces, this variant reduced energy consumption by 26 %, and its cost model projects savings of $14.6 million in three-year total cost of ownership. Different types of cloud infrastructure, including traditional clouds and high performance computing (HPC), need to be enhanced to support dynamic power demands (i.e., to adjust power automatically), which in turn creates new challenges in designing architectures, infrastructures and communications that are energy efficient and power aware; this concept was put forward in earlier work. A comprehensive survey of energy saving strategies in both networks and computer systems, with potential impact on the energy consumption of integrated systems, is given by BIB006 . Other work highlights the energy concerns in system design and in performance- and energy-efficient application development, explaining how the goals of computer system design have shifted toward power and energy concerns; it surveys power consumption problems, hardware and firmware level techniques, the contribution of the operating system to energy efficiency, data center level techniques and the importance of virtualization in data centers, and it discusses power consumption at different levels of a computing system in terms of electricity bills, power budgets and CO2 emissions. DVFS offers a large reduction of energy consumption in cloud infrastructure by changing voltage and frequency according to the workload, and its implementation in the cloud has reduced power consumption significantly. Most clouds have implemented this CPU level technique, the CPU being the component that absorbs the most energy. DVFS has attracted a lot of attention from the research community for being adaptive and efficient. Complex adaptive system modeling and simulation are used to communicate clearly the nature of complex systems: the interaction and coordination of entities helps in understanding their behavior. To manage and meet energy needs in complex systems, several approaches have been proposed and used by cloud providers. The intelligent self-organizing power-saving architecture (ISPA) identifies suitable idle computers intelligently and lets the system shut down or hibernate automatically based on a uniform, rule-based, company-wide policy; this architecture results in minimal performance loss compared to other techniques. The detailed description of the hardware and software based techniques is elaborated in the next section.
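Before turning to those details, the hot/cold zoning idea behind GreenHDFS can be illustrated with a small, hypothetical sketch: files that have not been accessed for longer than a dormancy threshold are assigned to a cold zone whose servers may then be transitioned to low-power states. The threshold, file names and timestamps below are invented for illustration and do not come from the cited system.

```python
# Illustrative sketch of data-classification-driven placement in the spirit
# of GreenHDFS: files that have not been read for a while migrate to a
# "cold zone" whose servers can be put into low-power states.
# The threshold and the trace below are hypothetical.

import time

COLD_AFTER_DAYS = 30  # assumed dormancy threshold

def classify(files, now=None):
    """files: dict mapping file name -> last access time (epoch seconds).
    Returns (hot, cold) lists of file names."""
    now = now if now is not None else time.time()
    hot, cold = [], []
    for name, last_access in files.items():
        days_idle = (now - last_access) / 86400.0
        (cold if days_idle > COLD_AFTER_DAYS else hot).append(name)
    return hot, cold

now = time.time()
trace = {"clicklog-2015-01": now - 90 * 86400,
         "clicklog-2015-03": now - 2 * 86400,
         "model-checkpoint": now - 40 * 86400}
hot, cold = classify(trace, now)
print("hot zone:", hot)
print("cold zone:", cold)
```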
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Parameter(s) Evaluation <s> Dynamic power management (DPM) is a design methodology for dynamically reconfiguring systems to provide the requested services and performance levels with a minimum number of active components or a minimum load on such components. DPM encompasses a set of techniques that achieves energy-efficient computation by selectively turning off (or reducing the performance of) system components when they are idle (or partially unexploited). In this paper, we survey several approaches to system-level dynamic power management. We first describe how systems employ power-manageable components and how the use of dynamic reconfiguration can impact the overall power consumption. We then analyze DPM implementation issues in electronic systems, and we survey recent initiatives in standardizing the hardware/software interface to enable software-controlled power management of hardware components. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Parameter(s) Evaluation <s> This paper presents a novel run-time dynamic voltage scaling scheme for low-power real-time systems. It employs software feedback control of supply voltage, which is applicable to off-the-shelf processors. It avoids interface problems from variable clock frequency. It provides efficient power reduction by fully exploiting slack time arising from workload variation. Using software analysis environment, the proposed scheme is shown to achieve 80~94% power reduction for typical real-time multimedia applications. <s> BIB002
Performance: overall satisfactory in a small enterprise; it can be improved when the next transition is already known, or when a system model determines the transition interval, so as to avoid the overhead of activation and deactivation. Goal: to achieve maximum energy efficiency and minimize energy consumption. Cost (in terms of man power): these techniques are hard to develop and require more effort; relying on advanced techniques and the latest technology makes them more costly. Switching cost: whenever switching is done, it not only degrades performance but also increases energy consumption. Figure 5 summarizes all hardware techniques that support energy efficiency. Hardware support is a key to achieving energy efficiency through algorithms, policies and software approaches. Hardware is properly evaluated and tested by reputable companies before deployment so that energy efficiency is achieved effectively, and companies that invest in hardware to cope with the energy issue tend to benefit more than those that invest only in software. Different software and hardware techniques and their implementations produce the desired results, and recent advances have markedly increased the popularity of big data by delivering services to the intended users in a cost effective way. The performance evaluation of each technique is expressed in Table 3 against a few important parameters used to assess it. DCD is further divided into various techniques, i.e., predictive and stochastic, which contribute toward energy efficiency. In predictive techniques, the decision of when to activate and deactivate system components is made on the basis of a prediction, and different policies exist that exploit the correlation between active and inactive states. Energy is consumed whenever components are woken up or put to sleep, which also causes performance overhead and can have serious drawbacks. Predictive shutdown and predictive wakeup provide the best available solutions to this problem, although they require some intelligence to be built into the mechanism. Predictive shutdown policies address periods of inactivity: depending on the particular policy, historical data is used to predict the next idle period. These approaches involve decision making and depend heavily on the actual utilization of energy and on the strength of the correlation between previous and future events. History-based predictors are energy efficient but, because they work on predictions, they are not as safe as timeouts BIB001 , and predictions are unreliable in many situations. Predictive wakeup techniques aim to reduce the energy consumed on activation, since most components require a lot of energy at wakeup. The transition from the active to the inactive state is computed on the basis of past records and sometimes on user requirements (Albers 2010). In these techniques, energy consumption is higher, but there is minimal performance overhead at wakeup. The performance evaluation of fixed timeout, predictive shutdown and predictive wakeup is expressed in Table 4. The accuracy of such techniques is assessed in terms of complexity, performance, maintenance, cost and energy efficiency. In the section above, the concepts related to SPM and its sub-techniques have been explored. All these techniques are static.
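The history-based predictive shutdown idea above can be sketched with the kind of exponential-average predictor mentioned in the cited work: the next idle period is estimated from a weighted average of past idle periods, and the component is shut down only when the prediction exceeds a break-even time. The weighting factor, break-even value and idle trace below are illustrative assumptions, not values from any cited system.

```python
# Sketch of a predictive-shutdown policy based on an exponential average of
# past idle periods. The weighting factor and break-even time are invented.

class ExpAvgShutdownPredictor:
    def __init__(self, alpha=0.5, break_even=25.0):
        self.alpha = alpha            # weight of the most recent idle period
        self.break_even = break_even  # idle time (s) that justifies a shutdown
        self.predicted_idle = 0.0

    def observe(self, actual_idle):
        """Update the prediction after an idle period has ended."""
        self.predicted_idle = (self.alpha * actual_idle +
                               (1 - self.alpha) * self.predicted_idle)

    def should_shut_down(self):
        """Shut the component down only if the next idle period is
        predicted to be longer than the break-even time."""
        return self.predicted_idle > self.break_even

predictor = ExpAvgShutdownPredictor()
for idle in [5, 40, 60, 3, 80]:          # hypothetical idle periods (seconds)
    predictor.observe(idle)
    print(round(predictor.predicted_idle, 1), predictor.should_shut_down())
```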
In order to deal with the problem of intelligently determining idle components, adaptive techniques have been developed, since predicting the next transition is inefficient when the workload is not known in advance. Several practical techniques that focus mainly on energy efficiency have been discussed in the literature. SPM considers the architecture of the RAM, the CPU and related components: it is specifically designed to control the internal structure of the CPU, including circuits, chips, and the structure of buses and ports, and it uses intelligent approaches to determine the transitions and sequences of inactive and active states. Cost (in terms of money): for each of these techniques, development and implementation increase costs both in man power and in money. Performance: DVFS provides good performance; it reduces the number of instructions the processor issues in a given instant of time, which results in power reduction. Energy efficiency: DVFS provides good energy efficiency if the workload is known, or if the task is divided and different frequencies are assigned; in any case it is better than static power management. V and P relationship: the governing equation suggests that the relationship between power and voltage is quadratic; however, it may not always be quadratic and can be linear or non-linear depending on the interactions. Complexity: the DVFS architecture is quite complex, and the structure of the system can increase this complexity further. Cost (in terms of money): implementing the same logic on chip requires huge effort and, because of the technicalities involved, it is costly. Maintenance: the CPU frequency needs to be adjusted at each instruction, so it is hard to operate, and improvement and enhancement are not always easy. Response time: the response time exhibits non-linearity, but execution is fast, so DVFS provides a better response time; sometimes program execution is independent of the CPU, and I/O bound processes execute without CPU involvement. PowerNap implementations have been tested on different systems; their transitions were measured and comparisons were made across power states. The conclusions drawn rest on the assumption that if the switching time is less than or equal to 10 ms, the power savings are approximately smooth and linear and exceed those of DVFS. In a typical situation, however, the transition time is around 300 ms, so the desired requirements are hard to meet; if the authors' mechanism for handling transition time is applied, average server power can be reduced by 74 % BIB002 . The performance evaluation of PowerNap is provided in Table 6, where it is compared on parameters such as complexity and cost.
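The sensitivity to transition time discussed above can be reproduced with a short break-even calculation: napping during an idle gap only pays off if the energy saved in the nap state exceeds the energy spent on the two transitions. The power levels and timings below are invented for illustration and are not the figures reported for PowerNap.

```python
# Illustrative break-even check for a PowerNap-style policy: napping during
# an idle gap only pays off if the energy saved while napping exceeds the
# energy spent on the two transitions. Power and timing figures are made up.

def nap_saves_energy(idle_s, p_idle=200.0, p_nap=20.0,
                     transition_s=0.3, p_transition=250.0):
    awake_energy = p_idle * idle_s
    nap_energy = (2 * transition_s * p_transition
                  + max(idle_s - 2 * transition_s, 0.0) * p_nap)
    return nap_energy < awake_energy

for gap in (0.05, 0.5, 5.0):             # idle gaps in seconds
    for trans in (0.01, 0.3):            # 10 ms vs 300 ms transitions
        print(gap, trans, nap_saves_energy(gap, transition_s=trans))
```

With 10 ms transitions even very short idle gaps are worth napping through, while with 300 ms transitions only long gaps pay off, which mirrors the reasoning in the paragraph above.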
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> Mobile computers typically spin down their hard disk after a fixed period of inactivity. If this threshold is too long, the disk wastes energy; if it is too short, the delay due to spinning the disk up again frushates the user. Usage patterns change over time, so a single fixed threshold may not be appropriate at all times. Also, different users may have varying pri- orities with respect to trading off energy conservation against performance. We describe a method for vary- ing the spin-down threshold dynamically by adapting to the user's access patterns and priorities. Adaptive spin-down can in some circumstances reduce by up to 507o the number of disk spin-ups that are deemed by the user to be inconvenient, while only moderately increasing energy consumption. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> This paper presents a novel run-time dynamic voltage scaling scheme for low-power real-time systems. It employs software feedback control of supply voltage, which is applicable to off-the-shelf processors. It avoids interface problems from variable clock frequency. It provides efficient power reduction by fully exploiting slack time arising from workload variation. Using software analysis environment, the proposed scheme is shown to achieve 80~94% power reduction for typical real-time multimedia applications. <s> BIB002 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> Scalability of the core frequency is a common feature of low-power processor architectures. Many heuristics for frequency scaling were proposed in the past to find the best trade-off between energy efficiency and computational performance. With complex applications exhibiting unpredictable behavior these heuristics cannot reliably adjust the operation point of the hardware because they do not know where the energy is spent and why the performance is lost.Embedded hardware monitors in the form of event counters have proven to offer valuable information in the field of performance analysis. We will demonstrate that counter values can also reveal the power-specific characteristics of a thread.In this paper we propose an energy-aware scheduling policy for non-real-time operating systems that benefits from event counters. By exploiting the information from these counters, the scheduler determines the appropriate clock frequency for each individual thread running in a time-sharing environment. A recurrent analysis of the thread-specific energy and performance profile allows an adjustment of the frequency to the behavioral changes of the application. While the clock frequency may vary in a wide range, the application performance should only suffer slightly (e.g. with 10% performance loss compared to the execution at the highest clock speed). Because of the similarity to a car cruise control, we called our scheduling policy Process Cruise Control. This adaptive clock scaling is accomplished by the operating system without any application support.Process Cruise Control has been implemented on the Intel XScale architecture, that offers a variety of frequencies and a set of configurable event counters. 
Energy measurements of the target architecture under variable load show the advantage of the proposed approach. <s> BIB003 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> This work examines fundamental tradeoffs incurred by a speed scaler seeking to minimize the sum of expected response time and energy use per job. We prove that a popular speed scaler is 2-competitive for this objective and no "natural" speed scaler can do better. Additionally, we prove that energy-proportional speed scaling works well for both Shortest Remaining Processing Time (SRPT) and Processor Sharing (PS) and we show that under both SRPT and PS, gated-static speed scaling is nearly optimal when the mean workload is known, but that dynamic speed scaling provides robustness against uncertain workloads. Finally, we prove that speed scaling magnifies unfairness under SRPT but that PS remains fair under speed scaling. These results show that these speed scalers can achieve any two, but only two, of optimality, fairness, and robustness. <s> BIB004 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> MapReduce workloads have evolved to include increasing amounts of time-sensitive, interactive data analysis; we refer to such workloads as MapReduce with Interactive Analysis (MIA). Such workloads run on large clusters, whose size and cost make energy efficiency a critical concern. Prior works on MapReduce energy efficiency have not yet considered this workload class. Increasing hardware utilization helps improve efficiency, but is challenging to achieve for MIA workloads. These concerns lead us to develop BEEMR (Berkeley Energy Efficient MapReduce), an energy efficient MapReduce workload manager motivated by empirical analysis of real-life MIA traces at Facebook. The key insight is that although MIA clusters host huge data volumes, the interactive jobs operate on a small fraction of the data, and thus can be served by a small pool of dedicated machines; the less time-sensitive jobs can run on the rest of the cluster in a batch fashion. BEEMR achieves 40-50% energy savings under tight design constraints, and represents a first step towards improving energy efficiency for an increasingly important class of datacenter workloads. <s> BIB005
DVFS contributes substantially to energy efficiency, especially in the cloud environment. CPU frequencies need proper adjustment, but frequency adjustment requires voltage scaling as well; both parameters must be adjusted together in order to contribute to energy efficiency. An increase in voltage sometimes causes an increase in temperature, which in turn increases energy consumption. DVFS reduces the number of instructions the CPU can issue in a given instant of time, which lowers performance; this performance overhead is felt especially by CPU bound processes. Researchers and designers have been exploring this issue for several years but have been unable to provide an optimal solution. The general formula relating power to voltage and frequency is the dynamic power equation P ≈ C × V² × f, where C is the effective switched capacitance, V the supply voltage and f the operating frequency. DVFS looks straightforward, but its implementation is not easy: the structure of real systems imposes certain technicalities on DVFS, and producing the frequency needed to meet application performance is also tricky. Moreover, the authors are not certain whether the power consumed by the processor is quadratic, linear or non-linear in the supplied voltage BIB005 . Several approaches that reduce this energy consumption have been practiced; they can be categorized as interval based, intra-task based and inter-task based (Hwang and Wu 2000). The interval based technique resembles the adaptive techniques: it predicts CPU utilization over intervals and performs transitions accordingly. The inter-task approach dynamically distinguishes between processes based on their execution time and assigns each a different CPU speed (Hwang and Wu 2000; BIB001 ). However, this can cause problems when different scheduling algorithms are applied, because the execution time under a round robin (RR) scheduler differs from that under a first come first served (FCFS) scheduler. Voltage and frequency can be adjusted best if the workload is known in advance or is constant throughout the execution. Compared with the inter-task approach, the intra-task approach provides fine grained information about the structure of the programs and tunes the processor voltage and frequency within tasks effectively (Buttazzo 2002; BIB004 ; BIB003 ). DVFS is always concerned with energy saving through its efficient scheduling method: it saves energy when peak performance of a component is not required, and it also scales down CPU cycles when the CPU is not doing useful work, for example while it is waiting on data reads. DVFS scheduling is one of the best techniques contributing toward energy efficiency. DVFS uses A2E, which sets it apart from the other techniques available for energy efficiency, and it scales voltage and frequency up and down so well that performance is barely hindered. DVFS uses a simple method to save a considerable amount of energy while keeping the servers on all the time. However, for most data intensive workloads it may not be a suitable option, because these applications mostly perform read/write operations. It competes with all other energy saving techniques while requiring minimal performance compromises; it is adaptive, and its scheduling happens at runtime, which is a key to its success. This is the reason DVFS is mostly used by the companies that dominate big data BIB002 . Dynamic voltage and frequency scaling is deployed in many data centers to fulfill their energy needs, but the devices need to be built with a service oriented and energy oriented architecture.
The performance evaluation of DVFS is provided in Table 5.
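As a numerical illustration of the quadratic voltage dependence noted above, the sketch below evaluates the conventional dynamic-power model P ≈ C × V² × f at a few operating points for a CPU-bound task; the capacitance value and the voltage/frequency pairs are assumptions, not measured values.

```python
# Illustrative evaluation of the conventional dynamic-power model used in
# DVFS discussions: P ~ C * V^2 * f. Operating points are hypothetical.

C = 1e-9  # assumed effective switched capacitance (farads)

operating_points = [          # (frequency Hz, supply voltage V)
    (1.0e9, 0.9),
    (2.0e9, 1.1),
    (3.0e9, 1.3),
]

work_cycles = 3.0e9           # cycles needed by a CPU-bound task

for f, v in operating_points:
    power = C * v**2 * f                  # watts at this operating point
    runtime = work_cycles / f             # seconds (CPU-bound assumption)
    energy = power * runtime              # joules spent on the task
    print(f"{f/1e9:.1f} GHz @ {v:.1f} V: {power:.2f} W, "
          f"{runtime:.2f} s, {energy:.2f} J")
```

Under this CPU-bound assumption, running slower at a lower voltage costs time but saves energy, which is exactly the trade-off DVFS schedulers exploit; for I/O-bound phases, as noted above, the picture changes.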
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> INTRODUCTION <s> Engineered systems are often built of recurring circuit modules that carry out key functions. Transcription networks that regulate the responses of living cells were recently found to obey similar principles: they contain several biochemical wiring patterns, termed network motifs, which recur throughout the network. One of these motifs is the feed-forward loop (FFL). The FFL, a three-gene pattern, is composed of two input transcription factors, one of which regulates the other, both jointly regulating a target gene. The FFL has eight possible structural types, because each of the three interactions in the FFL can be activating or repressing. Here, we theoretically analyze the functions of these eight structural types. We find that four of the FFL types, termed incoherent FFLs, act as sign-sensitive accelerators: they speed up the response time of the target gene expression following stimulus steps in one direction (e.g., off to on) but not in the other direction (on to off). The other four types, coherent FFLs, act as sign-sensitive delays. We find that some FFL types appear in transcription network databases much more frequently than others. In some cases, the rare FFL types have reduced functionality (responding to only one of their two input stimuli), which may partially explain why they are selected against. Additional features, such as pulse generation and cooperativity, are discussed. This study defines the function of one of the most significant recurring circuit elements in transcription networks. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> INTRODUCTION <s> BackgroundThere has been tremendous interest in the study of biological network structure. An array of measurements has been conceived to assess the topological properties of these networks. In this study, we compared the metabolic network structures of eleven single cell organisms representing the three domains of life using these measurements, hoping to find out whether the intrinsic network design principle(s), reflected by these measurements, are different among species in the three domains of life.ResultsThree groups of topological properties were used in this study: network indices, degree distribution measures and motif profile measure. All of which are higher-level topological properties except for the marginal degree distribution. Metabolic networks in Archaeal species are found to be different from those in S. cerevisiae and the six Bacterial species in almost all measured higher-level topological properties. Our findings also indicate that the metabolic network in Archaeal species is similar to the exponential random network.ConclusionIf these metabolic network properties of the organisms studied can be extended to other species in their respective domains (which is likely), then the design principle(s) of Archaea are fundamentally different from those of Bacteria and Eukaryote. Furthermore, the functional mechanisms of Archaeal metabolic networks revealed in this study differentiate significantly from those of Bacterial and Eukaryotic organisms, which warrant further investigation. 
<s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> INTRODUCTION <s> The first comprehensive book on the emerging field of network science, Network Science: Theory and Applications is an exhaustive review of terms, ideas, and practices in the various areas of network science. In addition to introducing theory and application in easy-to-understand, topical chapters, this book describes the historical evolution of network science through the use of illustrations, tables, practice problems with solutions, case studies, and applications to related Java software. Researchers, professionals, and technicians in engineering, computing, and biology will benefit from this overview of new concepts in network science. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> INTRODUCTION <s> Professor Barabási's talk described how the tools of network science can help understand the Web's structure, development and weaknesses. The Web is an information network, in which the nodes are documents (at the time of writing over one trillion of them), connected by links. Other well-known network structures include the Internet, a physical network where the nodes are routers and the links are physical connections, and organizations, where the nodes are people and the links represent communications. <s> BIB004
Networks (or graphs) are a very flexible and powerful way of modeling many real-world systems. In essence, they capture the interactions of a system by representing entities as nodes and their relations as edges connecting them (e.g., people are nodes in social networks and edges connect those that have some relationship between them, such as friendships or citations). Networks have thus been used to analyze all kinds of social, biological and communication processes. Extracting information from networks is therefore a vital interdisciplinary task that has been emerging as a research area in itself, commonly known as Network Science BIB004 BIB003 . One very common and important methodology is to look at networks from a subgraph perspective, identifying their characteristic and recurrent connection patterns. For instance, network motif analysis has identified the feed-forward loop as a recurring and crucial functional pattern in many real biological networks, such as gene regulation and metabolic networks BIB001 BIB002 . Another example is the usage of graphlet-degree distributions to show that protein-protein interaction networks are more akin to geometric graphs than to traditional scale-free models. At the heart of these topologically rich approaches lies the subgraph counting problem, that is, the ability to compute subgraph frequencies. However, this is a very hard computational task. In fact, determining if one subgraph exists at all in another larger network (i.e., subgraph isomorphism) is an NP-Complete problem. Determining the exact frequency is even harder, and millions or even billions of subgraph occurrences are typically found even in relatively small networks. Given both its usefulness and hard tractability, subgraph counting has been raising a considerable amount of interest from the research community, with a large body of published literature. This survey aims precisely to organize and summarize these research results, providing a comprehensive overview of the field. Our main contributions are the following: • A comprehensive review of algorithms for exact subgraph counting. We give a structured historical perspective on algorithms for computing exact subgraph frequencies. We provide a complete overview table in which we employ a taxonomy that allows all algorithms to be classified on a set of key characteristics, highlighting their main similarities and differences. We also identify and describe the main conceptual ideas, giving insight into their main advantages and possible limitations. We also provide links to existing implementations, exposing which approaches are readily available. • A comprehensive review of algorithms for approximate subgraph counting. Given the hardness of the problem, many authors have resorted to approximation schemes, which allow trading some accuracy for faster execution times. As in the exact case, we provide historical context and links to implementations, and we give a classification and description of key properties, explaining how the existing approaches deal with the balance between precision and running time. • A comprehensive review of parallel subgraph counting methodologies. It is only natural that researchers have tried to harness the power of parallel architectures to provide scalable approaches that might decrease the needed computation time. As before, we provide a historical overview, coupled with a classification on a set of important aspects, such as the type of parallel platform or the availability of an implementation.
We also give particular attention to how the methodologies tackle the unbalanced nature of the search space. We complement this journey through the algorithmic strategies with a clear formal definition of the subgraph counting problem being discussed here, an overview of its applications, and a large number of references to related work that is not directly in the scope of this article. We believe that this survey provides the reader with an insightful and complete perspective on the field, both from a methodological and an application point of view. The remainder of this paper is structured as follows. Section 2 presents the necessary terminology, formally describes subgraph counting, and describes possible applications related to subgraph counting. Section 3 reviews exact algorithms, divided between full enumeration and analytical methods. Approximate algorithms are described in Section 4 and parallel strategies are presented in Section 5. Finally, in Section 6 we give our concluding remarks.
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Problem statement <s> Network motifs, patterns of local interconnections with potential functional properties, are important for the analysis of biological networks. To analyse motifs in networks the first step is to find patterns of interest. This paper presents 1) three different concepts for the determination of pattern frequency and 2) an algorithm to compute these frequencies. The different concepts of pattern frequency depend on the reuse of network elements. The presented algorithm finds all or highly frequent patterns under consideration of these concepts. The utility of this method is demonstrated by applying it to biological data. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Problem statement <s> Motivation: Small-induced subgraphs called graphlets are emerging as a possible tool for exploration of global and local structure of networks and for analysis of roles of individual nodes. One of the obstacles to their wider use is the computational complexity of algorithms for their discovery and counting. Results: We propose a new combinatorial method for counting graphlets and orbit signatures of network nodes. The algorithm builds a system of equations that connect counts of orbits from graphlets with up to five nodes, which allows to compute all orbit counts by enumerating just a single one. This reduces its practical time complexity in sparse graphs by an order of magnitude as compared with the existing pure enumeration-based algorithms. Availability and implementation: Source code is available freely at http://www.biolab.si/supp/orca/orca.html. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Problem statement <s> The complexity of the subgraph isomorphism problem where the pattern graph is of fixed size is well known to depend on the topology of the pattern graph. Here, we present two results which, in contrast, provide evidence that no topology of an induced subgraph of fixed size can be substantially easier to detect or count than an independent set of related size.We show that any fixed pattern graph having a maximum independent set of size k that is disjoint from other maximum independent sets is not easier to detect as an induced subgraph than an independent set of size k. It follows in particular that an induced path on 2 k - 1 vertices is not easier to detect than an independent set on k vertices, and that an induced cycle on 2k vertices is not easier to detect than an independent set on k vertices. In view of linear time upper bounds on the detection of induced path of length two and three, our lower bound is tight. Similar corollaries hold for the detection of induced complete bipartite graphs and an induced paw and its generalizations.We show also that for an arbitrary pattern graph H on k vertices with no isolated vertices, there is a simple subdivision of H, resulting from splitting each edge into a path of length four and attaching a distinct path of length three at each vertex of degree one, that is not easier to detect or count than an independent set on k vertices, respectively.Next, we show that the so-called diamond and its generalizations on k vertices are not easier to detect as induced subgraphs than an independent set on three vertices or an independent set on k vertices, respectively. 
For C 4 , we give a weaker evidence of its hardness in terms of an independent set on three vertices.Finally, we derive several results relating the complexity of the edge-colored variant of induced subgraph isomorphism to that of the standard variant. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Problem statement <s> BACKGROUND ::: Biological networks provide great potential to understand how cells function. Network motifs, frequent topological patterns, are key structures through which biological networks operate. Finding motifs in biological networks remains to be computationally challenging task as the size of the motif and the underlying network grow. Often, different copies of a given motif topology in a network share nodes or edges. Counting such overlapping copies introduces significant problems in motif identification. ::: ::: ::: RESULTS ::: In this paper, we develop a scalable algorithm for finding network motifs. Unlike most of the existing studies, our algorithm counts independent copies of each motif topology. We introduce a set of small patterns and prove that we can construct any larger pattern by joining those patterns iteratively. By iteratively joining already identified motifs with those patterns, our algorithm avoids (i) constructing topologies which do not exist in the target network (ii) repeatedly counting the frequency of the motifs generated in subsequent iterations. Our experiments on real and synthetic networks demonstrate that our method is significantly faster and more accurate than the existing methods including SUBDUE and FSG. ::: ::: ::: CONCLUSIONS ::: We conclude that our method for finding network motifs is scalable and computationally feasible for large motif sizes and a broad range of networks with different sizes and densities. We proved that any motif with four or more edges can be constructed as a join of the small patterns. <s> BIB004
Making use of the previous concepts and terminology, we now give a more formal definition of the problem tackled by this survey: given a set 𝒢 of non-isomorphic subgraphs and a graph G, determine the frequency of all induced matches in G of each subgraph Gs ∈ 𝒢. Two occurrences are considered different if they have at least one node or edge that they do not share. This problem is also known as subgraph census. In short, one wants to extract the occurrences of all subgraphs of a given size, or just a smaller set of "interesting" subgraphs, contained in a large graph G. Note how here the input is a single graph, in contrast with Frequent Subgraph Mining (FSM), where collections of graphs are more commonly used (differences between subgraph counting and FSM are discussed in Section 2.4.5). Approaches diverge on which subgraphs are counted in G. Network-centric methods extract all k-node occurrences in G and then assess each occurrence's isomorphic type. On the other end of the spectrum, subgraph-centric methods first pick an isomorphic class and then only count occurrences matching that class in G. Therefore, subgraph-centric methods are preferable to network-centric algorithms when only one or a few different subgraphs are to be counted. Set-centric approaches are middle-ground algorithms that take as input a set of interesting subgraphs and only count those in G. This work is mainly focused on network-centric algorithms, while not limited to them, since: (a) exploring all subgraphs offers the most information possible when applying subgraph counting to a real dataset, (b) hand-picking a set of interesting subgraphs might be hard or impossible and could be heavily dependent on our knowledge of the dataset, and (c) it is intrinsically the most general approach. It is obviously possible to use subgraph-centric methods to count all isomorphic classes, simply by executing the method once per isomorphic type. However, that option is only feasible for small subgraph sizes: larger k values produce too many subgraph types (see Table 1 ) and it is likely that a network only contains a small subset of them, meaning that the method would spend a considerable amount of time looking for features that do not exist, while network-centric methods always do useful work since they only count occurrences that actually appear in the network. Here we are mainly interested in algorithms that count induced subgraphs, but algorithms that count non-induced subgraphs are also considered. Counting one or the other is equivalent, since it is possible to obtain induced occurrences from non-induced occurrences, and vice-versa. However, we should note that, at the end of the counting process, induced occurrences need to be obtained by the algorithm; this penalizes non-induced subgraph counting algorithms, since the transformation is quadratic in the number of subgraphs BIB003 . Some algorithms count orbits instead of subgraphs BIB002 . However, counting orbits can be reduced to counting subgraphs and, therefore, these algorithms are also considered. We should note that we only consider the most common and well studied subgraph frequency definition, in which different occurrences may share a partial subset of nodes and edges, but there are other possible frequency concepts in which this overlap is explicitly disallowed BIB004 BIB001 .
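To make the network-centric strategy concrete, the sketch below is a minimal illustrative baseline (not one of the surveyed algorithms): for k = 3 on an undirected graph stored as a dictionary of adjacency sets, it enumerates every set of three nodes, discards the disconnected ones, and classifies each remaining induced occurrence by its isomorphic type. The function name and the toy graph are our own illustrative choices; its brute-force O(n³) enumeration is only viable for tiny graphs, whereas the surveyed algorithms avoid visiting disconnected node sets altogether.

```python
from itertools import combinations

def undirected_census_k3(adj):
    """Naive network-centric census of connected induced 3-node subgraphs.

    adj: dict mapping each node to the set of its neighbours (undirected).
    Returns the counts of the two connected 3-node classes:
    'path' (exactly 2 edges) and 'triangle' (3 edges).
    """
    counts = {"path": 0, "triangle": 0}
    for a, b, c in combinations(adj, 3):
        # number of edges among the three candidate nodes
        edges = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
        if edges == 2:
            counts["path"] += 1        # induced occurrence of the 3-path
        elif edges == 3:
            counts["triangle"] += 1    # induced occurrence of the triangle
        # 0 or 1 edge: the induced subgraph is disconnected, so it is skipped
    return counts

# toy example: a 4-cycle 1-2-3-4 with the extra chord 2-4
adj = {
    1: {2, 4},
    2: {1, 3, 4},
    3: {2, 4},
    4: {1, 2, 3},
}
print(undirected_census_k3(adj))  # {'path': 2, 'triangle': 2}
```

A subgraph-centric method would instead fix one of these classes (say, the triangle) and search only for its matches, which is why it wastes no effort on classification but must be rerun once per class of interest.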
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> For a pattern graph H on k nodes, we consider the problems of finding and counting the number of (not necessarily induced) copies of H in a given large graph G on n nodes, as well as finding minimum weight copies in both node-weighted and edge-weighted graphs. Our results include: The number of copies of an H with an independent set of size s can be computed exactly in O*(2s nk-s+3) time. A minimum weight copy of such an H (with arbitrary real weights on nodes and edges) can be found in O(4s+o(s) nk-s+3) time. (The O* notation omits (k) factors.) These algorithms rely on fast algorithms for computing the permanent of a k x n matrix, over rings and semirings. The number of copies of any H having minimum (or maximum) node-weight (with arbitrary real weights on nodes) can be found in O(nω k/3 + n2k/3+o(1)) time, where ω < 2.4 is the matrix multiplication exponent and k is divisible by 3. Similar results hold for other values of k. Also, the number of copies having exactly a prescribed weight can be found within this time. These algorithms extend the technique of Czumaj and Lingas (SODA 2007) and give a new (algorithmic) application of multiparty communication complexity. Finding an edge-weighted triangle of weight exactly 0 in general graphs requires Ω(n2.5-ε) time for all ε > 0, unless the 3SUM problem on N numbers can be solved in O(N2 - ε) time. This suggests that the edge-weighted problem is much harder than its node-weighted version. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> The problems studied in this article originate from the Graph Motif problem introduced by Lacroix et al. (IEEE/ACM Trans. Comput. Biol. Bioinform. 3(4):360---368, 2006) in the context of biological networks. The problem is to decide if a vertex-colored graph has a connected subgraph whose colors equal a given multiset of colors M. It is a graph pattern-matching problem variant, where the structure of the occurrence of the pattern is not of interest but the only requirement is the connectedness. Using an algebraic framework recently introduced by Koutis (Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5125, pp. 575---586, 2008) and Koutis and Williams (Proceedings of the 36th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5555, pp. 653---664, 2009), we obtain new FPT algorithms for Graph Motif and variants, with improved running times. We also obtain results on the counting versions of this problem, proving that the counting problem is FPT if M is a set, but becomes #W[1]-hard if M is a multiset with two colors. Finally, we present an experimental evaluation of this approach on real datasets, showing that its performance compares favorably with existing software. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> A great variety of systems in nature, society and technology -- from the web of sexual contacts to the Internet, from the nervous system to power grids -- can be modeled as graphs of vertices coupled by edges. 
The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via email, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Given a multiset of colors as the query and a list-colored graph, i.e., an undirected graph with a set of colors assigned to each of its vertices, in the NP-hard list-colored graph motif problem the goal is to find the largest connected subgraph such that one can select a color from the set of colors assigned to each of its vertices to obtain a subset of the query. This problem was introduced to find functional motifs in biological networks. We present a branch-and-bound algorithm named RANGI for finding and enumerating list-colored graph motifs. As our experimental results show, RANGI's pruning methods and heuristics make it quite fast in practice compared to the algorithms presented in the literature. We also present a parallel version of RANGI that achieves acceptable scalability. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Network motifs are small over represented patterns that have been used successfully to characterize complex networks. Current algorithmic approaches focus essentially on pure topology and disregard node and edge nature. However, it is often the case that nodes and edges can also be classified and separated into different classes. This kind of networks can be modeled by colored (or labeled) graphs. Here we present a definition of colored motifs and an algorithm for efficiently discovering them.We use g-tries, a specialized data-structure created for finding sets of subgraphs. G-Tries encapsulate common sub-structure, and with the aid of symmetry breaking conditions and a customized canonization methodology, we are able to efficiently search for several colored patterns at the same time. We apply our algorithm to a set of representative complex networks, showing that it can find colored motifs and outperform previous methods. 
<s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> We tackle the problem of counting the number qk of k-cliques in large-scale graphs, for any constant k ≥ 3. Clique counting is essential in a variety of applications, including social network analysis. Our algorithms make it possible to compute qk for several real-world graphs and shed light on its growth rate as a function of k. Even for small values of k, the number qk of k-cliques can be in the order of tens or hundreds of trillions. As k increases, different graph instances show different behaviors: while on some graphs qk + 1 Due to the computationally intensive nature of the clique counting problem, we settle for parallel solutions in the MapReduce framework, which has become in the last few years a de facto standard for batch processing of massive datasets. We give both theoretical and experimental contributions. On the theory side, we design the first exact scalable algorithm for counting (and listing) k-cliques in MapReduce. Our algorithm uses O(m3/2) total space and O(mk/2) work, where m is the number of graph edges. This matches the best-known bounds for triangle listing when k e 3 and is work optimal in the worst case for any k, while keeping the communication cost independent of k. We also design sampling-based estimators that can dramatically reduce the running time and space requirements of the exact approach, while providing very accurate solutions with high probability. We then assess the effectiveness of different clique counting approaches through an extensive experimental analysis over the Amazon EC2 platform, considering both our algorithms and their state-of-the-art competitors. The experimental results clearly highlight the algorithm of choice in different scenarios and prove our exact approach to be the most effective when the number of k-cliques is large, gracefully scaling to nontrivial values of k even on clusters of small/medium size. Our approximation algorithms achieve extremely accurate estimates and large speedups, especially on the toughest instances for the exact algorithms. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> With the growing amount of available temporal real-world network data, an important question is how to efficiently study these data. One can simply model a temporal network as either a single aggregate static network, or as a series of time-specific snapshots, each of which is an aggregate static network over the corresponding time window. The advantage of modeling the temporal data in these two ways is that one can use existing well established methods for static network analysis to study the resulting aggregate network(s). Here, we develop a novel approach for studying temporal network data more explicitly. We base our methodology on the well established notion of graphlets (subgraphs), which have been successfully used in numerous contexts in static network research. Here, we take the notion of static graphlets to the next level and develop new theory needed to allow for graphlet-based analysis of temporal networks. Our new notion of dynamic graphlets is quite different than existing approaches for dynamic network analysis that are based on temporal motifs (statistically significant subgraphs). Namely, these approaches suffer from many limitations. 
For example, they can only deal with subgraph structures of limited complexity. Also, their major drawback is that their results heavily depend on the choice of a null network model that is required to evaluate the significance of a subgraph. However, choosing an appropriate null network model is a non-trivial task. Our dynamic graphlet approach overcomes the limitations of the existing temporal motif-based approaches. At the same time, when we thoroughly evaluate the ability of our new approach to characterize the structure and function of an entire temporal network or of individual nodes, we find that the dynamic graphlet approach outperforms the static graphlet approach, which indicates that accounting for temporal information helps. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Determining the occurrence of motifs yields profound insight for many biological systems, like metabolic, protein-protein interaction, and protein structure networks. Meaningful spatial protein-structure motifs include enzyme active sites and ligand-binding sites which are essential for function, shape, and performance of an enzyme. Analyzing their dynamics over time leads to a better understanding of underlying properties and processes. In this work, we present StreaM, a stream-based algorithm for counting undirected 4-vertex motifs in dynamic graphs. We evaluate StreaM against the four predominant approaches from the current state of the art on generated and real-world datasets, a simulation of a highly dynamic enzyme. For this case, we show that StreaM is capable to capture essential molecular protein dynamics and thereby provides a powerful method for evaluating large molecular dynamics trajectories. Compared to related work, our approach achieves speedups of up to 2,300 times on real-world datasets. <s> BIB008 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> We study the problem of estimating the value of sums of the form \(S_p \triangleq \sum \left( {\begin{array}{c}x_i\\ p\end{array}}\right) \) when one has the ability to sample \(x_i \ge 0\) with probability proportional to its magnitude. When \(p=2\), this problem is equivalent to estimating the selectivity of a self-join query in database systems when one can sample rows randomly. We also study the special case when \(\{x_i\}\) is the degree sequence of a graph, which corresponds to counting the number of p-stars in a graph when one has the ability to sample edges randomly. Our algorithm for a \((1 \pm \varepsilon )\)-multiplicative approximation of \(S_p\) has query and time complexities \(\mathrm{O}\left( \frac{m \log \log n}{\epsilon ^2 S_p^{1/p}}\right) \). Here, \(m=\sum x_i/2\) is the number of edges in the graph, or equivalently, half the number of records in the database table. Similarly, n is the number of vertices in the graph and the number of unique values in the database table. We also provide tight lower bounds (up to polylogarithmic factors) in almost all cases, even when \(\{x_i\}\) is a degree sequence and one is allowed to use the structure of the graph to try to get a better estimate. We are not aware of any prior lower bounds on the problem of join selectivity estimation. For the graph problem, prior work which assumed the ability to sample only vertices uniformly gave algorithms with matching lower bounds (Gonen et al.
in SIAM J Comput 25:1365–1411, 2011). With the ability to sample edges randomly, we show that one can achieve faster algorithms for approximating the number of star subgraphs, bypassing the lower bounds in this prior work. For example, in the regime where \(S_p\le n\), and \(p=2\), our upper bound is \(\tilde{O}(n/S_p^{1/2})\), in contrast to their \(\varOmega (n/S_p^{1/3})\) lower bound when no random edge queries are available. In addition, we consider the problem of counting the number of directed paths of length two when the graph is directed. This problem is equivalent to estimating the selectivity of a join query between two distinct tables. We prove that the general version of this problem cannot be solved in sublinear time. However, when the ratio between in-degree and out-degree is bounded—or equivalently, when the ratio between the number of occurrences of values in the two columns being joined is bounded—we give a sublinear time algorithm via a reduction to the undirected case. <s> BIB009 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> In recent years, graphlet counting has emerged as an important task in topological graph analysis. However, the existing works on graphlet counting obtain the graphlet counts for the entire network as a whole. These works capture the key graphical patterns that prevail in a given network but they fail to meet the demand of the majority of real-life graph related prediction tasks such as link prediction, edge/node classification, etc., which require to build features for an edge (or a vertex) of a network. To meet the demand for such applications, efficient algorithms are needed for counting local graphlets within the context of an edge (or a vertex). In this work, we propose an efficient method, titled E-CLOG, for counting all 3,4 and 5 size local graphlets with the context of a given edge for its all different edge orbits. We also provide a shared-memory, multi-core implementation of E-CLOG, which makes it even more scalable for very large real-world networks. In particular, We obtain strong scaling on a variety of graphs (14x-20x on 36 cores). We provide extensive experimental results to demonstrate the efficiency and effectiveness of the proposed method. For instance, we show that E-CLOG is faster than existing work by multiple order of magnitudes; for the Wordnet graph E-CLOG counts all 3,4 and 5-size local graphlets in 1.5 hours using a single thread and in only a few minutes using the parallel implementation, whereas the baseline method does not finish in more than 4 days. We also show that local graphlet counts around an edge are much better features for link prediction than well-known topological features; our experiments show that the former enjoys between 10% to 45% of improvement in the AUC value for predicting future links in three real-life social and collaboration networks. <s> BIB010 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Networks are a fundamental tool for modeling complex systems in a variety of domains including social and communication networks as well as biology and neuroscience. The counts of small subgraph patterns in networks, called network motifs, are crucial to understanding the structure and function of these systems. 
However, the role of network motifs for temporal networks, which contain many timestamped links between nodes, is not well understood. Here we develop a notion of a temporal network motif as an elementary unit of temporal networks and provide a general methodology for counting such motifs. We define temporal network motifs as induced subgraphs on sequences of edges, design several fast algorithms for counting temporal network motifs, and prove their runtime complexity. We also show that our fast algorithms achieve 1.3x to 56.5x speedups compared to a baseline method. We use our algorithms to count temporal network motifs in a variety of real-world datasets. Results show that networks from different domains have significantly different motif frequencies, whereas networks from the same domain tend to have similar motif frequencies. We also find that measuring motif counts at various time scales reveals different behavior. <s> BIB011 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> In order to detect network motifs we need to evaluate the exceptionality of subgraphs in a given network. This is usually done by comparing subgraph frequencies on both the original and an ensemble of random networks keeping certain structural properties. The classical null model implies preserving the degree sequence. In this paper our focus is on a richer model that approximately fixes the frequency of subgraphs of size \(K - 1\) to compute motifs of size K. We propose a method for generating random graphs under this model, and we provide algorithms for its efficient computation. We show empirical results of our proposed methodology on neurobiological networks, showcasing its efficiency and its differences when comparing to the traditional null model. <s> BIB012 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Motivated by recent studies in the data mining community, we develop the most efficient parallel algorithm for listing all k-cliques in a graph. Our theoretical analysis shows that our algorithm boasts the best asymptotic upper bound on the running time for the case when the input graph is sparse. Our experimental evaluation on large real-world graphs demonstrates that our parallel algorithm is faster than state-of-the-art algorithms, while boasting an excellent degree of parallelism. In particular, we are able to list all k-cliques (for any value of k) in graphs containing up to tens of millions of edges as well as all 10-cliques in graphs containing billions of edges, within a few minutes and a few hours respectively. We show how it can be employed as an effective subroutine for finding the k-clique core decomposition and an approximate k-clique densest subgraphs in very large real-world graphs. <s> BIB013 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> The frequency of small subtrees in biological, social, and other types of networks could shed light into the structure, function, and evolution of such networks. However, counting all possible subtrees of a prescribed size can be computationally expensive because of their potentially large number even in small, sparse networks. 
Moreover, most of the existing algorithms for subtree counting belong to the subtree-centric approaches, which search for a specific single subtree type at a time, potentially taking more time by searching again on the same network. In this paper, we propose a network-centric algorithm (MTMO) to efficiently count k-size subtrees. Our algorithm is based on the enumeration of all connected sets of k–1 edges, incorporates a labeled rooted tree data structure in the enumeration process to reduce the number of isomorphism tests required, and uses an array-based indexing scheme to simplify the subtree counting method. The experiments on three representative undirected complex networks show that our algorithm is roughly an order of magnitude faster than existing subtree-centric approaches and base network-centric algorithm which does not use rooted tree, allowing for counting larger subtrees in larger networks than previously possible. We also show major differences between unicellular and multicellular organisms. In addition, our algorithm is applied to find network motifs based on pattern growth approach. A network-centric algorithm which allows for a faster counting of non-induced subtrees is proposed. This enables us to count larger motif in larger networks than previously. <s> BIB014 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> We consider the problem of counting motifs in bipartite affiliation networks, such as author-paper, user-product, and actor-movie relations. We focus on counting the number of occurrences of a "butterfly", a complete 2x2 biclique, the simplest cohesive higher-order structure in a bipartite graph. Our main contribution is a suite of randomized algorithms that can quickly approximate the number of butterflies in a graph with a provable guarantee on accuracy. An experimental evaluation on large real-world networks shows that our algorithms return accurate estimates within a few seconds, even for networks with trillions of butterflies and hundreds of millions of edges. <s> BIB015 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Network alignment (NA) compares networks with the goal of finding a node mapping that uncovers highly similar (conserved) network regions. Existing NA methods are homogeneous, i.e., they can deal only with networks containing nodes and edges of one type. Due to increasing amounts of heterogeneous network data with nodes or edges of different types, we extend three recent state-of-the-art homogeneous NA methods, WAVE, MAGNA++, and SANA, to allow for heterogeneous NA for the first time. We introduce several algorithmic novelties. Namely, these existing methods compute homogeneous graphlet-based node similarities and then find high-scoring alignments with respect to these similarities, while simultaneously maximizing the amount of conserved edges. Instead, we extend homogeneous graphlets to their heterogeneous counterparts, which we then use to develop a new measure of heterogeneous node similarity. Also, we extend $S^3$, a state-of-the-art measure of edge conservation for homogeneous NA, to its heterogeneous counterpart. Then, we find high-scoring alignments with respect to our heterogeneous node similarity and edge conservation measures. 
In evaluations on synthetic and real-world biological networks, our proposed heterogeneous NA methods lead to higher-quality alignments and better robustness to noise in the data than their homogeneous counterparts. The software and data from this work is available upon request. <s> BIB016 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Motif discovery is the problem of finding subgraphs of a network that appear surprisingly often. Each such subgraph may indicate a small-scale interaction feature in applications ranging from a genomic interaction network, a significant relationship involving rock musicians, or any other application that can be represented as a network. We look at the problem of constrained search for motifs based on labels (e.g. gene ontology, musician type to continue our example from above). This chapter presents a brief review of the state of the art in motif finding and then extends the gTrie data structure from Ribeiro and Silva (Data Min Knowl Discov 28(2):337–377, 2014b) to support labels. Experiments validate the usefulness of our structure for small subgraphs, showing that we recoup the cost of the index after only a handful of queries. <s> BIB017 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Given a set of temporal networks, from different domains and with different sizes, how can we compare them? Can we identify evolutionary patterns that are both (i) characteristic and (ii) meaningful? We address these challenges by introducing a novel temporal and topological network fingerprint named Graphlet-orbit Transitions (GoT). We demonstrate that GoT provides very rich and interpretable network characterizations. Our work puts forward an extension of graphlets and uses the notion of orbits to encapsulate the roles of nodes in each subgraph. We build a transition matrix that keeps track of the temporal trajectory of nodes in terms of their orbits, therefore describing their evolution. We also introduce a metric (OTA) to compare two networks when considering these matrices. Our experiments show that networks representing similar systems have characteristic orbit transitions. GoT correctly groups synthetic networks pertaining to well-known graph models more accurately than competing static and dynamic state-of-the-art approaches by over 30%. Furthermore, our tests on real-world networks show that GoT produces highly interpretable results, which we use to provide insight into characteristic orbit transitions. <s> BIB018 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> MOTIVATION ::: Graphlets are small network patterns that can be counted in order to characterise the structure of a network (topology). As part of a topology optimisation process, one could use graphlet counts to iteratively modify a network and keep track of the graphlet counts, in order to achieve certain topological properties. Up until now, however, graphlets were not suited as a metric for performing topology optimisation; when millions of minor changes are made to the network structure it becomes computationally intractable to recalculate all the graphlet counts for each of the edge modifications. 
::: ::: ::: RESULTS ::: IncGraph is a method for calculating the differences in graphlet counts with respect to the network in its previous state, which is much more efficient than calculating the graphlet occurrences from scratch at every edge modification made. In comparison to static counting approaches, our findings show IncGraph reduces the execution time by several orders of magnitude. The usefulness of this approach was demonstrated by developing a graphlet-based metric to optimise gene regulatory networks. IncGraph is able to quickly quantify the topological impact of small changes to a network, which opens novel research opportunities to study changes in topologies in evolving or online networks, or develop graphlet-based criteria for topology optimisation. ::: ::: ::: AVAILABILITY ::: IncGraph is freely available as an open-source R package on CRAN (incgraph). The development version is also available on GitHub (rcannood/incgraph). <s> BIB019 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Subgraph counting is a fundamental primitive in graph processing, with applications in social network analysis (e.g., estimating the clustering coefficient of a graph), database processing and other areas. The space complexity of subgraph counting has been studied extensively in the literature, but many natural settings are still not well understood. In this paper we revisit the subgraph (and hypergraph) counting problem in the sketching model, where the algorithm's state as it processes a stream of updates to the graph is a linear function of the stream. This model has recently received a lot of attention in the literature, and has become a standard model for solving dynamic graph streaming problems. In this paper we give a tight bound on the sketching complexity of counting the number of occurrences of a small subgraph $H$ in a bounded degree graph $G$ presented as a stream of edge updates. Specifically, we show that the space complexity of the problem is governed by the fractional vertex cover number of the graph $H$. Our subgraph counting algorithm implements a natural vertex sampling approach, with sampling probabilities governed by the vertex cover of $H$. Our main technical contribution lies in a new set of Fourier analytic tools that we develop to analyze multiplayer communication protocols in the simultaneous communication model, allowing us to prove a tight lower bound. We believe that our techniques are likely to find applications in other settings. Besides giving tight bounds for all graphs $H$, both our algorithm and lower bounds extend to the hypergraph setting, albeit with some loss in space complexity. <s> BIB020 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Many real-world applications give rise to large heterogeneous networks where nodes and edges can be of any arbitrary type (e.g., user, web page, location). Special cases of such heterogeneous graphs include homogeneous graphs, bipartite, k-partite, signed, labeled graphs, among many others. In this work, we generalize the notion of network motifs to heterogeneous networks. In particular, small induced typed subgraphs called typed graphlets (heterogeneous network motifs) are introduced and shown to be the fundamental building blocks of complex heterogeneous networks. 
Typed graphlets are a powerful generalization of the notion of graphlet (network motif) to heterogeneous networks as they capture both the induced subgraph of interest and the types associated with the nodes in the induced subgraph. To address this problem, we propose a fast, parallel, and space-efficient framework for counting typed graphlets in large networks. We discover the existence of non-trivial combinatorial relationships between lower-order ($k-1$)-node typed graphlets and leverage them for deriving many of the $k$-node typed graphlets in $o(1)$ constant time. Thus, we avoid explicit enumeration of those typed graphlets. Notably, the time complexity matches the best untyped graphlet counting algorithm. The experiments demonstrate the effectiveness of the proposed framework in terms of runtime, space-efficiency, parallel speedup, and scalability as it is able to handle large-scale networks. <s> BIB021 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> This paper proposes novel algorithms for efficiently counting complex network motifs in dynamic networks that are changing over time. Network motifs are small characteristic configurations of a few nodes and edges, and have repeatedly been shown to provide insightful information for understanding the meso-level structure of a network. Here, we deal with counting more complex temporal motifs in large-scale networks that may consist of millions of nodes and edges. The first contribution is an efficient approach to count temporal motifs in multilayer networks and networks with partial timing, two prevalent aspects of many real-world complex networks. We analyze the complexity of these algorithms and empirically validate their performance on a number of real-world user communication networks extracted from online knowledge exchange platforms. Among other things, we find that the multilayer aspects provide significant insights in how complex user interaction patterns differ substantially between online platforms. The second contribution is an analysis of the viability of motif counting algorithms for motifs that are larger than the triad motifs studied in previous work. We provide a novel categorization of motifs of size four, and determine how and at what computational cost these motifs can still be counted efficiently. In doing so, we delineate the “computational frontier” of temporal motif counting algorithms. <s> BIB022 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Biological networks provide great potential to understand how cells function. Motifs are topological patterns which are repeated frequently in a specific network. Network motifs are key structures through which biological networks operate. However, counting independent (i.e., non-overlapping) instances of a specific motif remains to be a computationally hard problem. Motif counting problem becomes computationally even harder for biological networks as biological interactions are uncertain events. The main challenge behind this problem is that different embeddings of a given motif in a network can share edges. Such edges can create complex computational dependencies between different instances of the given motif when considering uncertainty of those edges. 
In this paper, we develop a novel algorithm for counting independent instances of a specific motif topology in probabilistic biological networks. We present a novel mathematical model to capture the dependency between each embedding and all the other embeddings, which it overlaps with. We prove the correctness of this model. We evaluate our model on real and synthetic networks with different probability, and topology models as well as reasonable range of network sizes. Our results demonstrate that our method counts non-overlapping embeddings in practical time for a broad range of networks. <s> BIB023
In this work we focus on practical algorithms that are capable of counting all subgraphs of a given size. Therefore, algorithms that only target specific subgraphs are not considered (e.g., triads , cliques BIB013 BIB006 , stars BIB009 or subtrees BIB014 ). Furthermore, given our focus on generalizability, we do not consider algorithms that are only capable of counting subgraphs in specific types of graphs (e.g., bipartite networks BIB015 , trees ), or that only count local subgraphs BIB010 . The graphs used throughout this work are simple, have a single layer of connectivity and do not distinguish node or edge types with qualitative or quantitative features. Therefore, we do not discuss here algorithms that use colored nodes or edges BIB004 BIB002 BIB005 , nor those that consider networks that are heterogeneous BIB016 BIB021 , multilayer BIB022 , labelled/attributed BIB017 , probabilistic BIB023 or weighted in any way BIB001 . Finally, the networks we consider are static and do not change their topology. We should, however, note that there has been increasing interest in temporal networks, which evolve over time BIB003 . Some algorithms beyond the scope of this survey try to tackle temporal subgraph counting, either by considering temporal networks as a series of static snapshots BIB018 BIB007 , by timestamping edges BIB011 , or by considering a stream of small updates to the graph topology BIB019 BIB020 BIB008 BIB012 .
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Applications and Related Problems <s> We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small tree-width, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Applications and Related Problems <s> We report the current state of the graph isomorphism problem from the practical point of view. After describing the general principles of the refinement-individualization paradigm and proving its validity, we explain how it is implemented in several of the key programs. In particular, we bring the description of the best known program nauty up to date and describe an innovative approach called Traces that outperforms the competitors for many difficult graph classes. Detailed comparisons against saucy, Bliss and conauto are presented. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Applications and Related Problems <s> Determining the frequency of small subgraphs is an important computational task lying at the core of several graph mining methodologies, such as network motifs discovery or graphlet based measurements. In this paper we try to improve a class of algorithms available for this purpose, namely network-centric algorithms, which are based upon the enumeration of all sets of k connected nodes. Past approaches would essentially delay isomorphism tests until they had a finalized set of k nodes. In this paper we show how isomorphism testing can be done during the actual enumeration. We use a customized g-trie, a tree data structure, in order to encapsulate the topological information of the embedded subgraphs, identifying already known node permutations of the same subgraph type. With this we avoid redundancy and the need of an isomorphism test for each subgraph occurrence. We tested our algorithm, which we called FaSE, on a set of different real complex networks, both directed and undirected, showcasing that we indeed achieve significant speedups of at least one order of magnitude against past algorithms, paving the way for a faster network-centric approach. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Applications and Related Problems <s> The ability to find and count subgraphs of a given network is an important non trivial task with multidisciplinary applicability. Discovering network motifs or computing graphlet signatures are two examples of methodologies that at their core rely precisely on the subgraph counting problem. Here we present the g-trie, a data-structure specifically designed for discovering subgraph frequencies. We produce a tree that encapsulates the structure of the entire graph set, taking advantage of common topologies in the same way a prefix tree takes advantage of common prefixes. This avoids redundancy in the representation of the graphs, thus allowing for both memory and computation time savings. 
We introduce a specialized canonical labeling designed to highlight common substructures and annotate the g-trie with a set of conditional rules that break symmetries, avoiding repetitions in the computation. We introduce a novel algorithm that takes as input a set of small graphs and is able to efficiently find and count them as induced subgraphs of a larger network. We perform an extensive empirical evaluation of our algorithms, focusing on efficiency and scalability on a set of diversified complex networks. Results show that g-tries are able to clearly outperform previously existing algorithms by at least one order of magnitude. <s> BIB004
2.4.1 Subgraph Isomorphism. Given two graphs G and H, the subgraph isomorphism problem is the computational task of determining if G contains a subgraph isomorphic to H. Although efficient solutions exist for specific graph types (e.g., linear-time solutions exist for planar graphs BIB001 ), this is a known NP-Complete problem for general graphs, and it can be seen as a much simpler version of counting, that is, determining if the number of occurrences is bigger than zero. This task is closely related to the graph isomorphism problem [107, BIB002 ], that is, the task of determining if two given graphs are isomorphic. Since many subgraph counting approaches rely on finding the subgraphs contained in a large graph and then checking to which isomorphic class each subgraph found belongs, subgraph isomorphism can be seen as an integral part of them. The well-known and very fast nauty tool is used by several subgraph counting algorithms to assess the type of each subgraph found BIB003 BIB004 .
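The essence of assigning an occurrence to its isomorphic class is to compute a canonical form that is identical for all isomorphic node sets. The sketch below is a deliberately brute-force illustration of that idea (the function name and string encoding are our own assumptions, and this is emphatically not how nauty works internally, which relies on the far more efficient refinement-individualization paradigm): it tries every relabelling of the k nodes of an occurrence and keeps the lexicographically smallest adjacency-matrix encoding, which is only feasible for the very small k values typical of motif and graphlet analysis.

```python
from itertools import permutations

def canonical_form(nodes, adj):
    """Brute-force canonical labelling of the subgraph induced by `nodes`.

    adj: dict mapping each node to the set of its neighbours (undirected).
    Two isomorphic occurrences always map to the same encoding string,
    so the string can be used as the key of an isomorphism class.
    Runs in O(k!) over the k nodes, hence only usable for tiny subgraphs.
    """
    nodes = list(nodes)
    k = len(nodes)
    best = None
    for perm in permutations(nodes):
        # adjacency matrix of the induced subgraph under this relabelling
        bits = []
        for i in range(k):
            for j in range(k):
                bits.append("1" if perm[j] in adj[perm[i]] else "0")
        encoding = "".join(bits)
        if best is None or encoding < best:
            best = encoding
    return best

# two occurrences of the 3-node path receive the same canonical form
adj = {1: {2}, 2: {1, 3}, 3: {2}, 7: {9}, 8: {9}, 9: {7, 8}}
print(canonical_form([1, 2, 3], adj) == canonical_form([7, 9, 8], adj))  # True
```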
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> This chapter is part of a continuing research series and reports work that is collaborative in every respect. The order of our names on this and our previous reports is alphabetical. National Science Foundation Grants GS-39778 to Carnegie-Mellon University and GJ-1 154X2 to the National Bureau of Economic Research, Inc., provided financial support. We are grateful to James A. Davis, J. Richard Dietrich, and Christopher Winship for aid in conducting this research and to Richard Hill for computer programing. This chapter was written when Paul Holland was with the Computer Research Center for Economics and Management Science of the National Bureau of Economic Research, Inc. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> Triadic structure is an important, but neglected, aspect of interfirm networks. We developed the constructs clustering and countering as potential drivers of triadic structure and combined them with the recently developed p* network model to demonstrate the value and feasibility of triadic analysis. Exploratory analysis of data from the global steel industry revealed firms' tendency to form transitive triads, in which three firms all have direct ties with each other, especially within blocks defined by geography or technology. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> Modularity is known to be one of the most relevant characteristics of biological systems and appears to be present at multiple scales. Given its adaptive potential, it is often assumed to be the target of selective pressures. Under such interpretation, selection would be actively favouring the formation of modular structures, which would specialize in different functions. Here we show that, within the context of cellular networks, no such selection pressure is needed to obtain modularity. Instead, the intrinsic dynamics of network growth by duplication and diversification is able to generate it for free and explain the statistical features exhibited by small subgraphs. The implications for the evolution and evolvability of both biological and technological systems are discussed. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> Dyad and triad census summarize much of the network level structural information of a given directed network. They have been found very useful in analyzing structural properties of social networks. This study aims to explore crisis communication network by following dyad and triad census analysis approach to investigate the association of microlevel communication patterns with organizational crisis. This study further tests hypothesis related to the process of data generation and tendency of the structural pattern of transitivity using dyad and triad census output. The changing communication network at Enron Corporation during the period of its crisis is analyzed in this study. Significant differences in the presence of different isomorphism classes or microlevel patterns of both dyad and triad census are noticed in crisis and non-crisis period network of Enron email corpus. It is also noticed that crisis communication network shows more transitivity compared to the non-crisis communication network. 
<s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> The social role of a participant in a social system conceptualizes the circumstances under which she chooses to interact with others, making their discovery and analysis important for theoretical and practical purposes. In this paper, we propose a methodology to detect such roles by utilizing the conditional triad censuses of ego-networks. These censuses are a promising tool for social role extraction because they capture the degree to which basic social forces push upon a user to interact with others in a system. Clusters of triad censuses, inferred from network samples that preserve local structural properties, define the social roles. The approach is demonstrated on two large online interaction networks. <s> BIB005
Frequencies. The small patterns found in large graphs can offer insights about the networks. By considering the frequencies of all k-subgraphs, we obtain a very powerful and rich feature vector that characterizes the network. There has been a long tradition of using the triad census in the analysis of social networks , and triads have been used as early as the 1970s to describe local structure BIB001 . Examples of applications in this field include studying social capital features such as brokerage and closure , discovering social roles BIB005 , assessing the effect of individual psychological differences on network structure , or characterizing communication BIB004 and social networks . Given the ubiquity of graphs, these frequencies have also been used in many other domains, such as in biological BIB003 , transportation or interfirm networks BIB002 .
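One simple way to turn such counts into a comparable fingerprint, sketched below, is to normalize the raw class counts into relative frequencies and then measure a distance between two networks. This assumes the counts are already available from some counting algorithm; both the normalization and the L1 distance are illustrative choices on our part, not a specific method prescribed by the surveyed works.

```python
def relative_frequencies(counts):
    """Turn raw subgraph-class counts into a relative-frequency feature vector."""
    total = sum(counts.values())
    return {cls: c / total for cls, c in counts.items()} if total else counts

def census_distance(counts_a, counts_b):
    """L1 distance between the relative-frequency vectors of two networks,
    so that graphs of very different sizes can still be compared."""
    fa, fb = relative_frequencies(counts_a), relative_frequencies(counts_b)
    classes = set(fa) | set(fb)
    return sum(abs(fa.get(c, 0.0) - fb.get(c, 0.0)) for c in classes)

# illustrative 3-subgraph censuses of a star-like and a clique-like network
star_counts   = {"path": 3, "triangle": 0}
clique_counts = {"path": 0, "triangle": 4}
print(census_distance(star_counts, clique_counts))  # 2.0 (maximally different)
```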
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Complex brains have evolved a highly efficient network architecture whose structural connectivity is capable of generating a large repertoire of functional states. We detect characteristic network building blocks (structural and functional motifs) in neuroanatomical data sets and identify a small set of structural motifs that occur in significantly increased numbers. Our analysis suggests the hypothesis that brain networks maximize both the number and the diversity of functional motifs, while the repertoire of structural motifs remains small. Using functional motif number as a cost function in an optimization algorithm, we obtain network topologies that resemble real brain networks across a broad spectrum of structural measures, including small-world attributes. These results are consistent with the hypothesis that highly evolved neural architectures are organized to maximize functional repertoires and to support highly efficient integration of information. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Genes and proteins generate molecular circuitry that enables the cell to process information and respond to stimuli. A major challenge is to identify characteristic patterns in this network of interactions that may shed light on basic cellular mechanisms. Previous studies have analyzed aspects of this network, concentrating on either transcription-regulation or protein-protein interactions. Here we search for composite network motifs: characteristic network patterns consisting of both transcription-regulation and protein-protein interactions that recur significantly more often than in random networks. To this end we developed algorithms for detecting motifs in networks with two or more types of interactions and applied them to an integrated data set of protein-protein interactions and transcription regulation in Saccharomyces cerevisiae. We found a two-protein mixed-feedback loop motif, five types of three-protein motifs exhibiting coregulation and complex formation, and many motifs involving four proteins. Virtually all four-protein motifs consisted of combinations of smaller motifs. This study presents a basic framework for detecting the building blocks of networks with multiple types of interactions. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Summary: Biological and engineered networks have recently been shown to display network motifs: a small set of characteristic patterns that occur much more frequently than in randomized networks with the same degree sequence. Network motifs were demonstrated to play key information processing roles in biological regulation networks. Existing algorithms for detecting network motifs act by exhaustively enumerating all subgraphs with a given number of nodes in the network. The runtime of such algorithms increases strongly with network size. Here, we present a novel algorithm that allows estimation of subgraph concentrations and detection of network motifs at a runtime that is asymptotically independent of the network size. This algorithm is based on random sampling of subgraphs. Network motifs are detected with a surprisingly small number of samples in a wide variety of networks. 
Our method can be applied to estimate the concentrations of larger subgraphs in larger networks than was previously possible with exhaustive enumeration algorithms. We present results for high-order motifs in several biological networks and discuss their possible functions. ::: ::: Availability: A software tool for estimating subgraph concentrations and detecting network motifs (mfinder 1.1) and further information is available at http://www.weizmann.ac.il/mcb/UriAlon/ <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> There are two common approaches to food webs. On the one hand, empirical studies have described aggregate statistical measures of many-species food webs. On the other hand, theoretical studies have explored the dynamic properties of simple tri-trophic food chains (i.e., trophic modules). The question remains to what extent results based on simple modules are relevant for whole food webs. Here we bridge between these two independent research agendas by exploring the relative frequency of different trophic modules in the five most resolved food webs. While apparent competition and intraguild predation are overrepresented when compared to a suite of null models, the frequency of omnivory highly varies across communities. Inferences about the representation of modules may also depend on the null model used for statistical significance. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Complex networks in both nature and technology have been shown to display characteristic, small subgraphs so-called motifs which appear to be related to their underlying functionality. All these networks share a common trait: they manipulate information at different scales in order to perform some kind of computation. Here we analyze a large set of software class diagrams and show that several highly frequent network motifs appear to be a consequence of network heterogeneity and size, thus suggesting a somewhat less relevant role of functionality. However, by using a simple model of network growth by duplication and rewiring, it is shown the rules of graph evolution seem to be largely responsible for the observed motif distribution. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Getting and analyzing biological interaction networks is at the core of systems biology. To help understanding these complex networks, many recent works have suggested to focus on motifs which occur more frequently than expected in random. To identify such exceptional motifs in a given network, we propose a statistical and analytical method which does not require any simulation. For this, we first provide an analytical expression of the mean and variance of the count under any exchangeable random graph model. Then we approximate the motif count distribution by a compound Poisson distribution whose parameters are derived from the mean and variance of the count. Thanks to simulations, we show that the compound Poisson approximation outperforms the Gaussian approximation. The compound Poisson distribution can then be used to get an approximate p-value and to decide if an observed count is significantly high or not. 
Our methodology is applied on protein-protein interaction (PPI) networks, and statistical issues related to exceptional motif detection are discussed. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Various methods have been recently employed to characterise the structure of biological networks. In particular, the concept of network motif and the related one of coloured motif have proven useful to model the notion of a functional/evolutionary building block. However, algorithms that enumerate all the motifs of a network may produce a very large output, and methods to decide which motifs should be selected for downstream analysis are needed. A widely used method is to assess if the motif is exceptional, that is, over- or under-represented with respect to a null hypothesis. Much effort has been put in the last thirty years to derive -values for the frequencies of topological motifs, that is, fixed subgraphs. They rely either on (compound) Poisson and Gaussian approximations for the motif count distribution in Erdös-Rényi random graphs or on simulations in other models. We focus on a different definition of graph motifs that corresponds to coloured motifs. A coloured motif is a connected subgraph with fixed vertex colours but unspecified topology. Our work is the first analytical attempt to assess the exceptionality of coloured motifs in networks without any simulation. We first establish analytical formulae for the mean and the variance of the count of a coloured motif in an Erdös-Rényi random graph model. Using simulations under this model, we further show that a Pólya-Aeppli distribution better approximates the distribution of the motif count compared to Gaussian or Poisson distributions. The Pólya-Aeppli distribution, and more generally the compound Poisson distributions, are indeed well designed to model counts of clumping events. Altogether, these results enable to derive a -value for a coloured motif, without spending time on simulations. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> In recent years, interest has been growing in the study of complex networks. Since Erdös and Rényi (1960) proposed their random graph model about 50 years ago, many researchers have investigated and shaped this field. Many indicators have been proposed to assess the global features of networks. Recently, an active research area has developed in studying local features named motifs as the building blocks of networks. Unfortunately, network motif discovery is a computationally hard problem and finding rather large motifs (larger than 8 nodes) by means of current algorithms is impractical as it demands too much computational effort. In this paper, we present a new algorithm (MODA) that incorporates techniques such as a pattern growth approach for extracting larger motifs efficiently. We have tested our algorithm and found it able to identify larger motifs with more than 8 nodes more efficiently than most of the current state-of-the-art motif discovery algorithms. While most of the algorithms rely on induced subgraphs as motifs of the networks, MODA is able to extract both induced and non-induced subgraphs simultaneously. 
The MODA source code is freely available at: http://LBB.ut.ac.ir/Download/LBBsoft/MODA/ <s> BIB008 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Network motifs are statistically overrepresented sub-structures (sub-graphs) in a network, and have been recognized as ‘the simple building blocks of complex networks’. Study of biological network motifs may reveal answers to many important biological questions. The main difficulty in detecting larger network motifs in biological networks lies in the facts that the number of possible sub-graphs increases exponentially with the network or motif size (node counts, in general), and that no known polynomial-time algorithm exists in deciding if two graphs are topologically equivalent. This article discusses the biological significance of network motifs, the motivation behind solving the motif-finding problem, and strategies to solve the various aspects of this problem. A simple classification scheme is designed to analyze the strengths and weaknesses of several existing algorithms. Experimental results derived from a few comparative studies in the literature are discussed, with conclusions that lead to future research directions. <s> BIB009 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> We study complex networks in which the nodes are tagged with different colors depending on their function (colored graphs), using information theory applied to the distribution of motifs in such networks. We find that colored motifs can be viewed as the building blocks of the networks (much more than the uncolored structural motifs can be) and that the relative frequency with which these motifs appear in the network can be used to define its information content. This information is defined in such a way that a network with random coloration (but keeping the relative number of nodes with different colors the same) has zero color information content. Thus, colored motif information captures the exceptionality of coloring in the motifs that is maintained via selection. We study the motif information content of the C. elegans brain as well as the evolution of colored motif information in networks that reflect the interaction between instructions in genomes of digital life organisms. While we find that colored motif information appears to capture essential functionality in the C. elegans brain (where the color assignment of nodes is straightforward), it is not obvious whether the colored motif information content always increases during evolution, as would be expected from a measure that captures network complexity. For a single choice of color assignment of instructions in the digital life form Avida, we find rather that colored motif information content increases or decreases during evolution, depending on how the genomes are organized, and therefore could be an interesting tool to dissect genomic rearrangements. <s> BIB010 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Comparing scientific production across different fields of knowledge is commonly controversial and subject to disagreement. Such comparisons are often based on quantitative indicators, such as papers per researcher, and data normalization is very difficult to accomplish. 
Different approaches can provide new insight and in this paper we focus on the comparison of different scientific fields based on their research collaboration networks. We use co-authorship networks where nodes are researchers and the edges show the existing co-authorship relations between them. Our comparison methodology is based on network motifs, which are over represented patterns, or sub graphs. We derive motif fingerprints for 22 scientific fields based on 29 different small motifs found in the corresponding co-authorship networks. These fingerprints provide a metric for assessing similarity among scientific fields, and our analysis shows that the discrimination power of the 29 motif types is not identical. We use a co-authorship dataset built from over 15,361 publications inducing a co-authorship network with over 32,842 researchers. Our results also show that we can group different fields according to their fingerprints, supporting the notion that some fields present higher similarity and can be more easily compared. <s> BIB011 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> A motif in a network is a connected graph that occurs significantly more frequently as an induced subgraph than would be expected in a similar randomized network. By virtue of being atypical, it is thought that motifs might play a more important role than arbitrary subgraphs. Recently, a flurry of advances in the study of network motifs has created demand for faster computational means for identifying motifs in increasingly larger networks. Motif detection is typically performed by enumerating subgraphs in an input network and in an ensemble of comparison networks; this poses a significant computational problem. Classifying the subgraphs encountered, for instance, is typically performed using a graph canonical labeling package, such as Nauty, and will typically be called billions of times. In this article, we describe an implementation of a network motif detection package, which we call NetMODE. NetMODE can only perform motif detection for -node subgraphs when , but does so without the use of Nauty. To avoid using Nauty, NetMODE has an initial pretreatment phase, where -node graph data is stored in memory (). For we take a novel approach, which relates to the Reconstruction Conjecture for directed graphs. We find that NetMODE can perform up to around times faster than its predecessors when and up to around times faster when (the exact improvement varies considerably). NetMODE also (a) includes a method for generating comparison graphs uniformly at random, (b) can interface with external packages (e.g. R), and (c) can utilize multi-core architectures. NetMODE is available from netmode.sf.net. <s> BIB012 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Unexpectedly frequent sub graphs, known as motifs, can help in characterizing the structure of complex networks. Most of the existing methods for finding motifs are designed for unweighted networks, where only the existence of connection between nodes is considered, and not their strength or capacity. However, in many real world networks, edges contain more information than just simple node connectivity. In this paper, we propose a new method to incorporate edge weight information in motif mining. 
We think of a motif as a sub graph that contains unexpected information, and we define a new significance measurement to assess this sub graph exceptionality. The proposed metric embeds the weight distribution in sub graphs and it is based on weight entropy. We use the g-trie data structure to find instances of $k$-sized sub graphs and to calculate its significance score. Following a statistical approach, the random entropy of sub graphs is then calculated, avoiding the time consuming step of random network generation. The discrimination power of the derived motif profile by the proposed method is assessed against the results of the traditional unweighted motifs through a graph classification problem. We use a set of labeled ego networks of co-authorship in the biology and mathematics fields, The new proposed method is shown to be feasible, achieving even slightly better accuracy. Furthermore, we are able to be quicker by not having to generate random networks, and we are able to use the weight information in computing the motif importance, avoiding the need for converting weighted networks into unweighted ones. <s> BIB013 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Complex networks facilitate the understanding of natural and man-made processes and are classified based on the concepts they model: biological, technological, social or semantic. The relevant subgraphs in these networks, called network motifs, are demonstrated to show core aspects of network functionality. They are used to classify complex networks based on that functionality. We propose a novel approach of classifying complex networks based on their topological aspects using motifs. We define the classifiers for regular, random, small-world and scale-free topologies, as well as apply this classification on empirical networks. The study brings a new perspective on how we can classify and differentiate online social networks like Facebook, Twitter and Google Plus based on the distribution of network motifs over the fundamental network topology classes. <s> BIB014 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Many real world networks contain a statistically surprising number of certain subgraphs, called network motifs. In the prevalent approach to motif analysis, network motifs are detected by comparing subgraph frequencies in the original network with a statistical null model. In this paper we propose an alternative approach to motif analysis where network motifs are defined to be connectivity patterns that occur in a subgraph cover that represents the network using minimal total information. A subgraph cover is defined to be a set of subgraphs such that every edge of the graph is contained in at least one of the subgraphs in the cover. Some recently introduced random graph models that can incorporate significant densities of motifs have natural formulations in terms of subgraph covers and the presented approach can be used to match networks with such models. To prove the practical value of our approach we also present a heuristic for the resulting NP-hard optimization problem and give results for several real world networks. <s> BIB015 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. 
<s> Network motif discovery is the problem of finding subgraphs of a network that occur more frequently than expected, according to some reasonable null hypothesis. Such subgraphs may indicate small scale interaction features in genomic interaction networks or intriguing relationships involving actors or a relationship among airlines. When nodes are labeled, they can carry information such as the genomic entity under study or the dominant genre of an actor. For that reason, labeled subgraphs convey information beyond structure and could therefore enjoy more applications. To identify statistically significant motifs in a given network, we propose an analytical method (i.e. simulation-free) that extends the works of Picard et al. (J Comput Biol 15(1):1---20, 2008) and Schbath et al. (J Bioinform Syst Biol 2009(1):616234, 2009) to label-dependent scale-free graph models. We provide an analytical expression of the mean and variance of the count under the Expected Degree Distribution random graph model. Our model deals with both induced and non-induced motifs. We have tested our methodology on a wide set of graphs ranging from protein---protein interaction networks to movie networks. The analytical model is a fast (usually faster by orders of magnitude) alternative to simulation. This advantage increases as graphs grow in size. <s> BIB016 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> We introduce a new learning method for network motifs: interesting or informative subgraph patterns in a network. Current methods for finding motifs rely on the frequency of the motif: specifically, subgraphs are motifs when their frequency in the data is high compared to the expected frequency under a null model. To compute this expectation, the search for motifs is normally repeated on as many as 1000 random graphs sampled from the null model, a prohibitively expensive step. We use ideas from the Minimum Description Length (MDL) literature to define a new measure of motif relevance. This has several advantages: the subgraph count on samples from the null model can be eliminated, and the search for motif candidates within the data itself can be greatly simplified. Our method allows motif analysis to scale to networks with billions of links, provided that a fast null model is used. <s> BIB017 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Software homology plays an important role in intellectual property protection, malware analysis, and network attack traceback. Among many methods proposed by researchers, the structure-based method has been proved to have better detection and anti-obfuscation capabilities, but it is inefficiency on space-time complexity and difficult to be applied to large-scale software homology analysis. In this paper, we propose a parallel method to extract function call graph from source codes, and a new software structure information comparison algorithm. The approach transforms function call graph into the corresponding motifs as the features of the software, and calculates homology score by the algorithm which is quick and accurate for large-scale software based on software motifs. According to experiments on large-scale source codes, binary executable files and obfuscated software, the accuracy of homology detection is 90.00% for non-obfuscated software and 80.00% for obfuscated software. 
<s> BIB018 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Co-regulatory networks, which consist of transcription factors (TFs), micro ribose nucleic acids (miRNAs), and target genes, have provided new insight into biological processes, revealing complicated and comprehensive regulatory relationships between biomolecules. To uncover the key co-regulatory mechanisms between these biomolecules, the identification of co-regulatory motifs has become beneficial. However, due to high-computational complexity, it is a hard task to identify co-regulatory network motifs with more than four interacting nodes in large-scale co-regulatory networks. To overcome this limitation, we propose an efficient algorithm, named large co-regulatory network motif (LCNM), to detect large co-regulatory network motifs. This algorithm is able to store a set of co-regulatory network motifs within a G-tries structure. Moreover, we propose two ways to generate candidate motifs. For three- or four-interacting-node motifs, LCNM is able to generate all different types of motif through an enumeration method. For larger network motifs, we adopt a sampling method to generate candidate co-regulatory motifs. The experimental results demonstrate that LCNM cannot only improve the computational performance in exhaustive identification of all of the three- or four-node motifs but can also identify co-regulatory network motifs with a maximum of eight nodes. In addition, we implement a parallel version of our LCNM algorithm to further accelerate the motif detection process. <s> BIB019 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Networks are powerful representation of topological features in biological systems like protein interaction and gene regulation. In order to understand the design principles of such complex networks, the concept of network motifs emerged. Network motifs are recurrent patterns with statistical significance that can be seen as basic building blocks of complex networks. Identification of network motifs leads to many important applications, such as understanding the modularity and the large-scale structure of biological networks, classification of networks into super-families, protein function annotation, etc. However, identification of network motifs is challenging as it involves graph isomorphism which is computationally hard. Though this problem has been studied extensively in the literature using different computational approaches, we are far from satisfactory results. Motivated by the challenges involved in this field, an efficient and scalable network Motif Discovery algorithm based on Expansion Tree (MODET) is proposed. Pattern growth approach is used in this proposed motif-centric algorithm. Each node of the expansion tree represents a non-isomorphic pattern. The embeddings corresponding to a child node of the expansion tree are obtained from the embeddings of the parent node through vertex addition and edge addition. Further, the proposed algorithm does not involve any graph isomorphism check and the time complexities of these processes are O(n) and O(1), respectively.
The proposed algorithm has been tested on Protein-Protein Interaction (PPI) network obtained from the MINT database. The computational efficiency of the proposed algorithm outperforms most of the existing network motif discovery algorithms. <s> BIB020 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> For scale-free networks with degrees following a power law with an exponent $\tau\in(2,3)$, the structures of motifs (small subgraphs) are not yet well understood. We introduce a method designed to identify the dominant structure of any given motif as the solution of an optimization problem. The unique optimizer describes the degrees of the vertices that together span the most likely motif, resulting in explicit asymptotic formulas for the motif count and its fluctuations. We then classify all motifs into two categories: motifs with small and large fluctuations. <s> BIB021 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Network motifs provide an enlightening insight into uncovering the structural design principles of complex networks across multifarious disciplines, such as physics, biology, social science, engineering, and military science. Measures for network motifs play an indispensable role in the procedures of motif measurement and evaluation which are crucial steps in motif detection, counting, and clustering. However, there is a relatively small body of literature concerned with measures for network motifs. In this paper, we review the measures for network motifs in two categories: structural measures and statistical measures. The application scenarios for each measure and the distinctions of measures in similar scenarios are also summarized. We also conclude the challenges for using these measures and put forward some future directions on this topic. Overall, the objective of this survey is to provide an overview of motif measures, which is anticipated to shed light on the theory and practice of complex networks. <s> BIB022
A subgraph is considered a network motif if it is somehow exceptional. Instead of simply using a frequency vector, motif-based approaches construct a significance profile that associates an importance with each subgraph, typically related to how overrepresented it is. This concept first appeared in 2002, when motifs were defined as subgraphs that occur more often than expected when compared against a null model. The most common null model keeps the degree sequence of the original network, and with it we can obtain characteristic network fingerprints that have been shown to be very rich and capable of classifying networks into distinct superfamilies. Network motif analysis has since been used in a vast range of applications, such as the analysis of biological networks (e.g., brain BIB001, regulation and protein interaction BIB002, or food webs BIB004), social networks (e.g., co-authorship BIB011 or online social networks BIB014), sports analytics (e.g., football passing), or software networks (e.g., software architecture BIB005 or function-call graphs BIB018). In order to compute the significance profile of motifs in a graph G, most conceptual approaches rely on generating a large set R(G) of similar randomized networks that serves as the desired null model. Thus, subgraph counting needs to be performed both on the original network and on the set of randomized networks. If the frequency of a subgraph S is significantly larger in G than its average frequency in R(G), we can consider S to be a network motif of G BIB003. Other approaches try to avoid the exhaustive generation of random networks, and thus also avoid counting subgraphs on them, by following a more analytical approach capable of providing estimations of the expected frequencies (e.g., using an expected degree model BIB016 BIB006 BIB007 or a scale-free model BIB021). Nevertheless, there is always the need to count subgraphs in the original network. While network motifs usually concern induced subgraph occurrences BIB009, some motif algorithms count non-induced occurrences instead BIB012 BIB008. Moreover, although most network motif usages assume the previously mentioned statistical view of significance as overrepresentation, there are other possible approaches BIB022, such as using information-theoretic concepts (e.g., motifs based on entropy BIB010 BIB013, subgraph covers BIB015, or minimum description length BIB017). We should also note that some approaches try to better navigate the space of "interesting" subgraphs, so that larger motif sizes can be reached not by searching all possible larger k-subgraphs, but by leveraging computations of smaller motifs BIB019 BIB020. Finally, several authors use the term motif to refer simply to small subgraphs, even when no significance value beyond the raw frequency in the original network is implied.
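The sketch below illustrates this null-model procedure for a single subgraph type (the triangle) in an undirected network, using degree-preserving edge swaps to build the randomized ensemble and reporting a z-score. It is only a minimal example of the general idea: real motif tools count all k-subgraph types at once, and the number of randomizations and swaps used here are arbitrary choices.

```python
import networkx as nx
import statistics

def triangle_count(G):
    # nx.triangles counts, per node, the triangles it belongs to;
    # every triangle is therefore counted three times.
    return sum(nx.triangles(G).values()) // 3

def triangle_zscore(G, n_random=100):
    """Z-score of the triangle count of G against a degree-preserving
    null model (randomized by double edge swaps)."""
    observed = triangle_count(G)
    random_counts = []
    for _ in range(n_random):
        R = G.copy()
        # Each double edge swap preserves the degree sequence.
        nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                            max_tries=100 * R.number_of_edges())
        random_counts.append(triangle_count(R))
    mean = statistics.mean(random_counts)
    std = statistics.pstdev(random_counts)
    return (observed - mean) / std if std > 0 else float("inf")
```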
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Important biological information is encoded in the topology of biological networks. Comparative analyses of biological networks are proving to be valuable, as they can lead to transfer of knowledge between species and give deeper insights into biological function, disease, and evolution. We introduce a new method that uses the Hungarian algorithm to produce optimal global alignment between two networks using any cost function. We design a cost function based solely on network topology and use it in our network alignment. Our method can be applied to any two networks, not just biological ones, since it is based only on network topology. We use our new method to align protein-protein interaction networks of two eukaryotic species and demonstrate that our alignment exposes large and topologically complex regions of network similarity. At the same time, our alignment is biologically valid, since many of the aligned protein pairs perform the same biological function. From the alignment, we predict function of yet unannotated proteins, many of which we validate in the literature. Also, we apply our method to find topological similarities between metabolic networks of different species and build phylogenetic trees based on our network alignment score. The phylogenetic trees obtained in this way bear a striking resemblance to the ones obtained by sequence alignments. Our method detects topologically similar regions in large networks that are statistically significant. It does this independent of protein sequence or any other information external to network topology. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Sequence comparison and alignment has had an enormous impact on our understanding of evolution, biology and disease. Comparison and alignment of biological networks will probably have a similar impact. Existing network alignments use information external to the networks, such as sequence, because no good algorithm for purely topological alignment has yet been devised. In this paper, we present a novel algorithm based solely on network topology, that can be used to align any two networks. We apply it to biological networks to produce by far the most complete topological alignments of biological networks to date. We demonstrate that both species phylogeny and detailed biological function of individual proteins can be extracted from our alignments. Topology-based alignments have the potential to provide a completely new, independent source of phylogenetic information. Our alignment of the protein-protein interaction networks of two very different species-yeast and human-indicate that even distant species share a surprising amount of network topology, suggesting broad similarities in internal cellular wiring across all life on Earth. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Motivation: High-throughput methods for detecting molecular interactions have produced large sets of biological network data with much more yet to come. Analogous to sequence alignment, efficient and reliable network alignment methods are expected to improve our understanding of biological systems. Unlike sequence alignment, network alignment is computationally intractable. 
Hence, devising efficient network alignment heuristics is currently a foremost challenge in computational biology. ::: ::: Results: We introduce a novel network alignment algorithm, called Matching-based Integrative GRAph ALigner (MI-GRAAL), which can integrate any number and type of similarity measures between network nodes (e.g. proteins), including, but not limited to, any topological network similarity measure, sequence similarity, functional similarity and structural similarity. Hence, we resolve the ties in similarity measures and find a combination of similarity measures yielding the largest contiguous (i.e. connected) and biologically sound alignments. MI-GRAAL exposes the largest functional, connected regions of protein–protein interaction (PPI) network similarity to date: surprisingly, it reveals that 77.7% of proteins in the baker's yeast high-confidence PPI network participate in such a subnetwork that is fully contained in the human high-confidence PPI network. This is the first demonstration that species as diverse as yeast and human contain so large, continuous regions of global network similarity. We apply MI-GRAAL's alignments to predict functions of un-annotated proteins in yeast, human and bacteria validating our predictions in the literature. Furthermore, using network alignment scores for PPI networks of different herpes viruses, we reconstruct their phylogenetic relationship. This is the first time that phylogeny is exactly reconstructed from purely topological alignments of PPI networks. ::: ::: Availability: Supplementary files and MI-GRAAL executables: http://bio-nets.doc.ic.ac.uk/MI-GRAAL/. ::: ::: Contact: [email protected] ::: ::: Supplementary information:Supplementary data are available at Bioinformatics online. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Network alignment can be used to transfer functional knowledge between conserved regions of different networks. Existing methods use a node cost function (NCF) to compare nodes across networks and an alignment strategy (AS) to find high-scoring alignments with respect to total NCF over all aligned nodes (or node conservation). Then, they evaluate alignments via a measure that is different than node conservation used to guide alignment construction. Typically, one measures edge conservation, but only after alignments are produced. Hence, we recently directly maximized edge conservation while constructing alignments, which improved their quality. Here, we aim to maximize both node and edge conservation during alignment construction to further improve quality. We design a novel measure of edge conservation that (unlike existing measures that treat each conserved edge the same) weighs conserved edges to favor edges with highly NCF-similar end-nodes. As a result, we introduce a novel AS, Weighted Alignment VotEr (WAVE), which can optimize any measures of node and edge conservation. Using WAVE on top of well-established NCFs improves alignments compared to existing methods that optimize only node or edge conservation or treat each conserved edge the same. We evaluate WAVE on biological data, but it is applicable in any domain. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Motivation: Discovering and understanding patterns in networks of protein–protein interactions (PPIs) is a central problem in systems biology. 
Alignments between these networks aid functional understanding as they uncover important information, such as evolutionary conserved pathways, protein complexes and functional orthologs. A few methods have been proposed for global PPI network alignments, but because of NP-completeness of underlying sub-graph isomorphism problem, producing topologically and biologically accurate alignments remains a challenge. ::: ::: Results: We introduce a novel global network alignment tool, Lagrangian GRAphlet-based ALigner (L-GRAAL), which directly optimizes both the protein and the interaction functional conservations, using a novel alignment search heuristic based on integer programming and Lagrangian relaxation. We compare L-GRAAL with the state-of-the-art network aligners on the largest available PPI networks from BioGRID and observe that L-GRAAL uncovers the largest common sub-graphs between the networks, as measured by edge-correctness and symmetric sub-structures scores, which allow transferring more functional information across networks. We assess the biological quality of the protein mappings using the semantic similarity of their Gene Ontology annotations and observe that L-GRAAL best uncovers functionally conserved proteins. Furthermore, we introduce for the first time a measure of the semantic similarity of the mapped interactions and show that L-GRAAL also uncovers best functionally conserved interactions. In addition, we illustrate on the PPI networks of baker's yeast and human the ability of L-GRAAL to predict new PPIs. Finally, L-GRAAL's results are the first to show that topological information is more important than sequence information for uncovering functionally conserved interactions. ::: ::: Availability and implementation: L-GRAAL is coded in C++. Software is available at: http://bio-nets.doc.ic.ac.uk/L-GRAAL/. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at Bioinformatics online. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Motivation ::: Network alignment (NA) finds conserved regions between two networks. NA methods optimize node conservation (NC) and edge conservation (EC). Dynamic graphlet degree vectors (DGDVs) are a state-of-the-art dynamic NC measure, used within the fastest and most accurante NA method for temporal networks: DynaWAVE. Here, we use graphlet-orbit transitions (GoTs), a different graphlet-based measure of temporal node similarity, as a new dynamic NC measure within DynaWAVE, resulting in GoT-WAVE. ::: ::: ::: Results ::: On synthetic networks, GoT-WAVE improves DynaWAVE's accuracy by 30% and speed by 64%. On real networks, when optimizing only dynamic NC, the methods are complementary. Furthermore, only GoT-WAVE supports directed edges. Hence, GoT-WAVE is a promising new temporal NA algorithm, which efficiently optimizes dynamic NC.We provide a user-friendly user interface and source code for GoT-WAVE. ::: ::: ::: Availability and implementation ::: http://www.dcc.fc.up.pt/got-wave/. <s> BIB006
Orbit-Aware Approaches and Network Alignment. When authors use the term graphlet, they commonly take orbits into consideration and use metrics such as the graphlet-degree distribution (GDD, see details in section 2.1), a concept that appeared in 2007. In this way, graphlet algorithms count how many times each node appears in each orbit. Unlike motifs, graphlets do not usually require a null model (i.e., networks are directly compared through their respective GDDs). These orbit-aware distributions can be used to compare networks; for instance, they have been used to show that protein interaction networks are more akin to random geometric graphs than to traditional scale-free networks. Moreover, they can also be used to compare nodes (through their graphlet-degree vectors). This makes them useful for network alignment tasks, where one needs to establish topological similarity between nodes from different networks BIB001. Several graphlet-based network alignment algorithms have been proposed and shown to work very well for aligning biological networks BIB006 BIB002 BIB003 BIB005 BIB004.
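As a minimal illustration of a graphlet-degree vector, the sketch below computes, for every node of an undirected graph, the counts of the four orbits of graphlets with up to 3 nodes; the full GDV used in practice typically covers graphlets up to 5 nodes (73 orbits). The orbit numbering follows the usual convention, but the function name and the closed-form counting are our own simplification.

```python
import networkx as nx

def gdv3(G):
    """Per-node graphlet-degree vector restricted to graphlets of up to
    3 nodes. Orbits: 0 = edge endpoint (degree), 1 = end of an induced
    3-node path, 2 = middle of an induced 3-node path, 3 = triangle node."""
    tri = nx.triangles(G)
    gdv = {}
    for v in G.nodes():
        d = G.degree(v)
        o3 = tri[v]
        o2 = d * (d - 1) // 2 - o3                    # non-adjacent neighbour pairs
        o1 = sum(G.degree(u) - 1 for u in G.neighbors(v)) - 2 * o3
        gdv[v] = (d, o1, o2, o3)
    return gdv

# Example: in a 4-cycle every node has GDV (2, 2, 1, 0).
print(gdv3(nx.cycle_graph(4)))
```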
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Frequent Subgraph Mining (FSM) <s> We investigate new approaches for frequent graph-based pattern mining in graph datasets and propose a novel algorithm called gSpan (graph-based substructure pattern mining), which discovers frequent substructures without candidate generation. gSpan builds a new lexicographic order among graphs, and maps each graph to a unique minimum DFS code as its canonical label. Based on this lexicographic order gSpan adopts the depth-first search strategy to mine frequent connected subgraphs efficiently. Our performance study shows that gSpan substantially outperforms previous algorithms, sometimes by an order of magnitude. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Frequent Subgraph Mining (FSM) <s> Frequent subgraph mining is an active research topic in the data mining community. A graph is a general model to represent data and has been used in many domains like cheminformatics and bioinformatics. Mining patterns from graph databases is challenging since graph related operations, such as subgraph testing, generally have higher time complexity than the corresponding operations on itemsets, sequences, and trees, which have been studied extensively. We propose a novel frequent subgraph mining algorithm: FFSM, which employs a vertical search scheme within an algebraic graph framework we have developed to reduce the number of redundant candidates proposed. Our empirical study on synthetic and real datasets demonstrates that FFSM achieves a substantial performance gain over the current start-of-the-art subgraph mining algorithm gSpan. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Frequent Subgraph Mining (FSM) <s> Graph mining is an important research area within the domain of data mining. The field of study concentrates on the identification of frequent subgraphs within graph data sets. The research goals are directed at: (i) effective mechanisms for generating candidate subgraphs (without generating duplicates) and (ii) how best to process the generated candidate subgraphs so as to identify the desired frequent subgraphs in a way that is computationally efficient and procedurally effective. This paper presents a survey of current research in the field of frequent subgraph mining and proposes solutions to address the main research issues. <s> BIB003
FSM algorithms find subgraphs that have a support higher than a given threshold. The most prevalent branch of FSM takes as input a collection of networks and finds which subgraphs appear in a large number of them, which is referred to as graph-transaction-based FSM BIB003. These algorithms BIB002 BIB001 rely heavily on the Downward Closure Property (DCP) to efficiently prune the search space. Algorithms for subgraph counting, the focus of this survey, cannot, in general, rely on the DCP, since it is not possible to know whether growing an infrequent k-node subgraph will result in a frequent (k+1)-node subgraph or not. Furthermore, we are not only interested in frequent subgraphs but in all of them, since rare subgraphs can also give information about the network's topology. A less prominent branch of FSM, single-graph-based FSM, targets frequent subgraphs in a single large network, much like our subgraph counting problem. However, these methods adopt particular support metrics under which the DCP holds, which, as stated previously, is not the case in the general subgraph counting problem BIB003.
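To make the transaction-based notion of support concrete, the sketch below counts in how many graphs of a toy database a pattern occurs as an induced subgraph, using networkx's subgraph-isomorphism matcher. This brute-force check is purely illustrative; real FSM miners such as gSpan avoid repeated isomorphism tests through canonical codes and DCP-based pruning.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def support(pattern, graph_db):
    """Number of graphs in graph_db that contain `pattern` as an
    induced subgraph (graph-transaction support)."""
    count = 0
    for G in graph_db:
        matcher = isomorphism.GraphMatcher(G, pattern)
        if matcher.subgraph_is_isomorphic():
            count += 1
    return count

# Toy database: the triangle appears in 2 of the 3 graphs.
db = [nx.cycle_graph(3), nx.path_graph(4), nx.complete_graph(4)]
print(support(nx.cycle_graph(3), db))  # 2
```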
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Complex networks from domains like Biology or Sociology are present in many e-Science data sets. Dealing with networks can often form a workflow bottleneck as several related algorithms are computationally hard. One example is detecting characteristic patterns or "network motifs" - a problem involving subgraph mining and graph isomorphism. This paper provides a review and runtime comparison of current motif detection algorithms in the field. We present the strategies and the corresponding algorithms in pseudo-code yielding a framework for comparison. We categorize the algorithms outlining the main differences and advantages of each strategy. We finally implement all strategies in a common platform to allow a fair and objective efficiency comparison using a set of benchmark networks. We hope to inform the choice of strategy and critically discuss future improvements in motif detection. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motifs are statistically overrepresented sub-structures (sub-graphs) in a network, and have been recognized as ‘the simple building blocks of complex networks’. Study of biological network motifs may reveal answers to many important biological questions. The main difficulty in detecting larger network motifs in biological networks lies in the facts that the number of possible sub-graphs increases exponentially with the network or motif size (node counts, in general), and that no known polynomial-time algorithm exists in deciding if two graphs are topologically equivalent. This article discusses the biological significance of network motifs, the motivation behind solving the motif-finding problem, and strategies to solve the various aspects of this problem. A simple classification scheme is designed to analyze the strengths and weaknesses of several existing algorithms. Experimental results derived from a few comparative studies in the literature are discussed, with conclusions that lead to future research directions. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> In recent years, there has been a great interest in studying different aspects of complex networks in a range of fields. One important local property of networks is network motifs, recurrent and statistically significant sub-graphs or patterns, which assists researchers in the identification of functional units in the networks. Although network motifs may provide a deep insight into the network's functional abilities, their detection is computationally challenging. Therefore several algorithms have been introduced to resolve this computationally hard problem. These algorithms can be classified under various paradigms such as exact counting methods, sampling methods, pattern growth methods and so on. Here, the authors will give a review on computational aspects of major algorithms and enumerate their related benefits and drawbacks from an algorithmic perspective. 
<s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motif is defined as a frequent and unique subgraph pattern in a network, and the search involves counting all the possible instances or listing all patterns, testing isomorphism known as NP-hard and large amounts of repeated processes for statistical evaluation. Although many efficient algorithms have been introduced, exhaustive search methods are still infeasible and feasible approximation methods are yet implausible. Additionally, the fast and continual growth of biological networks makes the problem more challenging. As a consequence, parallel algorithms have been developed and distributed computing has been tested in the cloud computing environment as well. In this paper, we survey current algorithms for network motif detection and existing software tools. Then, we show that some methods have been utilized for parallel network motif search algorithms with static or dynamic load balancing techniques. With the advent of cloud computing services, network motif search has been implemented with MapReduce in Hadoop Distributed File System (HDFS), and with Storm, but without statistical testing. In this paper, we survey network motif search algorithms in general, including existing parallel methods as well as cloud computing based search, and show the promising potentials for the cloud computing based motif search methods. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motif detection is the search for statistically overrepresented subgraphs present in a larger target network. They are thought to represent key structure and control mechanisms. Although the problem is exponential in nature, several algorithms and tools have been developed for efficiently detecting network motifs. This work analyzes 11 network motif detection tools and algorithms. Detailed comparisons and insightful directions for using these tools and algorithms are discussed. Key aspects of network motif detection are investigated. Network motif types and common network motifs as well as their biological functions are discussed. Applications of network motifs are also presented. Finally, the challenges, future improvements and future research directions for network motif detection are also discussed. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Counting and enumeration of local topological structures, such as triangles, is an important task for analyzing large real-life networks. For instance, triangle count in a network is used to compute transitivity—an important property for understanding graph evolution over time. Triangles are also used for various other tasks completed for real-life networks, including community discovery, link prediction, and spam filtering. The task of triangle counting, though simple, has gained wide attention in recent years from the data mining community. This is due to the fact that most of the existing algorithms for counting triangles do not scale well to very large networks with millions (or even billions) of vertices. To circumvent this limitation, researchers proposed triangle counting methods that approximate the count or run on distributed clusters. 
In this paper, we discuss the existing methods of triangle counting, ranging from sequential to parallel, single-machine to distributed, exact to approximate, and off-line to streaming. We also present experimental results of performance comparison among a set of approximate triangle counting methods built under a unified implementation framework. Finally, we conclude with a discussion of future works in this direction. ::: ::: For further resources related to this article, please visit the WIREs website. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motifs provide an enlightening insight into uncovering the structural design principles of complex networks across multifarious disciplines, such as physics, biology, social science, engineering, and military science. Measures for network motifs play an indispensable role in the procedures of motif measurement and evaluation which are crucial steps in motif detection, counting, and clustering. However, there is a relatively small body of literature concerned with measures for network motifs. In this paper, we review the measures for network motifs in two categories: structural measures and statistical measures. The application scenarios for each measure and the distinctions of measures in similar scenarios are also summarized. We also conclude the challenges for using these measures and put forward some future directions on this topic. Overall, the objective of this survey is to provide an overview of motif measures, which is anticipated to shed light on the theory and practice of complex networks. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motifs are the building blocks of complex networks. Studying these frequently occurring patterns disclose a lot of information about these networks. The applications of Network motifs are very much evident now-a-days, in almost every field including biological networks, World Wide Web (WWW), etc. Some of the important motifs are feed forward loops, bi-fan, bi-parallel, fully connected triads. But, discovering these motifs is a computationally challenging task. In this paper, various techniques that are used to discover motifs are presented, along with detailed discussions on several issues and challenges in this area. <s> BIB008
To the best of our knowledge, there is no other work comparable to this survey in terms of scope, thoroughness and recency. Most existing surveys that deal with subgraph counting are directly related to network motif discovery. Some of them date from before 2015 and therefore predate many of the most recent algorithmic advances BIB004 BIB003 BIB001 BIB005 BIB002, and all of them present only a small subset of the strategies discussed here. There are more recent review papers, but they all differ from our work and have a much smaller scope. Al Hasan and Dave BIB006 only consider triangle counting, Xia et al. BIB007 focus mainly on significance metrics, and finally, while we present a structured overview of more than 50 exact, approximate and parallel algorithmic approaches, Jain and Patgiri BIB008 present a much simpler description of 5 different algorithms.
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Classical methods. <s> Motifs in a network are small connected subnetworks that occur in significantly higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Kashtan et al. [Bioinformatics, 2004] proposed a sampling algorithm for efficiently performing the computationally challenging task of detecting network motifs. However, among other drawbacks, this algorithm suffers from sampling bias and is only efficient when the motifs are small (3 or 4 nodes). Based on a detailed analysis of the previous algorithm, we present a new algorithm for network motif detection which overcomes these drawbacks. Experiments on a testbed of biological networks show our algorithm to be orders of magnitude faster than previous approaches. This allows for the detection of larger motifs in bigger networks than was previously possible, facilitating deeper insight into the field. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Classical methods. <s> Network motifs are small connected sub-graphs occurring at significantly higher frequencies in a given graph compared with random graphs of similar degree distribution. Recently, network motifs have attracted attention as a tool to study networks microscopic details. The commonly used algorithm for counting small-scale motifs is the one developed by Milo et al. This algorithm is extremely costly in CPU time and actually cannot work on large networks, consisting of more than 100,000 edges on current CPUs. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Classical methods. <s> BackgroundComplex networks are studied across many fields of science and are particularly important to understand biological processes. Motifs in networks are small connected sub-graphs that occur significantly in higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Existing algorithms for finding network motifs are extremely costly in CPU time and memory consumption and have practically restrictions on the size of motifs.ResultsWe present a new algorithm (Kavosh), for finding k-size network motifs with less memory and CPU time in comparison to other existing algorithms. Our algorithm is based on counting all k-size sub-graphs of a given graph (directed or undirected). We evaluated our algorithm on biological networks of E. coli and S. cereviciae, and also on non-biological networks: a social and an electronic network.ConclusionThe efficiency of our algorithm is demonstrated by comparing the obtained results with three well-known motif finding tools. For comparison, the CPU time, memory usage and the similarities of obtained motifs are considered. Besides, Kavosh can be employed for finding motifs of size greater than eight, while most of the other algorithms have restriction on motifs with size greater than eight. The Kavosh source code and help files are freely available at: http://Lbb.ut.ac.ir/Download/LBBsoft/Kavosh/. <s> BIB003
In their seminal work, Milo et al. first defined the concept of network motif and also proposed MFinder, an algorithm to count subgraphs. MFinder is a recursive backtracking algorithm that is applied to each edge of the network. A given edge is initially stored in a set S, which is recursively grown using edges that are not in S but share one endpoint with at least one edge in S. When |S| = k, the algorithm checks whether the subgraph induced by S has been found for the first time by keeping a hash table of subgraphs already found. If the subgraph is reached for the first time, the algorithm categorizes it and updates the hash table (otherwise, the subgraph is ignored). Another very important work, by Wernicke BIB001 , proposed a new algorithm called ESU, also known as FANMOD due to the graphical tool that uses ESU as its core algorithm. This algorithm greatly improved on MFinder by never counting the same subgraph twice, thus avoiding the need to store all subgraphs in a hash table. ESU applies the same recursive method to each vertex v of the input graph G: it uses two sets V_S and V_E, which are initially set as V_S = {v} and V_E = N(v). Then, for each vertex u in V_E, it removes u from V_E and makes V_S = V_S ∪ {u}, effectively adding it to the subgraph being enumerated, and extends V_E with the exclusive neighborhood N_exc(u, V_S) (the neighbors of u that are neither in V_S nor adjacent to V_S), restricted to vertices whose label is greater than L(v), where v is the original vertex added to V_S. The N_exc here makes sure we only grow the list of possibilities with vertices not already in V_S, and the condition L(u) > L(v) is used to break symmetries, consequently preventing any subgraph from being found twice. This process is repeated until V_S has k elements, which means V_S contains a single occurrence of a k-subgraph. At the end of the process, ESU performs isomorphism tests to assess the category of each subgraph occurrence, which is a considerable bottleneck. Itzhack et al. BIB002 proposed a new algorithm that is able to count subgraphs using constant memory (in relation to the size of the input graph). Itzhack et al. did not name their algorithm, so we will refer to it as Itzhack from here on. Itzhack avoids explicitly computing the isomorphism class of each counted subgraph by caching it for each different adjacency matrix, seen as a bitstring. This strategy only works for subgraphs of k up to 5, since it would use too much memory for higher values. Additionally, the enumeration algorithm is also different from ESU. This method is based on counting all subgraphs that include a certain vertex, then removing that vertex from the network and repeating the same procedure for the remaining vertices. For each vertex v, the algorithm first considers the tree composed of the k-neighborhood of v, that is, a tree of all vertices at a distance of k − 1 or less from v. This is very similar to the tree obtained from performing a breadth-first search starting on v, with the difference that vertices that appear on previous levels of the tree are excluded if visited again. This tree can be traversed in a way that avoids actually creating it by following neighbors, thus using only constant memory. To perform the actual search, the method uses the concept of counting patterns, which are different combinatorial ways of choosing vertices from different levels of the tree.
For instance, if we are searching for 3-subgraphs, and considering that at the tree root level we can only have one vertex, we could have the combinations with pattern 1-2 (one vertex at root level 0, two vertices at level 1) or with pattern 1-1-1 (one vertex at root level 0, one at level 1 and one at level 2). In an analogous way, 4-subgraphs would lead to patterns 1-1-1-1, 1-1-2, 1-2-1 and 1-3. Itzhack et al. claimed that Itzhack is over 1,000 times faster than ESU; however, the author of ESU disputed this claim in , stating that the experimental setup was faulty and that Itzhack is only slightly faster than ESU (its speedup could be attributed mainly to the caching procedure). Kashani et al. BIB003 proposed a new algorithm called Kavosh. Like ESU and Itzhack, the core idea of Kavosh is to find all subgraphs that include a particular vertex, then remove that vertex and continue from there iteratively. Its functioning is very similar to that of Itzhack: it builds an implicit breadth-first search tree and then uses a concept similar to the counting patterns used by Itzhack. However, it is a more general method since it does not perform any caching of isomorphism information, allowing the enumeration of larger subgraphs.
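To make the ESU enumeration scheme described above more concrete, below is a minimal Python sketch (our own illustrative rendering, not the FANMOD implementation): vertex labels are taken to be integers so that L(u) > L(v) simply becomes u > v, the graph is a plain dictionary of neighbor sets, and the isomorphism/categorization step that ESU would still need (e.g., via nauty) is deliberately left out.

```python
def esu_enumerate(adj, k):
    """Enumerate every connected set of k vertices exactly once (ESU-style).

    adj: dict mapping each vertex (an integer label) to a set of its neighbors.
    Yields frozensets of k vertices; isomorphism classification is left out.
    """
    def extend(v_sub, v_ext, v):
        if len(v_sub) == k:
            yield frozenset(v_sub)
            return
        while v_ext:
            u = v_ext.pop()
            # Exclusive neighbors of u: larger label than the root v, not in
            # V_S, and not adjacent to any vertex already in V_S.
            excl = {w for w in adj[u]
                    if w > v and w not in v_sub
                    and all(w not in adj[s] for s in v_sub)}
            yield from extend(v_sub | {u}, v_ext | excl, v)

    for v in adj:
        # Start from v; only neighbors with a larger label may extend it.
        yield from extend({v}, {u for u in adj[v] if u > v}, v)

if __name__ == "__main__":
    # Toy undirected graph: a square on vertices 0..3 plus the diagonal 0-2.
    adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    subsets = list(esu_enumerate(adj, 3))
    print(len(subsets), "connected 3-subgraphs:", subsets)
```

On this small example the sketch yields each of the four connected 3-vertex sets exactly once, which is precisely the property that lets ESU avoid MFinder's hash table of already-found subgraphs.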
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.1.2 <s> The study of biological networks and network motifs can yield significant new insights into systems biology. Previous methods of discovering network motifs - network-centric subgraph enumeration and sampling - have been limited to motifs of 6 to 8 nodes, revealing only the smallest network components. New methods are necessary to identify larger network sub-structures and functional motifs. ::: ::: Here we present a novel algorithm for discovering large network motifs that achieves these goals, based on a novel symmetry-breaking technique, which eliminates repeated isomorphism testing, leading to an exponential speed-up over previous methods. This technique is made possible by reversing the traditional network-based search at the heart of the algorithm to a motif-based search, which also eliminates the need to store all motifs of a given size and enables parallelization and scaling. Additionally, our method enables us to study the clustering properties of discovered motifs, revealing even larger network elements. ::: ::: We apply this algorithm to the protein-protein interaction network and transcription regulatory network of S. cerevisiae, and discover several large network motifs, which were previously inaccessible to existing methods, including a 29-node cluster of 15-node motifs corresponding to the key transcription machinery of S. cerevisiae. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.1.2 <s> Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.1.2 <s> Subgraph matching algorithms are used to find and enumerate specific interconnection structures in networks. By enumerating these specific structures/subgraphs, the fundamental properties of the network can be derived. More specifically in biological networks, subgraph matching algorithms are used to discover network motifs, specific patterns occurring more often than expected by chance. Finding these network motifs yields information on the underlying biological relations modelled by the network. In this work, we present the Index-based Subgraph Matching Algorithm with General Symmetries (ISMAGS), an improved version of the Index-based Subgraph Matching Algorithm (ISMA). 
ISMA quickly finds all instances of a predefined motif in a network by intelligently exploring the search space and taking into account easily identifiable symmetric structures. However, more complex symmetries (possibly involving switching multiple nodes) are not taken into account, resulting in superfluous output. ISMAGS overcomes this problem by using a customised symmetry analysis phase to detect all symmetric structures in the network motif subgraphs. These structures are then converted to symmetry-breaking constraints used to prune the search space and speed up calculations. The performance of the algorithm was tested on several types of networks (biological, social and computer networks) for various subgraphs with a varying degree of symmetry. For subgraphs with complex (multi-node) symmetric structures, high speed-up factors are obtained as the search space is pruned by the symmetry-breaking constraints. For subgraphs with no or simple symmetric structures, ISMAGS still reduces computation times by optimising set operations. Moreover, the calculated list of subgraph instances is minimal as it contains no instances that differ by only a subgraph symmetry. An implementation of the algorithm is freely available at https://github.com/mhoubraken/ISMAGS. <s> BIB003
Single-subgraph-search methods. The idea that it is possible to obtain a very efficient method for counting a single subgraph category was first noted by Grochow and Kellis BIB001 . Their base method consists of a backtracking algorithm that is applied to each vertex. It tries to build a partial mapping from the input graph to the target subgraph (the subgraph it is trying to count) by building all possible assignments based on the number of neighbours. Grochow and Kellis also suggested an improvement based on symmetry breaking, using the automorphisms of the target subgraph to build a set of conditions, of the form L(a) < L(b), that prevent the same subgraph from being counted multiple times. This symmetry breaking idea allowed for considerable improvements in runtime, especially for higher values of k. Grochow and Kellis did not name their algorithm, so we will refer to it as the Grochow algorithm from here on. Koskas et al. presented a new algorithm which they called NeMo. This method draws some ideas from Grochow, since it performs a backtracking-based search with symmetry breaking in a similar fashion. However, instead of using conditions on vertex labels, it finds the orbits of the target subgraph and forces an ordering between the labels of the vertices from the input graph that match vertices of the target subgraph in the same orbit. Additionally, it uses a few heuristics to prune the search early, such as ordering the vertices of the target graph such that, for all 1 ≤ i ≤ k, its first i vertices are connected. ISMAGS, which is based on its predecessor ISMA BIB002 , was proposed by Houbraken et al. BIB003 . The base idea of this method is similar to the one in Grochow; however, the authors use a clever node ordering and other heuristics to speed up the partial mapping procedure. Additionally, their symmetry breaking conditions are significantly improved by applying several heuristic techniques based on group theory.
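As a rough sketch of the backtracking-with-symmetry-breaking idea behind Grochow-style single-subgraph search, the following Python snippet counts the non-induced occurrences of one fixed pattern; the pattern encoding, the helper name count_pattern and the manually supplied constraints are illustrative assumptions rather than the actual procedure of any of the cited tools. Each constraint plays the role of the L(a) < L(b) rules above: for a triangle, forcing the three mapped labels to be increasing is enough to count every occurrence exactly once.

```python
def count_pattern(adj, pattern_adj, constraints):
    """Count non-induced occurrences of a small pattern in an undirected graph.

    adj: dict vertex -> set of neighbors (host graph).
    pattern_adj: list of sets; pattern_adj[i] holds the pattern-neighbors of i.
    constraints: list of (a, b) pairs meaning the host vertex mapped to pattern
                 vertex a must have a smaller label than the one mapped to b
                 (symmetry-breaking conditions).
    """
    k = len(pattern_adj)
    vertices = sorted(adj)

    def backtrack(mapping):
        i = len(mapping)                      # next pattern vertex to map
        if i == k:
            return 1
        total = 0
        for v in vertices:
            if v in mapping:
                continue
            # Every already-mapped pattern-neighbor of i must be a host neighbor of v.
            if any(j < i and mapping[j] not in adj[v] for j in pattern_adj[i]):
                continue
            # Symmetry-breaking conditions involving already-mapped vertices.
            ok = True
            for a, b in constraints:
                if a == i and b < i and not (v < mapping[b]):
                    ok = False
                elif b == i and a < i and not (mapping[a] < v):
                    ok = False
            if ok:
                total += backtrack(mapping + [v])
        return total

    return backtrack([])

if __name__ == "__main__":
    # Host graph: square 0-1-2-3-0 plus the diagonal 0-2 (two triangles).
    adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    triangle = [{1, 2}, {0, 2}, {0, 1}]
    # For a triangle, requiring increasing labels breaks all of its symmetries.
    print(count_pattern(adj, triangle, [(0, 1), (1, 2)]))  # -> 2
```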
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> We begin by tracing the history of the Reconstruction Conjecture (RC) for graphs. After describing the RC as the problem of reconstructing a graph G from a given deck of cards, each containing just one point-deleted subgraph of G, we proceed to derive information about G which is deducible from this deck. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> We consider the size and structure of the automorphism groups of a variety of empirical 'real-world' networks and find that, in contrast to classical random graph models, many real-world networks are richly symmetric. We construct a practical network automorphism group decomposition, relate automorphism group structure to network topology and discuss generic forms of symmetry and their origin in real-world networks. We also comment on how symmetry can affect network redundancy and robustness. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> In this paper we propose a novel specialized data structure that we call g-trie, designed to deal with collections of subgraphs. The main conceptual idea is akin to a prefix tree in the sense that we take advantage of common topology by constructing a multiway tree where the descendants of a node share a common substructure. We give algorithms to construct a g-trie, to list all stored subgraphs, and to find occurrences on another graph of the subgraphs stored in the g-trie. We evaluate the implementation of this structure and its associated algorithms on a set of representative benchmark biological networks in order to find network motifs. To assess the efficiency of our algorithms we compare their performance with other known network motif algorithms also implemented in the same common platform. Our results show that indeed, g-tries are a feasible, adequate and very efficient data structure for network motifs discovery, clearly outperforming previous algorithms and data structures. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> A motif in a network is a connected graph that occurs significantly more frequently as an induced subgraph than would be expected in a similar randomized network. By virtue of being atypical, it is thought that motifs might play a more important role than arbitrary subgraphs. Recently, a flurry of advances in the study of network motifs has created demand for faster computational means for identifying motifs in increasingly larger networks. Motif detection is typically performed by enumerating subgraphs in an input network and in an ensemble of comparison networks; this poses a significant computational problem. Classifying the subgraphs encountered, for instance, is typically performed using a graph canonical labeling package, such as Nauty, and will typically be called billions of times. In this article, we describe an implementation of a network motif detection package, which we call NetMODE. NetMODE can only perform motif detection for -node subgraphs when , but does so without the use of Nauty. To avoid using Nauty, NetMODE has an initial pretreatment phase, where -node graph data is stored in memory (). 
For we take a novel approach, which relates to the Reconstruction Conjecture for directed graphs. We find that NetMODE can perform up to around times faster than its predecessors when and up to around times faster when (the exact improvement varies considerably). NetMODE also (a) includes a method for generating comparison graphs uniformly at random, (b) can interface with external packages (e.g. R), and (c) can utilize multi-core architectures. NetMODE is available from netmode.sf.net. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> Discovering network motifs could provide a significant insight into systems biology. Interestingly, many biological networks have been found to have a high degree of symmetry (automorphism), which is inherent in biological network topologies. The symmetry due to the large number of basic symmetric subgraphs (BSSs) causes a certain redundant calculation in discovering network motifs. Therefore, we compress all basic symmetric subgraphs before extracting compressed subgraphs and propose an efficient decompression algorithm to decompress all compressed subgraphs without loss of any information. In contrast to previous approaches, the novel Symmetry Compression method for Motif Detection, named as SCMD, eliminates most redundant calculations caused by widespread symmetry of biological networks. We use SCMD to improve three notable exact algorithms and two efficient sampling algorithms. Results of all exact algorithms with SCMD are the same as those of the original algorithms, since SCMD is a lossless method. The sampling results show that the use of SCMD almost does not affect the quality of sampling results. For highly symmetric networks, we find that SCMD used in both exact and sampling algorithms can help get a remarkable speedup. Furthermore, SCMD enables us to find larger motifs in biological networks with notable symmetry than previously possible. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> The ability to find and count subgraphs of a given network is an important non trivial task with multidisciplinary applicability. Discovering network motifs or computing graphlet signatures are two examples of methodologies that at their core rely precisely on the subgraph counting problem. Here we present the g-trie, a data-structure specifically designed for discovering subgraph frequencies. We produce a tree that encapsulates the structure of the entire graph set, taking advantage of common topologies in the same way a prefix tree takes advantage of common prefixes. This avoids redundancy in the representation of the graphs, thus allowing for both memory and computation time savings. We introduce a specialized canonical labeling designed to highlight common substructures and annotate the g-trie with a set of conditional rules that break symmetries, avoiding repetitions in the computation. We introduce a novel algorithm that takes as input a set of small graphs and is able to efficiently find and count them as induced subgraphs of a larger network. We perform an extensive empirical evaluation of our algorithms, focusing on efficiency and scalability on a set of diversified complex networks. Results show that g-tries are able to clearly outperform previously existing algorithms by at least one order of magnitude. 
<s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> Finding motifs in biological, social, technological, and other types of networks has become a widespread method to gain more knowledge about these networks' structure and function. However, this task is very computationally demanding, because it is highly associated with the graph isomorphism which is an NP problem (not known to belong to P or NP-complete subsets yet). Accordingly, this research is endeavoring to decrease the need to call NAUTY isomorphism detection method, which is the most time-consuming step in many existing algorithms. The work provides an extremely fast motif detection algorithm called QuateXelero, which has a Quaternary Tree data structure in the heart. The proposed algorithm is based on the well-known ESU (FANMOD) motif detection algorithm. The results of experiments on some standard model networks approve the overal superiority of the proposed algorithm, namely QuateXelero, compared with two of the fastest existing algorithms, G-Tries and Kavosh. QuateXelero is especially fastest in constructing the central data structure of the algorithm from scratch based on the input network. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> Determining the frequency of small subgraphs is an important computational task lying at the core of several graph mining methodologies, such as network motifs discovery or graphlet based measurements. In this paper we try to improve a class of algorithms available for this purpose, namely network-centric algorithms, which are based upon the enumeration of all sets of k connected nodes. Past approaches would essentially delay isomorphism tests until they had a finalized set of k nodes. In this paper we show how isomorphism testing can be done during the actual enumeration. We use a customized g-trie, a tree data structure, in order to encapsulate the topological information of the embedded subgraphs, identifying already known node permutations of the same subgraph type. With this we avoid redundancy and the need of an isomorphism test for each subgraph occurrence. We tested our algorithm, which we called FaSE, on a set of different real complex networks, both directed and undirected, showcasing that we indeed achieve significant speedups of at least one order of magnitude against past algorithms, paving the way for a faster network-centric approach. <s> BIB008 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> Network motifs are small over represented patterns that have been used successfully to characterize complex networks. Current algorithmic approaches focus essentially on pure topology and disregard node and edge nature. However, it is often the case that nodes and edges can also be classified and separated into different classes. This kind of networks can be modeled by colored (or labeled) graphs. Here we present a definition of colored motifs and an algorithm for efficiently discovering them.We use g-tries, a specialized data-structure created for finding sets of subgraphs. G-Tries encapsulate common sub-structure, and with the aid of symmetry breaking conditions and a customized canonization methodology, we are able to efficiently search for several colored patterns at the same time. 
We apply our algorithm to a set of representative complex networks, showing that it can find colored motifs and outperform previous methods. <s> BIB009 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> Network motifs are overly represented as topological patterns that occur more often in a given network than in random networks, and take on some certain functions in practical biological applications. Existing methods of detecting network motifs have focused on computational efficiency. However, detecting network motifs also presents huge challenges in computational and spatial complexity. In this paper, we provide a new approach for mining network motifs. First, all sub-graphs can be enumerated by adding edges and nodes progressively, using the backtracking method based on the associated matrix. Then, the associated matrix is standardized and the isomorphism sub-graphs are marked uniquely in combination with symmetric ternary, which can simulate the elements (-1,0,1) in the associated matrix. Taking advantage of the combination of the associated matrix and the backtracking method, our method reduces the complexity of enumerating sub-graphs, providing a more efficient solution for motif mining. From the results obtained, our method has shown higher speed and more extensive applicability than other similar methods. <s> BIB010 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> A network motif is a recurring subnetwork within a network, and it takes on certain functions in practical biological macromolecule applications. Previous algorithms have focused on the computational efficiency of network motif detection, but some problems in storage space and searching time manifested during earlier studies. The considerable computational and spacial complexity also presents a significant challenge. In this paper, we provide a new approach for motif mining based on compressing the searching space. According to the characteristic of the parity nodes, we cut down the searching space and storage space in real graphs and random graphs, thereby reducing the computational cost of verifying the isomorphism of sub-graphs. We obtain a new network with smaller size after removing parity nodes and the “repeated edges” connected with the parity nodes. Random graph structure and sub-graph searching are based on the Back Tracking Method; all sub-graphs can be searched for by adding edges progressively. Experimental results show that this algorithm has higher speed and better stability than its alternatives. <s> BIB011 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> Because of the complexity of biological networks, motif mining is a key problem in data analysis for such networks. Researchers have investigated many algorithms aimed at improving the efficiency of motif mining. Here we propose a new algorithm for motif mining that is based on dynamic programming and backtracking. In our method, firstly, we enumerate all of the 3-vertex sub graphs by the method ESU, and then we enumerate sub graphs of other sizes using dynamic programming for reducing the search time. In addition, we have also improved the backtracking application in searching sub graphs, and the improved backtracking can help us search sub graphs more roundly. 
Comparisons with other algorithms demonstrate that our algorithm yields faster and more accurate detection of motifs. <s> BIB012 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> With recent advances in high-throughput cell biology, the amount of cellular biological data has grown drastically. Such data is often modeled as graphs (also called networks) and studying them can lead to new insights into molecule-level organization. A possible way to understand their structure is by analyzing the smaller components that constitute them, namely network motifs and graphlets. Graphlets are particularly well suited to compare networks and to assess their level of similarity due to the rich topological information that they offer but are almost always used as small undirected graphs of up to five nodes, thus limiting their applicability in directed networks. However, a large set of interesting biological networks such as metabolic, cell signaling, or transcriptional regulatory networks are intrinsically directional, and using metrics that ignore edge direction may gravely hinder information extraction. Our main purpose in this work is to extend the applicability of graphlets to directed networks by considering their edge direction, thus providing a powerful basis for the analysis of directed biological networks. We tested our approach on two network sets, one composed of synthetic graphs and another of real directed biological networks, and verified that they were more accurately grouped using directed graphlets than undirected graphlets. It is also evident that directed graphlets offer substantially more topological information than simple graph metrics such as degree distribution or reciprocity. However, enumerating graphlets in large networks is a computationally demanding task. Our implementation addresses this concern by using a state-of-the-art data structure, the g-trie, which is able to greatly reduce the necessary computation. We compared our tool to other state-of-the art methods and verified that it is the fastest general tool for graphlet counting. <s> BIB013 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Encapsulation methods. <s> In this paper we propose PATCOMP—a PARTICIA-based novel approach for Network motif search. The algorithm takes advantage of compression and speed of PATRICIA data structure to store the collection of subgraphs in memory and search for classification and census of network. Paper also describes the structure of PATRICIA nodes and how data structure is developed for using it for counting of subgraphs. The main benefit of this approach is significant reduction in memory space requirement particularly for larger network motifs with acceptable time performance. To assess the effectiveness of PATRICIA-based approach we compared the performance (memory and time) of this proposed approach with QuateXelero. The experiments with different networks like ecoli and yeast validate the advantage of PATRICIA-based approach in terms of reduction in memory usage by 4.4–20% for E. coli and 5.8–23.2% for yeast networks. <s> BIB014
The ideas applied in Grochow introduced a way of escaping the classic setup of enumerating and then categorizing subgraphs, albeit focusing on a single subgraph. The next step would be to extend this idea into a more general algorithm appropriate for full subgraph counting. This was first done by Ribeiro and Silva BIB003 using a new data-structure they called the g-trie, for graph trie. The g-trie is a prefix tree for graphs: each node represents a different graph, and the graph of a parent node is a common substructure shared with the graphs of its child nodes, which extend it by one additional vertex. The root represents the one-vertex graph and has one child, a node representing the edge graph, which in turn has two children representing the triangle graph and the 3-path, and so on. This tree can be augmented by giving each node symmetry breaking conditions similar to those from Grochow. The authors show how to efficiently build this data-structure and augment it with the symmetry breaking conditions for any set of graphs. They also describe a subgraph counting algorithm based on using this data-structure along with an enumeration technique similar to that of Grochow. However, since this data-structure encapsulates the information of multiple graphs in a hierarchical order, it achieves a much faster full subgraph counting algorithm. The usage of this data-structure has been significantly extended since its original publication, such as a version for colored networks BIB009 or an orbit-aware version BIB013 . A more detailed discussion of the data-structure and the subgraph counting algorithm is presented in BIB006 . Also, even though the subgraph counting algorithm was not named, we will refer to it as the Gtrie algorithm from here on. Gtrie encapsulates common topological information of the subgraphs being counted, but there are other approaches, such as that of Li et al. BIB004 , who developed Netmode. It builds on Kavosh by using its enumeration algorithm, but instead of using nauty to perform the categorization step, it makes use of a cache to store isomorphism information and is thus able to perform it in constant time. This is very similar to what Itzhack does; however, Li et al. suggested an improvement that allows Netmode to scale to k = 6 without using too much memory. This improvement is based on the reconstruction conjecture BIB001 , which states that two graphs with 3 or more vertices are isomorphic if their deck (the set of isomorphism classes of all vertex-deleted subgraphs of a graph) is the same. This is known to be false for directed graphs with k = 6, but there are very few counter-examples, which can be stored directly as in the k ≤ 5 case; thus, Netmode applies the conjecture to all the remaining cases by building their deck, hashing its value and storing its count in a table. Wang et al. BIB005 proposed a new method called SCMD that counts subgraphs in compressed networks. SCMD applies a symmetry compression method that finds sets of vertices whose induced subgraph is isomorphic to a clique or an empty graph, with the additional property that any other vertex that connects to a vertex in the set is connected to all other vertices in the set. These sets of vertices form a partition of the graph that is obtained using a method published in BIB002 , which is based on looking at vertices in the same orbit.
This is a versatile method that can use algorithms like ESU or Kavosh to enumerate all subgraphs of sizes from 1 to k in the compressed network. Finally, SCMD "decompresses" the results by looking at all the different enumerated subgraphs and calculating all the combinations that can form a decompressed subgraph. For example, for k = 3, if a compressed 2-subgraph is found containing two vertices, one compressed vertex representing a clique of 5 uncompressed vertices and another compressed vertex representing a single vertex from the uncompressed graph, it results in (5 choose 2) + (5 choose 3) triangles from the uncompressed graph: (5 choose 2) triangles obtained by taking two vertices from the clique vertex and one from the other vertex, which are all connected and thus form a triangle, plus (5 choose 3) triangles obtained by taking three vertices from the clique vertex. The authors argue that most complex networks exhibit high symmetry and thus benefit from the application of this technique. Even though their work only includes undirected graphs, the authors affirm that it is easy to extend the same concepts to directed networks. Xu et al. described another algorithm that enumerates subgraphs on compressed networks, called ENSA BIB010 BIB011 . Their method is based on a heuristic graph isomorphism algorithm, and they also discuss an optimization based on identifying vertices with unique degrees. Following the ideas first applied in Gtrie, Khakabimamaghani et al. BIB007 proposed a new algorithm they called Quatexelero. Quatexelero is built upon any incremental enumeration algorithm, like ESU, and it implements a data structure similar to a quaternary tree. Each node in the tree represents a graph, which can be reconstructed by looking at the nodes on the path from it to the root of the tree. Additionally, all graphs represented by a single node belong to the same isomorphism class. To fill the tree, a pointer to the root of the tree is initially set. Whenever a new vertex is added to the partial enumeration map, Quatexelero looks at the existing edges between the newly added node and the previously existing nodes in the mapping and stores this information in the quaternary tree. For each vertex in the mapping, depending on whether there is no edge, an in-edge, an out-edge or a bidirectional edge between it and the newly added vertex, the pointer is assigned to one of its four children, creating it if it does not yet exist. In parallel with the publication of Quatexelero, Paredes and Ribeiro BIB008 proposed FaSE. The idea of FaSE is similar to the one from Quatexelero; however, instead of using a quaternary tree, it uses a data-structure similar to the g-trie, albeit without the symmetry breaking condition augmentation. This data-structure has the same property as the quaternary tree: every node represents a graph, and each node is built using the adjacency information of a newly added vertex in relation to the vertices present in its parent. Other works that extend these ideas have been proposed subsequently. For example, Jing and Cheng propose Hash-ESU, an algorithm based on the same idea as Quatexelero and FaSE, but which hashes the adjacency information instead of storing it in a tree. Another example is the work by Song et al. BIB012 . They describe a method that starts by enumerating all k = 3 subgraphs using ESU and then uses dynamic programming to grow connected sets and perform the counting. Their algorithm was not named, so we will refer to it as the Song algorithm from here on.
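As a quick sanity check of the SCMD decompression arithmetic in the clique-of-five example above, the combination counts can be reproduced in a few lines of Python; this is only the arithmetic for that single case, under our reading of the example, and not SCMD's general decompression routine.

```python
from math import comb

# Compressed 2-subgraph from the example above: one vertex standing for a
# 5-clique, connected to a vertex standing for a single ordinary vertex.
clique_size = 5

# Triangles using two clique vertices plus the ordinary vertex (C(1,1) = 1
# way to pick it), plus triangles taken entirely from inside the clique.
triangles = comb(clique_size, 2) * 1 + comb(clique_size, 3)
print(triangles)  # (5 choose 2) + (5 choose 3) = 10 + 10 = 20
```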
Both Quatexelero and FaSE have potential memory issues, since there may be several nodes representing the same graph, which is not a problem for Gtrie because it stores only one copy of each possible graph. To address this, Himamshu and Jain BIB014 proposed Patcomp, which compresses the quaternary tree using a technique similar to a radix tree; however, this comes at a cost, as Patcomp is 2 to 3 times slower and only saves around 10% of the memory usage.
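To make the idea shared by Quatexelero and FaSE more concrete, and to see where the duplicated nodes that Patcomp tries to compress come from, here is a simplified Python sketch of an enumeration-time classification tree for undirected graphs; the class name and encoding are our own and do not mirror the actual quaternary-tree or g-trie implementations. Each vertex added during enumeration contributes the bit-pattern of its connections to the previously added vertices, and the resulting sequence identifies a leaf, so an isomorphism test is needed only once per leaf rather than once per occurrence.

```python
from collections import defaultdict

class PatternTree:
    """Leaf counter keyed by the sequence of adjacency bit-patterns observed
    while a subgraph occurrence is enumerated (a simplified, undirected take
    on the FaSE/Quatexelero idea: one bit per previously added vertex)."""

    def __init__(self):
        self.counts = defaultdict(int)   # path (tuple of bitmasks) -> occurrences

    def add_occurrence(self, adj, vertices):
        """Register one occurrence, given the order in which vertices were added."""
        path = []
        for i, v in enumerate(vertices):
            # Bit j is set if v connects to the j-th previously added vertex.
            mask = sum(1 << j for j in range(i) if vertices[j] in adj[v])
            path.append(mask)
        self.counts[tuple(path)] += 1

if __name__ == "__main__":
    # Two isomorphic occurrences (both 3-vertex paths) in one graph:
    # 0-1-2 is reached starting from an endpoint, {3,4,5} from its center 3.
    adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: {4, 5}, 4: {3}, 5: {3}}
    tree = PatternTree()
    tree.add_occurrence(adj, [0, 1, 2])   # endpoint-first -> path (0, 1, 2)
    tree.add_occurrence(adj, [3, 4, 5])   # center-first   -> path (0, 1, 1)
    # The same isomorphism class lands in two distinct leaves: one isomorphism
    # test per leaf suffices, but isomorphic leaves may be duplicated, which is
    # the memory redundancy discussed above.
    print(dict(tree.counts))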
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.2.1 <s> Motivation: Small-induced subgraphs called graphlets are emerging as a possible tool for exploration of global and local structure of networks and for analysis of roles of individual nodes. One of the obstacles to their wider use is the computational complexity of algorithms for their discovery and counting. Results: We propose a new combinatorial method for counting graphlets and orbit signatures of network nodes. The algorithm builds a system of equations that connect counts of orbits from graphlets with up to five nodes, which allows to compute all orbit counts by enumerating just a single one. This reduces its practical time complexity in sparse graphs by an order of magnitude as compared with the existing pure enumeration-based algorithms. Availability and implementation: Source code is available freely at http://www.biolab.si/supp/orca/orca.html. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.2.1 <s> The prevalence of select substructures is an indicator of network effects in applications such as social network analysis and systems biology. Moreover, subgraph statistics are pervasive in stochastic network models, and they need to be assessed repeatedly in MCMC sampling and estimation algorithms. We present a new approach to count all induced and non-induced 4-node subgraphs the quad census on a per-node and per-edge basis, complete with a separation into their non-automorphic roles in these subgraphs. It is the first approach to do so in a unified manner, and is based on only a clique-listing subroutine. Computational experiments indicate that, despite its simplicity, the approach outperforms previous, less general approaches. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.2.1 <s> Graphlet analysis is an approach to network analysis that is particularly popular in bioinformatics. We show how to set up a system of linear equations that relate the orbit counts and can be used in an algorithm that is significantly faster than the existing approaches based on direct enumeration of graphlets. The approach presented in this paper presents a generalization of the currently fastest method for counting 5-node graphlets in bioinformatics. The algorithm requires existence of a vertex with certain properties; we show that such vertex exists for graphlets of arbitrary size, except for complete graphs and a cycle with four nodes, which are treated separately. Empirical analysis of running time agrees with the theoretical results. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.2.1 <s> Motivation: Graphlets are a useful tool to determine a graph's small-scale structure. Finding them is exponentially hard with respect to the number of nodes in each graphlet. Therefore, equations can be used to reduce the size of graphlets that need to be enumerated to calculate the number of each graphlet touching each node. Hocevar and Demsar first introduced such equations, which were derived manually, and an algorithm that uses them, but only graphlets with four or five nodes can be counted this way. Results: We present a new algorithm for orbit counting, which is applicable to graphlets of any order. 
This algorithm uses a tree structure to simplify finding orbits, and stabilizers and symmetry-breaking constraints to ensure correctness. This method gives a significant speedup compared to a brute force counting method and can count orbits beyond the capacity of other available tools. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.2.1 <s> Graphlets are useful for bioinformatics network analysis. Based on the structure of Hocevar and Demsar’s ORCA algorithm, we have created an orbit counting algorithm, named Jesse. This algorithm, like ORCA, uses equations to count the orbits, but unlike ORCA it can count graphlets of any order. To do so, it generates the required internal structures and equations automatically. Many more redundant equations are generated, however, and Jesse’s running time is highly dependent on which of these equations are used. Therefore, this paper aims to investigate which equations are most efficient, and which factors have an effect on this efficiency. With appropriate equation selection, Jesse’s running time may be reduced by a factor of up to 2 in the best case, compared to using randomly selected equations. Which equations are most efficient depends on the density of the graph, but barely on the graph type. At low graph density, equations with terms in their right-hand side with few arguments are more efficient, whereas at high density, equations with terms with many arguments in the right-hand side are most efficient. At a density between 0.6 and 0.7, both types of equations are about equally efficient. Our Jesse algorithm became up to a factor 2 more efficient, by automatically selecting the best equations based on graph density. It was adapted into a Cytoscape App that is freely available from the Cytoscape App Store to ease application by bioinformaticians. <s> BIB005
Matrix based methods. The first known method to apply a practical analytic approach based on matrix multiplication to subgraph counting was ORCA, a work by Hočevar and Demšar BIB001 , which is based on counting orbits rather than subgraphs directly. Their original work targeted orbits in subgraphs of up to 5 vertices and, because of that, they count induced subgraphs specifically, while most analytic approaches count non-induced occurrences. ORCA works by setting up, for each vertex of the input graph, a system of linear equations that relates different orbit frequencies, which are the system's variables. This system of linear equations encodes information about the input graph. By construction, the matrix has rank equal to the number of orbits minus 1, so to solve the system one only needs to compute the value of one of the orbit frequencies directly and then apply any standard linear algebra method. Usually, the orbit pertaining to the clique is chosen, since there are efficient algorithms to count this orbit and, for sparse enough networks, it is usually the one with the fewest occurrences, making it less expensive to count. Later, the authors of ORCA extended their work by suggesting a way of producing equations for arbitrarily sized subgraphs BIB003 , although their available practical implementation is still limited to size 5 [64] . Another possible extension of ORCA was proposed in BIB004 with the Jesse algorithm, which was further complemented with a strategy for optimizing the computation by carefully selecting less expensive equations BIB005 . Similar to ORCA, but using a different strategy, Ortmann and Brandes BIB002 proposed a new method, which they further improved and better described in . They also target orbits, but for subgraphs of size up to 4. Their approach is based on looking into non-induced subgraphs, using them to build linear equations that are less expensive to compute. Additionally, they also apply an improved clique counting algorithm. Ortmann and Brandes BIB002 did not name their algorithm, so we will refer to it as the Ortmann algorithm from here on.
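The flavour of these equation-based approaches can be illustrated with a deliberately tiny example that is not ORCA's actual system of equations: for 3-vertex graphlets, only the triangle count is obtained by direct enumeration, while the induced path count is recovered from a linear relation involving wedges, which depend only on the degree sequence. The helper name and graph representation below are our own assumptions.

```python
def count_size3_graphlets(adj):
    """Count induced 3-vertex connected graphlets (triangles and paths).

    Only triangles are counted by explicit enumeration; the induced path count
    is then derived from an equation relating it to wedges and triangles:
        wedges = sum over v of C(deg(v), 2)
        induced paths = wedges - 3 * triangles
    (each triangle contains three wedges).
    """
    # Direct count of one "easy" pattern: triangles via neighbor intersection.
    triangles = 0
    for u, v in {(min(a, b), max(a, b)) for a in adj for b in adj[a]}:
        triangles += len(adj[u] & adj[v])
    triangles //= 3                      # each triangle is seen once per edge

    # Wedges from degrees alone, then solve for the induced path count.
    wedges = sum(len(adj[v]) * (len(adj[v]) - 1) // 2 for v in adj)
    paths = wedges - 3 * triangles
    return triangles, paths

if __name__ == "__main__":
    # Square on vertices 0..3 plus the diagonal 0-2: 2 triangles, 2 induced paths.
    adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    print(count_size3_graphlets(adj))  # -> (2, 2)
```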
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> World Wide Web, the Internet, coupled biological and chemical systems, neural networks, and social interacting species, are only a few examples of systems composed by a large number of highly interconnected dynamical units. These networks contain characteristic patterns, termed network motifs, which occur far more often than in randomized networks with the same degree sequence. Several algorithms have been suggested for counting or detecting the number of induced or non-induced occurrences of network motifs in the form of trees and bounded treewidth subgraphs of size O(logn), and of size at most 7 for some motifs. ::: ::: In addition, counting the number of motifs a node is part of was recently suggested as a method to classify nodes in the network. The promise is that the distribution of motifs a node participate in is an indication of its function in the network. Therefore, counting the number of network motifs a node is part of provides a major challenge. However, no such practical algorithm exists. ::: ::: We present several algorithms with time complexity $O\left(e^{2k}k\cdot n \cdot |E|\cdot \right.$ $\left.\log\frac{1}{\delta}/{\epsilon^2}\right)$ that, for the first time, approximate for every vertex the number of non-induced occurrences of the motif the vertex is part of, for k-length cycles, k-length cycles with a chord, and (k − 1)-length paths, where k = O(logn), and for all motifs of size of at most four. In addition, we show algorithms that approximate the total number of non-induced occurrences of these network motifs, when no efficient algorithm exists. Some of our algorithms use the color coding technique. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Counting network motifs has an important role in studying a wide range of complex networks. However, when the network size is large, as in the case of Internet Topology and WWW graphs counting the number of motifs becomes prohibitive. Devising efficient motif counting algorithms thus becomes an important goal. In this paper, we present efficient counting algorithms for 4-nodemotifs. We show how to efficiently count the total number of each type of motif, and the number of motifs adjacent to a node. We further present a new algorithm for node position-aware motif counting, namely partitioning the motif count by the node position in the motif. Since our algorithm is based on motifs, which are non-induced we also show how to calculate the count of induced motifs given the non-induced motif count. Finally, we report on initial implementation performance result using evaluation on a large-scale graph. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Counting network graphlets (and motifs) was shown to have an important role in studying a wide range of complex networks. However, when the network size is large, as in the case of the Internet topology and WWW graphs, counting the number of graphlets becomes prohibitive for graphlets of size 4 and above. Devising efficient graphlet counting algorithms thus becomes an important goal. In this paper, we present efficient counting algorithms for 4-node graphlets. 
We show how to efficiently count the total number of each type of graphlet, and the number of graphlets adjacent to a node. We further present a new algorithm for node position-aware graphlet counting, namely partitioning the graphlet count by the node position in the graphlet. Since our algorithms are based on non-induced graphlet count, we also show how to calculate the count of induced graphlets given the non-induced count. We implemented our algorithms on a set of both synthetic and real-world graphs. Our evaluation shows that the algorithms are scalable and perform up to 30 times faster than the state-of-the-art. We then apply the algorithms on the Internet Autonomous Systems (AS) graph, and show how fast graphlet counting can be leveraged for efficient and scalable classification of the ASes that comprise the Internet. Finally, we present RAGE, a tool for rapid graphlet enumeration available online. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Network motif algorithms have been a topic of research mainly after the 2002-seminal paper from Milo \emph{et al}, that provided motifs as a way to uncover the basic building blocks of most networks. This article proposes new algorithms to exactly count isomorphic pattern motifs of size~3 and~4 in directed graphs. The algorithms are accelerated by combinatorial techniques. Let $G(V, E)$ be a directed graph with $m=|E|$. We describe an $O({m\sqrt{m}})$ time complexity algorithm to count isomorphic patterns of size~3. To counting isomorphic patterns of size~4, we propose an $O(m^2)$ algorithm. The new algorithms were implemented and compared with Fanmod motif detection tool. The experiments show that our algorithms are expressively faster than Fanmod. We also let our tool to detect motifs, the {\sc acc-MOTIF}, available in the Internet. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Network motif algorithms have been a topic of research mainly after the 2002-seminal paper from Milo et al. [1], which provided motifs as a way to uncover the basic building blocks of most networks. Motifs have been mainly applied in Bioinformatics, regarding gene regulation networks. Motif detection is based on induced subgraph counting. This paper proposes an algorithm to count subgraphs of size k + 2 based on the set of induced subgraphs of size k. The general technique was applied to detect 3, 4 and 5-sized motifs in directed graphs. Such algorithms have time complexity O(a(G)m), O(m2) and O(nm2), respectively, where a(G) is the arboricity of G(V,E). The computational experiments in public data sets show that the proposed technique was one order of magnitude faster than Kavosh and FANMOD. When compared to NetMODE, acc-Motif had a slightly improved performance. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. 
However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Counting the frequency of small subgraphs is a fundamental technique in network analysis across various domains, most notably in bioinformatics and social networks. The special case of triangle counting has received much attention. Getting results for 4-vertex or 5-vertex patterns is highly challenging, and there are few practical results known that can scale to massive sizes. 
::: ::: We introduce an algorithmic framework that can be adopted to count any small pattern in a graph and apply this framework to compute exact counts for all 5-vertex subgraphs. Our framework is built on cutting a pattern into smaller ones, and using counts of smaller patterns to get larger counts. Furthermore, we exploit degree orientations of the graph to reduce runtimes even further. These methods avoid the combinatorial explosion that typical subgraph counting algorithms face. We prove that it suffices to enumerate only four specific subgraphs (three of them have less than 5 vertices) to exactly count all 5-vertex patterns. ::: ::: We perform extensive empirical experiments on a variety of real-world graphs. We are able to compute counts of graphs with tens of millions of edges in minutes on a commodity machine. To the best of our knowledge, this is the first practical algorithm for 5-vertex pattern counting that runs at this scale. A stepping stone to our main algorithm is a fast method for counting all 4-vertex patterns. This algorithm is typically ten times faster than the state of the art 4-vertex counters. <s> BIB008
Before ORCA was proposed, the first practical method to use an analytic approach to subgraph counting was Rage, by Marcus and Shavitt BIB002 BIB003 . Their method is based on BIB001 , which employs similar techniques but with a more theoretical focus. Rage targets non-induced subgraphs and orbits of size 3 and 4. It does so by running a different algorithm for each of the 8 existing subgraphs. Each algorithm is based on merging the neighborhoods of pairs of vertices to ensure that a given quartet of vertices has the desired edges to form a certain subgraph. acc-Motif, which was proposed by Meira et al. BIB004 and then further improved in BIB005 , was also one of the first methods to employ an analytic strategy, and it stands out as the only known analytic method that also works for directed subgraphs. acc-Motif also targets non-induced subgraphs, and its latest version supports subgraphs of up to size 6. Another method that followed this trend of decomposition methods is PGD, proposed by Ahmed et al. BIB006 BIB007 . This method builds on the classic triangle counting algorithm to count several primitives that are then used to obtain the frequency of each subgraph and orbit. It is currently one of the fastest methods; however, it can only count undirected subgraphs of size 3 and 4. Additionally, like most analytic methods, it is highly parallelizable. Due to its versatile nature, PGD has been expanded to other frequency metrics, and it stands out as one of the few available efficient methods that can count motifs incident to a vertex or edge of the graph, in what is called a "local subgraph count". More recently, ESCAPE was proposed by Pinar et al. BIB008 . This method is based on a divide-and-conquer approach that identifies substructures of each subgraph being counted in order to partition it into smaller patterns. It is a very general method, but with the correct choices of decomposition it is possible to derive a set of formulas to compute the frequency of each subgraph. The original paper only describes the resulting formulas for subgraphs up to size 5; however, larger sizes can be obtained with some effort. As of this writing, it is possibly the most efficient algorithm for counting undirected subgraphs and orbits up to size 5.
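To give a flavour of the combinatorial shortcuts used by decomposition methods (in the spirit of, but not identical to, the actual PGD or ESCAPE formulas), the sketch below derives two non-induced 4-vertex counts from degrees and per-edge triangle counts alone, without ever enumerating a 4-vertex set; the function name and graph encoding are illustrative assumptions.

```python
def four_vertex_counts(adj):
    """Derive two non-induced 4-vertex pattern counts without enumerating
    4-vertex sets, in the spirit of decomposition methods (not the actual
    PGD or ESCAPE formulas).

    adj: dict vertex -> set of neighbors (undirected simple graph).
    Returns (triangles, non-induced paths on 4 vertices, non-induced stars K1,3).
    """
    edges = {(min(u, v), max(u, v)) for u in adj for v in adj[u]}

    # Per-edge triangle counts, a primitive also used by PGD-like methods.
    tri_per_edge = {e: len(adj[e[0]] & adj[e[1]]) for e in edges}
    triangles = sum(tri_per_edge.values()) // 3

    deg = {v: len(adj[v]) for v in adj}

    # Each edge (u, v) is the middle edge of (deg(u)-1)*(deg(v)-1) length-3
    # walks; removing the degenerate cases (3 per triangle) leaves exactly the
    # non-induced paths on 4 vertices.
    paths3 = sum((deg[u] - 1) * (deg[v] - 1) for u, v in edges) - 3 * triangles

    # Non-induced stars: choose 3 neighbors of a center vertex.
    stars = sum(d * (d - 1) * (d - 2) // 6 for d in deg.values())

    return triangles, paths3, stars

if __name__ == "__main__":
    # Square on vertices 0..3 plus the diagonal 0-2.
    adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    print(four_vertex_counts(adj))  # -> (2, 6, 2)
```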
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> We give two algorithms for listing all simplicial vertices of a graph. The first of these algorithms takes O(nα) time, where n is the number of vertices in the graph and O(nα) is the time needed to perform a fast matrix multiplication. The second algorithm can be implemented to run in \(O(e^{\tfrac{{2\alpha }}{{\alpha + 1}}} ) = O(e^{1.41} )\), where e is the number of edges in the graph. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> Network motifs are small connected sub-graphs occurring at significantly higher frequencies in a given graph compared with random graphs of similar degree distribution. Recently, network motifs have attracted attention as a tool to study networks microscopic details. The commonly used algorithm for counting small-scale motifs is the one developed by Milo et al. This algorithm is extremely costly in CPU time and actually cannot work on large networks, consisting of more than 100,000 edges on current CPUs. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> World Wide Web, the Internet, coupled biological and chemical systems, neural networks, and social interacting species, are only a few examples of systems composed by a large number of highly interconnected dynamical units. These networks contain characteristic patterns, termed network motifs, which occur far more often than in randomized networks with the same degree sequence. Several algorithms have been suggested for counting or detecting the number of induced or non-induced occurrences of network motifs in the form of trees and bounded treewidth subgraphs of size O(logn), and of size at most 7 for some motifs. ::: ::: In addition, counting the number of motifs a node is part of was recently suggested as a method to classify nodes in the network. The promise is that the distribution of motifs a node participate in is an indication of its function in the network. Therefore, counting the number of network motifs a node is part of provides a major challenge. However, no such practical algorithm exists. ::: ::: We present several algorithms with time complexity $O\left(e^{2k}k\cdot n \cdot |E|\cdot \right.$ $\left.\log\frac{1}{\delta}/{\epsilon^2}\right)$ that, for the first time, approximate for every vertex the number of non-induced occurrences of the motif the vertex is part of, for k-length cycles, k-length cycles with a chord, and (k − 1)-length paths, where k = O(logn), and for all motifs of size of at most four. In addition, we show algorithms that approximate the total number of non-induced occurrences of these network motifs, when no efficient algorithm exists. Some of our algorithms use the color coding technique. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> For a pattern graph H on k nodes, we consider the problems of finding and counting the number of (not necessarily induced) copies of H in a given large graph G on n nodes, as well as finding minimum weight copies in both node-weighted and edge-weighted graphs. Our results include: The number of copies of an H with an independent set of size s can be computed exactly in O*(2s nk-s+3) time. 
A minimum weight copy of such an H (with arbitrary real weights on nodes and edges) can be found in O(4s+o(s) nk-s+3) time. (The O* notation omits (k) factors.) These algorithms rely on fast algorithms for computing the permanent of a k x n matrix, over rings and semirings. The number of copies of any H having minimum (or maximum) node-weight (with arbitrary real weights on nodes) can be found in O(nω k/3 + n2k/3+o(1)) time, where ω < 2.4 is the matrix multiplication exponent and k is divisible by 3. Similar results hold for other values of k. Also, the number of copies having exactly a prescribed weight can be found within this time. These algorithms extend the technique of Czumaj and Lingas (SODA 2007) and give a new (algorithmic) application of multiparty communication complexity. Finding an edge-weighted triangle of weight exactly 0 in general graphs requires Ω(n2.5-ε) time for all ε > 0, unless the 3SUM problem on N numbers can be solved in O(N2 - ε) time. This suggests that the edge-weighted problem is much harder than its node-weighted version. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> In this paper we present a modification of a technique by Chiba and Nishizeki [Chiba and Nishizeki: Arboricity and Subgraph Listing Algorithms, SIAM J. Comput. 14(1), pp. 210--223 (1985)]. Based on it, we design a data structure suitable for dynamic graph algorithms. We employ the data structure to formulate new algorithms for several problems, including counting subgraphs of four vertices, recognition of diamond-free graphs, cop-win graphs and strongly chordal graphs, among others. We improve the time complexity for graphs with low arboricity or h-index. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> We present a general technique for detecting and counting small subgraphs. It consists of forming special linear combinations of the numbers of occurrences of different induced subgraphs of fixed size in a graph. These combinations can be efficiently computed by rectangular matrix multiplication. Our two main results utilizing the technique are as follows. Let $H$ be a fixed graph with $k$ vertices and an independent set of size $s.$ 1. Detecting if an $n$-vertex graph contains a (not necessarily induced) subgraph isomorphic to $H$ can be done in time $O(n^{\omega(\lceil (k-s)/2 \rceil, 1, \lfloor (k-s)/2 \rfloor )})$, where $\omega (p,q,r)$ is the exponent of fast arithmetic matrix multiplication of an $n^p\times n^q$ matrix by an $n^q\times n^r$ matrix. 2. When $s=2,$ counting the number of (not necessarily induced) subgraphs isomorphic to $H$ can be done in the same time, i.e., in time $O(n^{\omega(\lceil (k-2)/2 \rceil, 1, \lfloor (k-2)/2 \rfloor )}).$ It follows in particular that we can count the nu... <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> Graphs are extremely versatile and ubiquitous mathematical structures with potential to model a wide range of domains. For this reason, graph problems have been of interest since the early days of computer science. Some of these problems consider substructures of a graph that have certain properties. These substructures of interest, generally called patterns, are often meaningful in the domain being modeled. 
Classic examples of patterns include spanning trees, cycles and subgraphs. This thesis focuses on the topic of explicitly listing all the patterns existing in an input graph. One of the defining features of this problem is that the number of patterns is frequently exponential on the size of the input graph. Thus, the time complexity of listing algorithms is parameterized by the size of the output. The main contribution of this work is the presentation of optimal algorithms for four different problems of listing patterns in graphs, namely the listing of k-subtrees, k-subgraphs, st-paths and cycles. The algorithms presented are framed within the same generic approach, based in a recursive partition of the search space that divides the problem into subproblems. The key to an efficient implementation of this approach is to avoid recursing into subproblems that do not list any patterns. With this goal in sight, a dynamic data structure, called the certificate, is introduced and maintained throughout the recursion. Moreover, properties of the recursion tree and lower bounds on the number of patterns are used to amortize the cost of the algorithm on the size of the output. <s> BIB007
Even though the focus of this work is on the proposed practical algorithms, it is important to note that much of the existing work drew inspiration from more theoretically oriented results. It is therefore relevant to briefly summarize some of the achievements in this area, with a special interest in those that directly influenced the algorithms discussed in this section. The first interest in subgraph counting stemmed from the world of enumeration algorithms. The book "Enumeration in Graphs" surveyed methods to enumerate various structures in a graph, such as cycles, trees or cliques. Even though these are specific subpatterns, they often represent the fundamental computation needed to enumerate any subgraph. These ideas were translated into works that count subgraphs by efficiently enumerating such simpler substructures BIB002 BIB001 . Approximation schemes can also be developed with this in mind, estimating the frequency of subgraph families like cycles or paths and then generalizing the results to all size-4 subgraphs BIB003 . Another example of an initially purely theoretical technique is the work by Kowaluk et al. BIB006 , which was one of the inspirations for the multitude of matrix-based analytic algorithms for counting subgraphs. In fact, the most efficient algorithms rest on several theoretical foundations that allow a tighter analysis of their runtime. Due to this interplay, it is worth mentioning a few more recent papers on subgraph counting and enumeration. There is an interest in finding efficient algorithms that are parameterized by, or sensitive to, certain properties of the graph, such as independent sets BIB004 or its maximum degree . Another current interest is counting and enumerating subgraphs in a dynamic or online environment BIB005 . Finally, another active theoretical topic is finding optimal algorithms for enumeration, as in BIB007 , as well as proving lower bounds on their time complexity, as Björklund et al. do for triangle listing.
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Randomised Enumeration <s> Motifs in a network are small connected subnetworks that occur in significantly higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Kashtan et al. [Bioinformatics, 2004] proposed a sampling algorithm for efficiently performing the computationally challenging task of detecting network motifs. However, among other drawbacks, this algorithm suffers from sampling bias and is only efficient when the motifs are small (3 or 4 nodes). Based on a detailed analysis of the previous algorithm, we present a new algorithm for network motif detection which overcomes these drawbacks. Experiments on a testbed of biological networks show our algorithm to be orders of magnitude faster than previous approaches. This allows for the detection of larger motifs in bigger networks than was previously possible, facilitating deeper insight into the field. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Randomised Enumeration <s> Determining the frequency of small subgraphs is an important graph mining primitive. One major class of algorithms for this task is based upon the enumeration of all sets of \(k\) connected nodes. These are known as network-centric algorithms. FAst Subgraph Enumeration (FaSE) is a exact algorithm for subgraph counting that contrasted with its past approaches by performing the isomorphism tests while doing the enumeration, encapsulating the topological information in a g-trie and thus largely reducing the number of required isomorphism tests. Our goal with this paper is to expand this approach by providing an approximate algorithm, which we called Rand-FaSE. It uses an unbiased sampling estimator for the number of subgraphs of each type, allowing an user to trade some accuracy for even faster execution times. We tested our algorithm on a set of representative complex networks, comparing it with the exact alternative, FaSE. We also do an extensive analysis by studying its accuracy and speed gains against previous sampling approaches. With all of this, we believe FaSE and Rand-FaSE pave the way for faster network-centric census algorithms. <s> BIB002
These algorithms are adaptations of older enumeration algorithms that perform exact counting. They share the particularity that they all induce a tree-like search space in the computation, where the leaves are the subgraph occurrences, and they therefore perform the approximation in a similar manner. Each level i of the search tree is assigned a value, $p_i$, which denotes the probability of transitioning from a parent node to a child node at that level. In this scheme, each leaf of the tree is reachable with probability $P = \prod_{i=1}^{k} p_i$, and the frequency of each subgraph is estimated as the number of samples obtained of that subgraph divided by P. Figure 5 illustrates how probabilities are added to the search tree. In this specific example, which could correspond to searching for subgraphs of size 4, the first two levels of the tree have probability 100%, so their successors are all explored. On the other hand, in the last two levels the probability of exploring a node is only 80%; therefore some nodes, marked in grey, are not visited. The first algorithm to implement this strategy was RAND-ESU by Wernicke BIB001 , an approximate version of ESU (described in Section 3.1.1). Recall that ESU maintains two sets, $V_S$ and $V_E$: the set of vertices in the subgraph and the set of candidate vertices for extending the subgraph. When adding a vertex from $V_E$ to $V_S$, this vertex is added with probability $p_{|V_S|}$, where $|V_S|$ is the depth of the search tree. Using the more efficient g-trie data structure, Ribeiro and Silva proposed RAND-GTrie and Paredes and Ribeiro BIB002 proposed RAND-FaSE. Each level of the g-trie is assigned a probability $p_i$; when adding a new vertex to a subgraph of size d, corresponding to depth d in the g-trie, this is done with probability $p_d$.
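The sketch below (illustrative Python, not Wernicke's RAND-ESU implementation; the probability list `p` and the crude `canonical` labelling are assumptions made for the example) shows how the per-level probabilities can be plugged into an ESU-style recursion and how the counts are corrected by dividing by the probability of reaching a leaf.

```python
import random
from collections import defaultdict

# Randomised-enumeration sketch: an ESU-style recursion keeps the current
# vertices V_S and the candidate extensions V_E; each extension at depth d is
# explored only with probability p[d], and every sampled occurrence is weighted
# by the inverse probability of reaching a leaf.

def rand_esu(adj, k, p, seed=None):
    """adj: vertex (int) -> set of neighbours; k: subgraph size; p: list of k
    probabilities, p[0] unused since the first level is always explored."""
    rng = random.Random(seed)
    counts = defaultdict(float)
    leaf_prob = 1.0
    for prob in p[1:k]:
        leaf_prob *= prob                    # probability of reaching any leaf

    def canonical(nodes):
        # crude label (size, #edges): enough to tell apart 3-node subgraphs;
        # a real implementation would use proper canonical labelling (e.g. nauty)
        nodes = sorted(nodes)
        edges = sum(1 for i, a in enumerate(nodes) for b in nodes[i + 1:] if b in adj[a])
        return (len(nodes), edges)

    def extend(v_sub, v_ext, root):
        if len(v_sub) == k:
            counts[canonical(v_sub)] += 1.0 / leaf_prob   # unbiased correction
            return
        v_ext = set(v_ext)
        while v_ext:
            w = v_ext.pop()
            if rng.random() > p[len(v_sub)]:              # skip with probability 1 - p_d
                continue
            # exclusive neighbourhood of w: neighbours beyond the root that are
            # neither in V_S nor adjacent to it
            excl = {u for u in adj[w]
                    if u > root and u not in v_sub
                    and all(u not in adj[x] for x in v_sub)}
            extend(v_sub | {w}, v_ext | excl, root)

    for v in adj:
        extend({v}, {u for u in adj[v] if u > v}, v)
    return counts
```

For instance, `rand_esu(adj, 3, [1.0, 1.0, 0.5])` explores every branch of the first two levels but only half of the final extensions; in expectation, the returned values equal the exact subgraph counts.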
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> Discovering network motifs could provide a significant insight into systems biology. Interestingly, many biological networks have been found to have a high degree of symmetry (automorphism), which is inherent in biological network topologies. The symmetry due to the large number of basic symmetric subgraphs (BSSs) causes a certain redundant calculation in discovering network motifs. Therefore, we compress all basic symmetric subgraphs before extracting compressed subgraphs and propose an efficient decompression algorithm to decompress all compressed subgraphs without loss of any information. In contrast to previous approaches, the novel Symmetry Compression method for Motif Detection, named as SCMD, eliminates most redundant calculations caused by widespread symmetry of biological networks. We use SCMD to improve three notable exact algorithms and two efficient sampling algorithms. Results of all exact algorithms with SCMD are the same as those of the original algorithms, since SCMD is a lossless method. The sampling results show that the use of SCMD almost does not affect the quality of sampling results. For highly symmetric networks, we find that SCMD used in both exact and sampling algorithms can help get a remarkable speedup. Furthermore, SCMD enables us to find larger motifs in biological networks with notable symmetry than previously possible. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> Majority of the existing works on network analysis study properties that are related to the global topology of a network. Examples of such properties include diameter, power-law exponent, and spectra of graph Laplacian. Such works enhance our understanding of real-life networks, or enable us to generate synthetic graphs with real-life graph properties. However, many of the existing problems on networks require the study of local topological structures of a network, which did not get the deserved attention in the existing works. In this work, we use graphlet frequency distribution (GFD) as an analysis tool for understanding the variance of local topological structure in a network; we also show that it can help in comparing, and characterizing real-life networks. The main bottleneck to obtain GFD is the excessive computation cost for obtaining the frequency of each of the graphlets in a large network. To overcome this, we propose a simple, yet powerful algorithm, called Graft , that obtains the approximate graphlet frequency for all graphlets that have up-to five vertices. Comparing to an exact counting algorithm, our algorithm achieves a speedup factor between 10 and 100 for a negligible counting error, which is, on average, less than 5 percent. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> We study the problem of approximating the 3-profile of a large graph. 3-profiles are generalizations of triangle counts that specify the number of times a small graph appears as an induced subgraph of a large graph. Our algorithm uses the novel concept of 3-profile sparsifiers: sparse graphs that can be used to approximate the full 3-profile counts for a given large graph. 
Further, we study the problem of estimating local and ego 3-profiles, two graph quantities that characterize the local neighborhood of each vertex of a graph. Our algorithm is distributed and operates as a vertex program over the GraphLab PowerGraph framework. We introduce the concept of edge pivoting which allows us to collect 2-hop information without maintaining an explicit 2-hop neighborhood list at each vertex. This enables the computation of all the local 3-profiles in parallel with minimal communication. We test our implementation in several experiments scaling up to 640 cores on Amazon EC2. We find that our algorithm can estimate the 3-profile of a graph in approximately the same time as triangle counting. For the harder problem of ego 3-profiles, we introduce an algorithm that can estimate profiles of hundreds of thousands of vertices in parallel, in the timescale of minutes. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> We present a novel distributed algorithm for counting all four-node induced subgraphs in a big graph. These counts, called the $4$-profile, describe a graph's connectivity properties and have found several uses ranging from bioinformatics to spam detection. We also study the more complicated problem of estimating the local $4$-profiles centered at each vertex of the graph. The local $4$-profile embeds every vertex in an $11$-dimensional space that characterizes the local geometry of its neighborhood: vertices that connect different clusters will have different local $4$-profiles compared to those that are only part of one dense cluster. ::: Our algorithm is a local, distributed message-passing scheme on the graph and computes all the local $4$-profiles in parallel. We rely on two novel theoretical contributions: we show that local $4$-profiles can be calculated using compressed two-hop information and also establish novel concentration results that show that graphs can be substantially sparsified and still retain good approximation quality for the global $4$-profile. ::: We empirically evaluate our algorithm using a distributed GraphLab implementation that we scaled up to $640$ cores. We show that our algorithm can compute global and local $4$-profiles of graphs with millions of edges in a few minutes, significantly improving upon the previous state of the art. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> Recently exploring locally connected subgraphs (also known as motifs or graphlets) of complex networks attracts a lot of attention. Previous work made the strong assumption that the graph topology of interest is known in advance. In practice, sometimes researchers have to deal with the situation where the graph topology is unknown because it is expensive to collect and store all topological information. Hence, typically what is available to researchers is only a snapshot of the graph, i.e., a subgraph of the graph. Crawling methods such as breadth first sampling can be used to generate the snapshot. However, these methods fail to sample a streaming graph represented as a high speed stream of edges. Therefore, graph mining applications such as network traffic monitoring usually use random edge sampling (i.e., sample each edge with a fixed probability) to collect edges and generate a sampled graph, which we call a “ RESampled graph ”. 
Clearly, a RESampled graph's motif statistics may be quite different from those of the original graph. To resolve this, we propose a framework Minfer, which takes the given RESampled graph and accurately infers the underlying graph's motif statistics. Experiments using large scale datasets show the accuracy and efficiency of our method. <s> BIB005
The general idea of these algorithms is to perform an exact count on a smaller network obtained from the original one (e.g., a sample or a compressed network). From the frequencies of each subgraph in the smaller network, the frequencies in the original network are estimated. Algorithms vary in (i) how the smaller network is obtained and (ii) which estimator they use. The first example of an algorithm in this category is Targeted Node Processing (TNP) by Pržulj et al. . This algorithm is specially tailored for protein-protein interaction networks, which, according to the authors, have a periphery that is sparser than the more central parts of the network. Using this information, it performs an exact count of the subgraphs in the periphery of the network and uses their frequencies to estimate the frequencies in the rest of the network. The authors claim that, due to the uniformity of the aforementioned networks, the distribution of the subgraphs in the fringe is representative of the distribution in the rest of the network. SCMD by Wang et al. BIB001 (already covered in Section 3.1.3) allows the use of any approximate counting method in the compressed graph. There is no guarantee that subgraphs are counted uniformly in the compressed graph, which introduces a bias that needs to be corrected. The authors give an example of this bias when using their method in conjunction with RAND-ESU: if each leaf (subgraph) of depth k in the search tree is reached with probability P and a specific subgraph in the compressed graph is sampled with probability ρ, then, to correct the sampling bias, the relevant k-subgraph is decompressed with probability P/ρ. In GRAFT, Rahman et al. BIB002 provide a strategy for counting undirected graphlets of size up to 5 using edge sampling. The algorithm starts by picking an edge $e_g$ from each of the 29 graphlets and a set S of edges sampled from the graph without replacement. For each edge e ∈ S and for each graphlet g, the algorithm counts the occurrences of g in which e occupies the same position as $e_g$ (e is said to be aligned with $e_g$). These frequencies are summed over all sampled edges and divided by a normalising factor, based on the automorphisms of each graphlet, which becomes the estimate of the frequency of that graphlet in the whole network. Note that if S is equal to E(G), the algorithm outputs the exact answer. Elenberg et al. create estimators for the frequency of size-3 BIB003 and size-4 BIB004 subgraphs. A major difference from previous work is that Elenberg et al. estimate the frequencies of disconnected subgraphs in addition to the usual connected ones. The authors start by removing each edge from the network with a certain probability and computing the exact counts in this "sub-sampled" network. Then, they craft a set of linear equations that relate the exact counts on this smaller network to those of the original network. Using these equations, the frequencies of the subgraphs in the original network are estimated. Wang et al. BIB005 introduce an algorithm that aims to estimate the subgraph concentrations of a network when only a fraction of its edges is known. They call this a "RESampled Graph", obtained from the real network through random edge sampling, a common scenario in applications such as network traffic analysis. A key aspect of this algorithm is the number of non-induced occurrences of one size-k graphlet within another size-k graphlet; an example of this calculation can be found in Table 5.
Using this number and the proportion of edges sampled to form the smaller network, the authors compute the probability that a subgraph in the "RESampled Graph" is isomorphic to another subgraph in the original graph. Then, an exact counting algorithm is applied to the "RESampled Graph" and by composing the results from this algorithm with the aforementioned probability, the subgraph concentrations in the original network are estimated.
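As a concrete, minimal instance of this estimate-and-generalize idea, the sketch below (illustrative Python, not GRAFT's actual code; function and variable names are ours) samples a set of edges, counts the triangles aligned with each sampled edge and rescales; the divisor 3 plays the role of the automorphism-based normalising factor, since every triangle can be aligned with each of its 3 edges.

```python
import random

# Edge-sampling estimator in the spirit of GRAFT, restricted to a single
# graphlet (the triangle): each sampled edge is "aligned" with the triangle and
# the triangles containing it are counted, then the sum is rescaled.

def estimate_triangles(adj, edges, sample_size, seed=None):
    """adj: vertex -> set of neighbours; edges: list of (u, v) pairs."""
    rng = random.Random(seed)
    sample = rng.sample(edges, min(sample_size, len(edges)))  # without replacement
    aligned = 0
    for u, v in sample:
        aligned += len(adj[u] & adj[v])        # triangles containing the edge (u, v)
    # rescale to the full edge set; with sample_size == |E| the result is exact
    return (len(edges) / len(sample)) * aligned / 3

# toy graph: two triangles sharing the edge (1, 2)
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(estimate_triangles(adj, edges, sample_size=5))  # 2.0 (all edges sampled, exact)
```

The full methods apply the same recipe to every graphlet position simultaneously, which is why the automorphism-based normalisation becomes necessary.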
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Path Sampling <s> Counting the frequency of small subgraphs is a fundamental technique in network analysis across various domains, most notably in bioinformatics and social networks. The special case of triangle counting has received much attention. Getting results for 4-vertex patterns is highly challenging, and there are few practical results known that can scale to massive sizes. Indeed, even a highly tuned enumeration code takes more than a day on a graph with millions of edges. Most previous work that runs for truly massive graphs employ clusters and massive parallelization. We provide a sampling algorithm that provably and accurately approximates the frequencies of all 4-vertex pattern subgraphs. Our algorithm is based on a novel technique of 3-path sampling and a special pruning scheme to decrease the variance in estimates. We provide theoretical proofs for the accuracy of our algorithm, and give formal bounds for the error and confidence of our estimates. We perform a detailed empirical study and show that our algorithm provides estimates within 1% relative error for all subpatterns (over a large class of test graphs), while being orders of magnitude faster than enumeration and other sampling based algorithms. Our algorithm takes less than a minute (on a single commodity machine) to process an Orkut social network with 300 million edges. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Path Sampling <s> Counting 3-, 4-, and 5-node graphlets in graphs is important for graph mining applications such as discovering abnormal/evolution patterns in social and biology networks. In addition, it is recently widely used for computing similarities between graphs and graph classification applications such as protein function prediction and malware detection. However, it is challenging to compute these graphlet counts for a large graph or a large set of graphs due to the combinatorial nature of the problem. Despite recent efforts in counting 3-node and 4-node graphlets, little attention has been paid to characterizing 5-node graphlets. In this paper, we develop a computationally efficient sampling method to estimate 5-node graphlet counts. We not only provide a fast sampling method and unbiased estimators of graphlet counts, but also derive simple yet exact formulas for the variances of the estimators which are of great value in practice—the variances can be used to bound the estimates’ errors and determine the smallest necessary sampling budget for a desired accuracy. We conduct experiments on a variety of real-world datasets, and the results show that our method is several orders of magnitude faster than the state-of-the-art methods with the same accuracy. <s> BIB002
This family of algorithms relies on the idea of sampling path subgraphs to estimate the frequencies of the other subgraphs. Path subgraphs are composed of 2 exterior nodes and k − 2 interior nodes (where k is the size of the subgraph) arranged in a single line; the interior nodes all have degree 2, while the exterior nodes have degree 1. Examples of these are the subgraphs $G_1$, $G_3$ and $G_9$ in Figure 6. The main idea of these algorithms, mainly for k ≥ 4, is to relate the number of non-induced occurrences of each size-k subgraph within the other size-k subgraphs. For example, when k = 4, there are 4 non-induced occurrences of $G_3$ in $G_5$ and 12 non-induced occurrences of $G_3$ in $G_8$. Seshadhri et al. introduced the idea of wedge sampling, where wedges denote size-3 path subgraphs. The premise of the algorithm is simple: a number of wedges is selected uniformly at random and each is checked for being closed or not. The fraction of closed wedges sampled is an estimate of the clustering coefficient, from which the number of triangles can be derived. Building on the idea of wedge sampling, Jha et al. BIB001 propose path sampling to estimate the frequency of size-4 graphlets. The main primitive of the algorithm is sampling non-induced occurrences of $G_3$ and determining which graphlet is induced by each sample. The estimator relies both on the number of induced subgraphs counted via the sampling and on the information contained in Table 5. Finally, the authors derive an equation to count the number of stars with 4 nodes ($G_4$) based on the frequencies of all other graphlets, since $G_4$ does not contain any non-induced occurrence of $G_3$. Applying the same concepts to size-5 subgraphs, Wang et al. BIB002 present MOSS-5. For size 5, sampling paths is not enough to estimate the frequencies of all subgraphs, as there are 3 subgraphs that do not contain a non-induced occurrence of a path: $G_{10}$, $G_{11}$ and $G_{14}$. On the other hand, $G_{11}$ does not have a non-induced occurrence in 3 subgraphs either ($G_9$, $G_{10}$ and $G_{15}$). Using this knowledge, the authors create an algorithm divided in two parts: first it samples non-induced size-5 paths ($G_9$), similarly to Jha et al. BIB001 , and then the procedure is repeated but sampling occurrences of $G_{11}$ instead. Combining the results from these two sampling schemes, the authors are able to estimate the frequency of every size-5 subgraph. To the best of our knowledge, MOSS-5 is the algorithm that achieves the best trade-off between accuracy and time when estimating the frequency of 5-subgraphs, as it is able to reach very small errors (of magnitude $10^{-2}$) with a very limited number of samples, even for big networks. However, the ideas behind MOSS-5 are not easily extendable to directed subgraphs or to larger undirected subgraphs, due to the ever-increasing number of dependencies between the numbers of non-induced occurrences, which makes it harder to use the information contained in a table similar to Table 5 in those cases.
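The wedge-sampling primitive itself is simple enough to sketch. The snippet below (illustrative Python, following the idea of Seshadhri et al. rather than their exact implementation; names are ours) samples wedges with probability proportional to the number centred at each vertex, checks closure, and converts the closed fraction into clustering-coefficient and triangle estimates.

```python
import random

# Wedge sampling: sample wedges (paths of length 2) uniformly at random, check
# which are closed, and derive the global clustering coefficient and the
# triangle count.

def sample_wedges(adj, num_samples, seed=None):
    rng = random.Random(seed)
    vertices = list(adj)
    # number of wedges centred at each vertex: C(deg(v), 2)
    wedge_counts = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in vertices]
    total_wedges = sum(wedge_counts)

    closed = 0
    for _ in range(num_samples):
        # pick a centre with probability proportional to its wedge count,
        # then two distinct neighbours uniformly at random
        centre = rng.choices(vertices, weights=wedge_counts, k=1)[0]
        a, b = rng.sample(sorted(adj[centre]), 2)
        if b in adj[a]:                           # the wedge is closed (a triangle)
            closed += 1

    clustering = closed / num_samples             # global clustering coefficient estimate
    triangles = clustering * total_wedges / 3     # each triangle closes 3 wedges
    return clustering, triangles

# toy graph: a triangle {0,1,2} with a pendant vertex 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sample_wedges(adj, num_samples=2000, seed=1))  # ≈ (0.6, 1.0)
```

The 3-path and 5-path sampling schemes of Jha et al. and MOSS-5 generalise exactly this step: sample a path uniformly, inspect the subgraph it induces, and re-weight using the non-induced occurrence counts of Table 5.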
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Summary: Biological and engineered networks have recently been shown to display network motifs: a small set of characteristic patterns that occur much more frequently than in randomized networks with the same degree sequence. Network motifs were demonstrated to play key information processing roles in biological regulation networks. Existing algorithms for detecting network motifs act by exhaustively enumerating all subgraphs with a given number of nodes in the network. The runtime of such algorithms increases strongly with network size. Here, we present a novel algorithm that allows estimation of subgraph concentrations and detection of network motifs at a runtime that is asymptotically independent of the network size. This algorithm is based on random sampling of subgraphs. Network motifs are detected with a surprisingly small number of samples in a wide variety of networks. Our method can be applied to estimate the concentrations of larger subgraphs in larger networks than was previously possible with exhaustive enumeration algorithms. We present results for high-order motifs in several biological networks and discuss their possible functions. ::: ::: Availability: A software tool for estimating subgraph concentrations and detecting network motifs (mfinder 1.1) and further information is available at http://www.weizmann.ac.il/mcb/UriAlon/ <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Graphlet frequency distribution (GFD) has recently become popular for characterizing large networks. However, the computation of GFD for a network requires the exact count of embedded graphlets in that network, which is a computationally expensive task. As a result, it is practically infeasible to compute the GFD for even a moderately large network. In this paper, we propose GUISE, which uses a Markov Chain Monte Carlo (MCMC) sampling method for constructing the approximate GFD of a large network. Our experiments on networks with millions of nodes show that GUISE obtains the GFD within few minutes, whereas the exhaustive counting based approach takes several days. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Exploring statistics of locally connected subgraph patterns (also known as network motifs) has helped researchers better understand the structure and function of biological and Online Social Networks (OSNs). Nowadays, the massive size of some critical networks—often stored in already overloaded relational databases—effectively limits the rate at which nodes and edges can be explored, making it a challenge to accurately discover subgraph statistics. In this work, we propose sampling methods to accurately estimate subgraph statistics from as few queried nodes as possible. We present sampling algorithms that efficiently and accurately estimate subgraph properties of massive networks. Our algorithms require no precomputation or complete network topology information. At the same time, we provide theoretical guarantees of convergence. We perform experiments using widely known datasets and show that, for the same accuracy, our algorithms require an order of magnitude less queries (samples) than the current state-of-the-art algorithms. 
<s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Scientists have shown that network motifs are key building block of various biological networks. Most of the existing exact methods for finding network motifs are inefficient simply due to the inherent complexity of this task. In recent years, researchers are considering approximate methods that save computation by sacrificing exact counting of the frequency of potential motifs. However, these methods are also slow when one considers the motifs of larger size. In this work, we propose two methods for approximate motif finding, namely SRW-rw, and MHRW based on Markov Chain Monte Carlo (MCMC) sampling. Both the methods are significantly faster than the best of the existing methods, with comparable or better accuracy. Further, as the motif size grows the complexity of the proposed methods grows linearly. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Algorithms for mining very large graphs, such as those representing online social networks, to discover the relative frequency of small subgraphs within them are of high interest to sociologists, computer scientists and marketeers alike. However, the computation of these network motif statistics via naive enumeration is infeasible for either its prohibitive computational costs or access restrictions on the full graph data. Methods to estimate the motif statistics based on random walks by sampling only a small fraction of the subgraphs in the large graph address both of these challenges. In this paper, we present a new algorithm, called the Waddling Random Walk (WRW), which estimates the concentration of motifs of any size. It derives its name from the fact that it sways a little to the left and to the right, thus also sampling nodes not directly on the path of the random walk. The WRW algorithm achieves its computational efficiency by not trying to enumerate subgraphs around the random walk but instead using a randomized protocol to sample subgraphs in the neighborhood of the nodes visited by the walk. In addition, WRW achieves significantly higher accuracy (measured by the closeness of its estimate to the correct value) and higher precision (measured by the low variance in its estimations) than the current state-of-the-art algorithms for mining subgraph statistics. We illustrate these advantages in speed, accuracy and precision using simulations on well-known and widely used graph datasets representing real networks. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Graphlets are induced subgraph patterns and have been frequently applied to characterize the local topology structures of graphs across various domains, e.g., online social networks (OSNs) and biological networks. Discovering and computing graphlet statistics are highly challenging. First, the massive size of real-world graphs makes the exact computation of graphlets extremely expensive. Secondly, the graph topology may not be readily available so one has to resort to web crawling using the available application programming interfaces (APIs). In this work, we propose a general and novel framework to estimate graphlet statistics of "any size". Our framework is based on collecting samples through consecutive steps of random walks. 
We derive an analytical bound on the sample size (via the Chernoff-Hoeffding technique) to guarantee the convergence of our unbiased estimator. To further improve the accuracy, we introduce two novel optimization techniques to reduce the lower bound on the sample size. Experimental evaluations demonstrate that our methods outperform the state-of-the-art method up to an order of magnitude both in terms of accuracy and time cost. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Mining graphlet statistics is very meaningful due to its wide applications in social networks, bioinformatics and information security, etc. However, it is a big challenge to exactly count graphlet statistics as the number of subgraphs exponentially increases with the graph size, so sampling algorithms are widely used to estimate graphlet statistics within reasonable time. However, existing sampling algorithms are not scalable for large graphlets, e.g., they may get stuck when estimating graphlets with more than five nodes. To address this issue, we propose a highly scalable algorithm, Scalable subgraph Sampling via Random Walk (SSRW), for graphlet counts and concentrations. SSRW samples graphlets by generating new nodes from the neighbors of previously visited nodes instead of fixed ones. Thanks to this flexibility, we can generate any k-graphlets in a unified way and estimate statistics of k-graphlet efficiently even for large k. Our extensive experiments on estimating counts and concentrations of \(\{4,5,6,7\}\)-graphlets show that SSRW algorithm is scalable, accurate and fast. <s> BIB007
A random walk in a graph G is a sequence of nodes, R, of the form $R = (n_1, n_2, \ldots)$, where $n_1$ is the seed node and $n_i$ the i-th node visited in the walk. A random walk can also be seen as a Markov chain. We identify two main approaches to sample subgraphs using random walks. The first is incrementing the size of the walk until a sequence of k distinct nodes is drawn, forming a k-subgraph, which is then identified by an isomorphism test. The second approach is to consider a graph of relationships between subgraphs, where two subgraphs are connected if one can be obtained from the other by adding or removing a node or an edge. A random walk is then performed on this graph instead of on the original one. Kashtan et al. BIB001 , in their seminal work commonly called ESA (Edge Sampling), implemented one of the first subgraph sampling methods in the MFinder software. The authors propose to do a random walk on the graph, sampling one edge at a time until a set of k nodes is found, from which the subgraph induced by that set of nodes is obtained. This method results in a biased estimator; to correct the bias, the authors propose to re-weight the samples, which takes time exponential in the size of the subgraphs. Bhuiyan et al. BIB002 develop GUISE, which computes the graphlet degree distribution for subgraphs of size 3, 4 and 5 in undirected networks. The algorithm is based on Markov Chain Monte Carlo (MCMC) sampling. It works by sampling a seed graphlet, calculating its neighbourhood (a set of other graphlets), picking one at random and calculating an acceptance probability for the transition to this new graphlet. This process is then repeated until a predefined number of samples is taken from the graph. The neighbourhood of a graphlet is similar to the graph of relationships previously mentioned, but to obtain a k-graphlet from another k-graphlet, a node from the original one is removed and, if the remaining k − 1 nodes are connected, their adjacency lists are concatenated and nodes are picked from there to form the new k-graphlet. A similar approach to GUISE is used by Saha and Al Hasan BIB004 , where MCMC sampling is also used to compute subgraph concentrations. A difference with respect to GUISE is that the size of the graphlets is theoretically unbounded and only a specific size k is counted, whereas GUISE counts graphlets of size 3, 4 and 5 simultaneously. They also suggest a modified version where the acceptance probability is always one (that is, there is always a transition to the new subgraph), which introduces a bias towards graphlets containing nodes with high degree. In turn, they propose an estimator that re-weights the concentrations to remove this bias. Wang et al. BIB003 propose a random walk based method to estimate subgraph concentrations that aims to improve on the approach taken by GUISE. The main improvement over GUISE is that no samples are rejected, avoiding the cost of sampling without any gain of information. The authors use a graph of relationships between connected induced subgraphs, where two k-subgraphs are connected if they share k − 1 nodes, but this graph is not explicitly built, reducing memory costs. The basic algorithm is a simple random walk over this graph of relationships. The authors also present two improvements: Pairwise Subgraph Random Walk (PSRW), which estimates size-k subgraphs by looking at the graph of relationships composed of (k − 1)-subgraphs, and Mixed Subgraph Sampling (MSS), which estimates subgraphs of size k − 1, k and k + 1 simultaneously.
Han and Sethu BIB005 present an algorithm to estimate subgraph concentrations based on random walks. Their algorithm, Waddling Random Walk (WRW), gets its name from how the random walk is performed, allowing the sampling not only of nodes on the path of the walk but also of random nodes queried in their neighbourhood. Let l be the number of vertices (with repetition) in the shortest path of a particular k-graphlet. The goal of waddling is to reduce the number of steps the walk has to take to identify graphlets with l > k. While executing a random walk to identify a k-subgraph, the waddling approach limits the number of nodes explored to the size of the subgraph, k. Chen and Lui propose a random walk based algorithm to estimate graphlet counts in online social networks, whose access is often restricted and whose full topology is hidden behind a prohibitive query cost. With this context in mind, the authors introduce the concepts of touched and visible subgraphs. The former are subgraphs composed of vertices whose neighbourhood is accessible; the latter possess one and only one vertex with an inaccessible neighbourhood. Their method, IMPR, works by generating (k − 1)-node touched subgraphs via random walks and combining them with their nodes' neighbourhoods to obtain k-node visible subgraphs, which form the k-node samples. Chen et al. BIB006 introduce a new framework that incorporates PSRW as a special case. To sample k-subgraphs, the authors also use a graph of relationships between connected induced d-subgraphs, d ∈ {1, .., k − 1}, and perform a random walk over this graph. The difference to PSRW is that PSRW only uses d = k − 1, which becomes ineffective as k grows to larger sizes. The authors also augment this sampling method with a different re-weighting coefficient to improve estimation accuracy, and add non-backtracking random walks, which eliminate invalid states in the Markov chain that do not contribute to the estimation. Yang et al. BIB007 introduce another random walk algorithm, Scalable subgraph Sampling via Random Walk (SSRW), able to compute both frequencies and concentrations of undirected subgraphs of size up to 7. The next nodes in the random walk are picked from the concatenation of the neighbourhoods of all nodes previously selected to be part of the sampled subgraph. The authors present an unbiased estimator and compare it against Chen et al. BIB006 and Han and Sethu BIB005 , obtaining better results than both on the single network tested.
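To illustrate the second family of approaches, the sketch below (Python, a simplified illustration rather than the exact GUISE/PSRW procedure; it assumes the relationship graph over connected 3-node sets is connected, ignores burn-in, and uses hypothetical function names) walks over that relationship graph and re-weights each visited state by the inverse of its degree, turning the degree-biased stationary distribution into an estimate of the triangle/path concentrations.

```python
import random
from itertools import combinations

# Random walk on the relationship graph whose states are connected 3-node
# induced subgraphs; two states are adjacent when they share 2 nodes.  The
# walk's stationary distribution is proportional to a state's degree, so each
# visited state is re-weighted by 1/degree.

def is_connected3(adj, nodes):
    a, b, c = nodes
    return (b in adj[a]) + (c in adj[a]) + (c in adj[b]) >= 2

def classify3(adj, nodes):
    a, b, c = nodes
    edges = (b in adj[a]) + (c in adj[a]) + (c in adj[b])
    return "triangle" if edges == 3 else "path"

def state_neighbours(adj, state):
    """All connected 3-node states sharing exactly 2 nodes with `state`."""
    result = set()
    for a, b in combinations(state, 2):
        for x in (adj[a] | adj[b]) - set(state):
            candidate = frozenset((a, b, x))
            if is_connected3(adj, candidate):
                result.add(candidate)
    return list(result)

def rw_concentrations(adj, steps, seed=None):
    rng = random.Random(seed)
    # seed state: an edge plus a vertex adjacent to one of its endpoints
    u = rng.choice([v for v in adj if adj[v]])
    v = rng.choice(sorted(adj[u]))
    w = rng.choice(sorted((adj[u] | adj[v]) - {u, v}))
    state = frozenset((u, v, w))

    weights = {"triangle": 0.0, "path": 0.0}
    for _ in range(steps):
        nbrs = state_neighbours(adj, state)
        if not nbrs:
            break                                        # degenerate component; stop
        weights[classify3(adj, state)] += 1.0 / len(nbrs)  # 1/degree re-weighting
        state = rng.choice(nbrs)                         # uniform move to a neighbour state
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

# toy graph: a triangle {0,1,2} with a pendant vertex 3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(rw_concentrations(adj, steps=5000, seed=0))  # ≈ {'triangle': 0.33, 'path': 0.67}
```

On this toy graph there are three connected 3-node sets, one triangle and two paths, so the re-weighted walk converges to concentrations of roughly 1/3 and 2/3, matching the exact values.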
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Colour Coding <s> Identifying motifs (or commonly occurring subgraphs/templates) has been found to be useful in a number of applications, such as biological and social networks; they have been used to identify building blocks and functional properties, as well as to characterize the underlying networks. Enumerating subgraphs is a challenging computational problem, and all prior results have considered networks with a few thousand nodes. In this paper, we develop a parallel subgraph enumeration algorithm, ParSE, that scales to networks with millions of nodes. Our algorithm is a randomized approximation scheme, that estimates the subgraph frequency to any desired level of accuracy, and allows enumeration of a class of motifs that extends those considered in prior work. Our approach is based on parallelization of an approach called color coding, combined with a stream based partitioning. We also show that ParSE scales well with the number of processors, over a large range. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Colour Coding <s> Relational sub graph analysis, e.g. finding labeled sub graphs in a network, which are isomorphic to a template, is a key problem in many graph related applications. It is computationally challenging for large networks and complex templates. In this paper, we develop SAHAD, an algorithm for relational sub graph analysis using Hadoop, in which the sub graph is in the form of a tree. SAHAD is able to solve a variety of problems closely related with sub graph isomorphism, including counting labeled/unlabeled sub graphs, finding supervised motifs, and computing graph let frequency distribution. We prove that the worst case work complexity for SAHAD is asymptotically very close to that of the best sequential algorithm. On a mid-size cluster with about 40 compute nodes, SAHAD scales to networks with up to 9 million nodes and a quarter billion edges, and templates with up to 12 nodes. To the best of our knowledge, SAHAD is the first such Hadoop based subgraph/subtree analysis algorithm, and performs significantly better than prior approaches for very large graphs and templates. Another unique aspect is that SAHAD is also amenable to running quite easily on Amazon EC2, without needs for any system level optimization. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Colour Coding <s> We present a new shared-memory parallel algorithm and implementation called FASCIA for the problems of approximate sub graph counting and sub graph enumeration. The problem of sub graph counting refers to determining the frequency of occurrence of a given sub graph (or template) within a large network. This is a key graph analytic with applications in various domains. In bioinformatics, sub graph counting is used to detect and characterize local structure (motifs) in protein interaction networks. Exhaustive enumeration and exact counting is extremely compute-intensive, with running time growing exponentially with the number of vertices in the template. In this work, we apply the color coding technique to determine approximate counts of non-induced occurrences of the sub graph in the original network. Color coding gives a fixed-parameter algorithm for this problem, using a dynamic programming-based counting approach. 
Our new contributions are a multilevel shared-memory parallelization of the counting scheme and several optimizations to reduce the memory footprint. We show that approximate counts can be obtained for templates with up to 12 vertices, on networks with up to millions of vertices and edges. Prior work on this problem has only considered out-of-core parallelization on distributed platforms. With our new counting scheme, data layout optimizations, and multicore parallelism, we demonstrate a significant speedup over the current state-of-the-art for sub graph counting. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Colour Coding <s> Counting graphlets is a well-studied problem in graph mining and social network analysis. Recently, several papers explored very simple and natural algorithms based on Monte Carlo sampling of Markov Chains (MC), and reported encouraging results. We show, perhaps surprisingly, that such algorithms are outperformed by color coding (CC) [2], a sophisticated algorithmic technique that we extend to the case of graphlet sampling and for which we prove strong statistical guarantees. Our computational experiments on graphs with millions of nodes show CC to be more accurate than MC; furthermore, we formally show that the mixing time of the MC approach is too high in general, even when the input graph has high conductance. All this comes at a price however. While MC is very efficient in terms of space, CC’s memory requirements become demanding when the size of the input graph and that of the graphlets grow. And yet, our experiments show that CC can push the limits of the state-of-the-art, both in terms of the size of the input graph and of that of the graphlets. <s> BIB004
The technique of colour coding has been adapted to the problem of approximating subgraph frequencies by Zhao et al. BIB001 , Zhao et al. BIB002 and Slota and Madduri BIB003 . However, all these works focus on specific categories of subgraphs; for example, SAHAD BIB002 only finds subgraphs that are in the form of a tree. More recently, Bressan et al. BIB004 presented a general algorithm using colour coding that works for any undirected subgraph of theoretically unbounded size. The algorithm works in two phases. The first, based on the original description of colour coding , counts the number of non-induced trees (treelets) in the graph, with the particularity that the nodes were previously partitioned into k sets and attributed a label (a colour). These treelets must be constituted solely of nodes with different colours. This part of the algorithm outputs counters $C(T, S, v)$, for every $v \in V(G)$, giving the number of treelets rooted in v that are isomorphic to T and whose colours span the colour set S. The second phase of the algorithm is the sampling part, which is focused on sampling treelets uniformly at random. To pick a treelet with k nodes, the authors choose a random node v, then a treelet shape T with probability proportional to $C(T, [k], v)$, and finally one of the treelets that is rooted in v, is isomorphic to T and is coloured by [k]. Given a sampled treelet $T_k$, the authors consider the graphlet $G_k$ induced by the nodes of $T_k$ and increment its frequency by $1/\sigma(G_k)$, where $\sigma(G_k)$ is the number of spanning trees of $G_k$.
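The colour-coding primitive underlying the first phase can be sketched compactly. The code below (illustrative Python, not the treelet machinery of Bressan et al.; all names are ours) randomly colours the vertices with k colours and counts "colourful" k-vertex simple paths by dynamic programming over colour sets; since a fixed path is colourful with probability $k!/k^k$, rescaling the colourful count gives an unbiased estimate of the number of k-vertex paths.

```python
import random
from math import factorial

# Colour coding for paths: colour vertices uniformly at random with k colours,
# count colourful k-vertex paths (all colours distinct) by DP over colour sets,
# and rescale by k^k / k! to estimate the total number of k-vertex paths.

def colourful_path_estimate(adj, k, trials=10, seed=None):
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        colour = {v: rng.randrange(k) for v in adj}
        # dp[v][S]: number of colourful paths ending at v whose colour set is exactly S
        dp = {v: {frozenset([colour[v]]): 1} for v in adj}
        for _size in range(2, k + 1):
            new_dp = {v: {} for v in adj}
            for u in adj:
                for S, cnt in dp[u].items():
                    for v in adj[u]:
                        if colour[v] in S:
                            continue                 # would repeat a colour
                        S2 = S | {colour[v]}
                        new_dp[v][S2] = new_dp[v].get(S2, 0) + cnt
            dp = new_dp
        colourful = sum(cnt for v in adj for cnt in dp[v].values()) // 2  # each path seen from both ends
        estimates.append(colourful * k**k / factorial(k))
    return sum(estimates) / len(estimates)

# toy graph: a triangle {0,1,2} with a pendant vertex 3 (5 simple paths on 3 vertices)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(colourful_path_estimate(adj, k=3, trials=500, seed=0))  # ≈ 5
```

As in the first phase above, these are non-induced counts (paths inside denser graphlets are included); generalising the dynamic programming from paths to rooted treelets is what yields the counters $C(T, S, v)$ used in the sampling phase.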
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper. ::: ::: Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. ::: ::: Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Network motifs have been demonstrated to be the building blocks in many biological networks such as transcriptional regulatory networks. Finding network motifs plays a key role in understanding system level functions and design principles of molecular interactions. In this paper, we present a novel definition of the neighborhood of a node. Based on this concept, we formally define and present an effective algorithm for finding network motifs. The method seeks a neighborhood assignment for each node such that the induced neighborhoods are partitioned with no overlap. We then present a parallel algorithm to find network motifs using a parallel cluster. The algorithm is applied on an E. coli transcriptional regulatory network to find motifs with size up to six. Compared with previous algorithms, our algorithm performs better in terms of running time and precision. Based on the motifs that are found in the network, we further analyze the topology and coverage of the motifs. The results suggest that a small number of key motifs can form the motifs of a bigger size. Also, some motifs exhibit a correlation with complex functions. This study presents a framework for detecting the most significant recurring subgraph patterns in transcriptional regulatory networks. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Motifs in a network are small connected subnetworks that occur in significantly higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Kashtan et al. [Bioinformatics, 2004] proposed a sampling algorithm for efficiently performing the computationally challenging task of detecting network motifs. However, among other drawbacks, this algorithm suffers from sampling bias and is only efficient when the motifs are small (3 or 4 nodes). 
Based on a detailed analysis of the previous algorithm, we present a new algorithm for network motif detection which overcomes these drawbacks. Experiments on a testbed of biological networks show our algorithm to be orders of magnitude faster than previous approaches. This allows for the detection of larger motifs in bigger networks than was previously possible, facilitating deeper insight into the field. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> The study of biological networks and network motifs can yield significant new insights into systems biology. Previous methods of discovering network motifs - network-centric subgraph enumeration and sampling - have been limited to motifs of 6 to 8 nodes, revealing only the smallest network components. New methods are necessary to identify larger network sub-structures and functional motifs. ::: ::: Here we present a novel algorithm for discovering large network motifs that achieves these goals, based on a novel symmetry-breaking technique, which eliminates repeated isomorphism testing, leading to an exponential speed-up over previous methods. This technique is made possible by reversing the traditional network-based search at the heart of the algorithm to a motif-based search, which also eliminates the need to store all motifs of a given size and enables parallelization and scaling. Additionally, our method enables us to study the clustering properties of discovered motifs, revealing even larger network elements. ::: ::: We apply this algorithm to the protein-protein interaction network and transcription regulatory network of S. cerevisiae, and discover several large network motifs, which were previously inaccessible to existing methods, including a 29-node cluster of 15-node motifs corresponding to the key transcription machinery of S. cerevisiae. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> We introduce GPUMiner, a novel parallel data mining system that utilizes new-generation graphics processing units (GPUs). Our system relies on the massively multi-threaded SIMD (Single Instruction, Multiple-Data) architecture provided by GPUs. As specialpurpose co-processors, these processors are highly optimized for graphics rendering and rely on the CPU for data input/output as well as complex program control. Therefore, we design GPUMiner to consist of the following three components: (1) a CPU-based storage and buffer manager to handle I/O and data transfer between the CPU and the GPU, (2) a GPU-CPU co-processing parallel mining module, and (3) a GPU-based mining visualization module. We design the GPU-CPU co-processing scheme in mining depending on the complexity and inherent parallelism of individual mining algorithms. We provide the visualization module to facilitate users to observe and interact with the mining process online. We have implemented the k-means clustering and the Apriori frequent pattern mining algorithms in GPUMiner. Our preliminary results have shown significant speedups over state-of-the-art CPU implementations on a PC with a G80 GPU and a quad-core CPU. We will demonstrate the mining process through our visualization module. Code and documentation of GPUMiner are available at http://code.google.com/p/gpuminer/. 
<s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Finding and counting the occurrences of a collection of subgraphs within another larger network is a computationally hard problem, closely related to graph isomorphism. The subgraph count is by itself a very powerful characterization of a network and it is crucial for other important network measurements. G-tries are a specialized data-structure designed to store and search for subgraphs. By taking advantage of subgraph common substructure, g-tries can provide considerable speedups over previously used methods. In this paper we present a parallel algorithm based precisely on g-tries that is able to efficiently find and count subgraphs. The algorithm relies on randomized receiver-initiated dynamic load balancing and is able to stop its computation at any given time, efficiently store its search position, divide what is left to compute in two halfs, and resume from where it left. We apply our algorithm to several representative real complex networks from various domains and examine its scalability. We obtain an almost linear speedup up to 128 processors, thus allowing us to reach previously unfeasible limits. We showcase the multidisciplinary potential of the algorithm by also applying it to network motif discovery. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Graphs are a fundamental data representation that has been used extensively in various domains. In graph-based applications, a systematic exploration of the graph such as a breadth-first search (BFS) often serves as a key component in the processing of their massive data sets. In this paper, we present a new method for implementing the parallel BFS algorithm on multi-core CPUs which exploits a fundamental property of randomly shaped real-world graph instances. By utilizing memory bandwidth more efficiently, our method shows improved performance over the current state-of-the-art implementation and increases its advantage as the size of the graph increases. We then propose a hybrid method which, for each level of the BFS algorithm, dynamically chooses the best implementation from: a sequential execution, two different methods of multicore execution, and a GPU execution. Such a hybrid approach provides the best performance for each graph size while avoiding poor worst-case performance on high-diameter graphs. 
Finally, we study the effects of the underlying architecture on BFS performance by comparing multiple CPU and GPU systems, a high-end GPU system performed as well as a quad-socket high-end CPU system. <s> BIB008 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Breadth-first search (BFS) is a core primitive for graph traversal and a basis for many higher-level graph analysis algorithms. It is also representative of a class of parallel computations whose memory accesses and work distribution are both irregular and data-dependent. Recent work has demonstrated the plausibility of GPU sparse graph traversal, but has tended to focus on asymptotically inefficient algorithms that perform poorly on graphs with non-trivial diameter. We present a BFS parallelization focused on fine-grained task management constructed from efficient prefix sum that achieves an asymptotically optimal O(|V|+|E|) work complexity. Our implementation delivers excellent performance on diverse graphs, achieving traversal rates in excess of 3.3 billion and 8.3 billion traversed edges per second using single and quad-GPU configurations, respectively. This level of performance is several times faster than state-of-the-art implementations both CPU and GPU platforms. <s> BIB009 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. <s> BIB010 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task. At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. 
This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB011 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Enumerating all subgraphs of an input graph is an important task for analyzing complex networks. Valuable information can be extracted about the characteristics of the input graph using all-subgraph enumeration. Not withstanding, the number of subgraphs grows exponentially with growth of the input graph or by increasing the size of the subgraphs to be enumerated. Hence, all-subgraph enumeration is very time consuming when the size of the subgraphs or the input graph is big. We propose a parallel solution named Subenum which in contrast to available solutions can perform much faster. Subenum enumerates subgraphs using edges instead of vertices, and this approach leads to a parallel and load-balanced enumeration algorithm that can have efficient execution on current multicore and multiprocessor machines. Also, Subenum uses a fast heuristic which can effectively accelerate nonisomorphism subgraph enumeration. Subenum can efficiently use external memory, and unlike other subgraph enumeration methods, it is not associated with the main memory limits of the used machine. Hence, Subenum can handle large input graphs and subgraph sizes that other solutions cannot handle. Several experiments are done using real-world input graphs. Compared to the available solutions, Subenum can enumerate subgraphs several orders of magnitude faster and the experimental results show that the performance of Subenum scales almost linearly by using additional processor cores. <s> BIB012 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB013 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> The identification of network motifs has important applications in numerous domains, such as pattern detection in biological networks and graph analysis in digital circuits. 
However, mining network motifs is computationally challenging, as it requires enumerating subgraphs from a real-life graph, and computing the frequency of each subgraph in a large number of random graphs. In particular, existing solutions often require days to derive network motifs from biological networks with only a few thousand vertices. To address this problem, this paper presents a novel study on network motif discovery using Graphical Processing Units (GPUs). The basic idea is to employ GPUs to parallelize a large number of subgraph matching tasks in computing subgraph frequencies from random graphs, so as to reduce the overall computation time of network motif discovery. We explore the design space of GPU-based subgraph matching algorithms, with careful analysis of several crucial factors that affect the performance of GPU programs. Based on our analysis, we develop a GPU-based solution that (i) considerably differs from existing CPU-based methods, and (ii) exploits the strengths of GPUs in terms of parallelism while mitigating their limitations in terms of the computation power per GPU core. With extensive experiments on a variety of biological networks, we show that our solution is up to two orders of magnitude faster than the best CPU-based approach, and is around 20 times more cost-effective than the latter, when taking into account the monetary costs of the CPU and GPUs used. <s> BIB014 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB015 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Processing large complex networks like social networks or web graphs has recently attracted considerable interest. In order to do this in parallel, we need to partition them into pieces of about equal size. Unfortunately, previous parallel graph partitioners originally developed for more regular mesh-like networks do not work well for these networks. This paper addresses this problem by parallelizing and adapting the label propagation technique originally developed for graph clustering. By introducing size constraints, label propagation becomes applicable for both the coarsening and the refinement phase of multilevel graph partitioning. 
We obtain very high quality by applying a highly parallel evolutionary algorithm to the coarsened graph. The resulting system is both more scalable and achieves higher quality than state-of-the-art systems like ParMetis or PT-Scotch. For large complex networks the performance differences are very big. For example, our algorithm can partition a web graph with 3.3 billion edges in less than sixteen seconds using 512 cores of a high performance cluster while producing a high quality partition -- none of the competing systems can handle this graph on our system. <s> BIB016 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Networks are powerful in representing a wide variety of systems in many fields of study. Networks are composed of smaller substructures (subgraphs) that characterize them and give important information related to their topology and functionality. Therefore, discovering and counting these subgraph patterns is very important towards mining the features of networks. Algorithmically, subgraph counting in a network is a computationally hard problem and the needed execution time grows exponentially as the size of the subgraph or the network increases. The main goal of this paper is to contribute towards subgraph search, by providing an accessible and scalable parallel methodology for counting subgraphs. For that we present a dynamic iterative MapReduce strategy to parallelize algorithms that induce an unbalanced search tree, and apply it in the subgraph counting realm. At the core of our methods lies the g-trie, a state-of-the-art data structure that was created precisely for this task. Our strategy employs an adaptive time threshold and an efficient work-sharing mechanism to dynamically do load balancing between the workers. We evaluate our implementations using Spark on a large set of representative complex networks from different fields. The results obtained are very promising and we achieved a consistent and almost linear speedup up to 32 cores, with an average efficiency close to 80+. To the best of our knowledge this is the fastest and most scalable method for subgraph counting within the MapReduce programming model. <s> BIB017
One key aspect necessary to achieve a scalable parallel computation is finding a balanced work division (i.e., splitting work-units evenly between workers, the parallel processors or threads). A naive possibility for subgraph counting is to assign |V(G)|/|P| nodes from network G to each worker p ∈ P. This egalitarian division is a poor choice since two nodes can induce very different search spaces; for instance, hub-like nodes induce many more subgraph occurrences than nearly-isolated nodes. Instead of performing an egalitarian division, Wang et al. BIB002 discriminate nodes by their degree and distribute them among workers, the idea being that each worker gets roughly the same mix of hard and easy work-units. Despite achieving a more balanced division than the naive version, there is still no guarantee that node degree is sufficient to determine the actual complexity of a work-unit. Distributing work immediately (without runtime adjustments) is called a static division. Wang et al. did not assess scalability in BIB002 , but they showed that their parallel algorithm was faster than Mfinder on an E. coli transcriptional regulation network. Since their method was not named, we refer to it as ParWang henceforth. The first parallel strategy with a single-subgraph-search algorithm at its core, namely Grochow BIB004 , was by Schatz et al. Since the algorithm was not named, and it targets a distributed memory (DM) architecture (i.e., a parallel cluster), we refer to it as DM-Grochow. In order to distribute query subgraphs (also called isoclasses) among workers they employed two strategies: naive and first-fit. The naive strategy is similar to ParWang's. In the first-fit model, each slave processor requests a subgraph type (or isoclass) from the master and enumerates all occurrences of that type (e.g., cliques, stars, chains). This division is dynamic, as opposed to static, but it is not balanced since different isoclasses induce very different search trees. For instance, in sparse networks k-cliques are faster to compute than k-chains. Using 64 cores, Schatz et al. obtained ≈10-15x speedups over the sequential version on a yeast PPI network. They also tried another novel approach by partitioning the network instead of partitioning the subgraph set. However, finding adequate partitions for subgraph counting is a very hard problem due to partition overlaps and subgraphs traversing different partitions, and no speedup was obtained using this strategy. We should note that parallel graph partitioning remains an active research problem to this day BIB016 , but it is out of the scope of this work. All parallel algorithms mentioned so far traverse occurrences in a depth-first (DFS) fashion, since doing so avoids having to store intermediate states. By contrast, Liu et al. BIB006 use a breadth-first search (BFS) where, at each step, all subgraph occurrences found in the previous one are expanded by one node. Their algorithm, MRPF, is implemented following the MapReduce model BIB001 , which is intrinsically a BFS-like framework. In MRPF, mappers extend size-k occurrences to size k + 1 and reducers remove repeated occurrences. At each BFS level, MRPF divides work-units evenly among workers. We still consider this to be a static division since no adjustments are made at runtime. Thus, in our terminology, static divisions can be performed only once (at the start of the computation in DFS-like algorithms) or multiple times (once per level in BFS-like algorithms).
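To make the contrast between these static strategies concrete, the following sketch (our own simplified illustration, not the actual ParWang implementation) compares a naive egalitarian split of the vertex set with a degree-aware split that deals vertices round-robin by decreasing degree, so that each worker receives a similar mix of hub-like and low-degree nodes. Note that both remain static divisions: nothing is adjusted once the computation starts.

```python
# Illustrative sketch of static work division for subgraph counting.
# This is not the ParWang implementation; it only contrasts an egalitarian
# split with a degree-aware split of the vertex set.

def egalitarian_split(vertices, num_workers):
    """Assign |V(G)|/|P| consecutive vertices to each worker."""
    chunk = (len(vertices) + num_workers - 1) // num_workers
    return [vertices[i * chunk:(i + 1) * chunk] for i in range(num_workers)]

def degree_aware_split(graph, num_workers):
    """Sort vertices by degree and deal them round-robin, so every worker
    receives a similar mix of hub-like and low-degree vertices."""
    by_degree = sorted(graph, key=lambda v: len(graph[v]), reverse=True)
    buckets = [[] for _ in range(num_workers)]
    for i, v in enumerate(by_degree):
        buckets[i % num_workers].append(v)
    return buckets

if __name__ == "__main__":
    # Toy graph as an adjacency list: vertex -> set of neighbours.
    graph = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {0, 3}, 5: set()}
    print(egalitarian_split(list(graph), 3))   # consecutive chunks of vertices
    print(degree_aware_split(graph, 3))        # the hub (vertex 0) no longer shares a bucket with other heavy nodes
```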
Overhead caused by reading and writing occurrences to files between BFS levels reduces MRPF's efficiency, but the authors report speedups of ≈ 7x on a 48-node cluster when compared to execution on a single processor. The DFS-based algorithms discussed so far either perform a complete work division right at the beginning (ParWang), or perform a partial work division at the beginning and then have workers request work when idle (DM-Grochow). In both cases, a worker has to finish a work-unit before proceeding to a new one. Therefore, it is possible that a worker gets stuck processing a very computationally heavy work-unit while all the others are idle. This has to do with work-unit granularity: work-units at the top of the DFS search space have high (coarse) granularity since the algorithm has to explore a large search space. BFS-based algorithms mitigate this problem because work-units are much more fine-grained (usually a worker only extends its work-unit(s) by one node). The work by Ribeiro et al. was the first to implement work sharing during parallel subgraph counting, alleviating the problem of coarse work-unit granularity in DFS-based subgraph counting algorithms. Workers have a splitting threshold that dictates how likely they are to put part of a work-unit in a global work queue instead of fully processing it. A work-unit is divided using diagonal work splitting, which gathers unprocessed nodes at level k (i.e., nodes that are reached by expanding the current work-unit) and recursively goes up in the search tree, also gathering unprocessed nodes of levels k − i, i < k, until reaching level 1. This process results in a set of finer-grained work-units that induces a more balanced search space than static and first-fit divisions. Ribeiro et al. use ESU as their core enumeration algorithm and propose a master-worker (M-W) architecture in which a master node manages a work queue and distributes its work-units among slave workers. This strategy, DM-ESU, was the first to achieve near-linear speedups (≈128x on a 128-node cluster) on a set of heterogeneous networks. A subsequent version BIB007 used GTries as its base algorithm and implemented a worker-worker (W-W) architecture where workers perform work stealing. DM-GTries improves upon DM-ESU by using a faster enumeration algorithm (GTries) and by having all workers perform subgraph enumeration (without dedicating a node to work queue management). Similar implementations (based on W-W sharing and diagonal splitting) of GTries and FASE were also developed for shared memory (SM) environments, achieving near-linear speedups on a 64-core machine BIB010 BIB011 . The main advantages of SM implementations are that work sharing is faster (since no message passing is necessary) and that SM architectures (such as multicores) are a commodity while DM architectures (such as clusters) are not. Instead of developing efficient work sharing strategies, Shahrivari and Jalili BIB012 try to avoid the unbalanced computation induced by vertex-based work-unit division. Subenum is an adaptation of ESU which uses edges as starting work-units, achieving near-linear speedup (≈10x on a 12-core machine). Using edges as starting work-units is also more suitable for the MapReduce model since edges are finer-grained work-units than vertices. In a follow-up work BIB013 , Shahrivari and Jalili propose a MapReduce algorithm, MRSUB, which greatly improves upon BIB006 , reporting a speedup of ≈ 34x on a 40-core machine. Like Subenum, MRSUB does not support work sharing between workers.
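The general mechanism of threshold-based work sharing can be sketched as follows. The code below is a single-process simulation of the idea on top of an ESU-style enumeration, with hypothetical helper names of our own; it is not the DM-ESU or DM-GTries code, which splits the recursion tree across MPI processes using diagonal splitting.

```python
# Sketch of threshold-based work sharing on top of an ESU-style DFS
# enumeration (single-process simulation of the mechanism; the real
# DM-ESU / DM-GTries implementations share work across MPI processes).
import random
from collections import deque

def count_k_subgraphs(graph, k, split_prob=0.3, seed=0):
    rng = random.Random(seed)
    shared = deque()   # stands in for the global work queue shared by workers
    local = []         # this worker's private DFS stack
    count = 0

    # Initial work-units: one per vertex, with the ESU rule "only add ids > root".
    for root in graph:
        shared.append(((root,), {u for u in graph[root] if u > root}, root))

    while shared or local:
        sub, ext, root = local.pop() if local else shared.popleft()
        if len(sub) == k:
            count += 1
            continue
        remaining = list(ext)
        while remaining:
            w = remaining.pop()
            # exclusive neighbourhood of w w.r.t. the current subgraph
            excl = {u for u in graph[w]
                    if u > root and u not in sub
                    and all(u not in graph[s] for s in sub)}
            unit = (sub + (w,), set(remaining) | excl, root)
            if rng.random() < split_prob:
                shared.append(unit)      # offload: an idle worker could take it
            else:
                local.append(unit)       # keep it and continue depth-first
    return count

if __name__ == "__main__":
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(count_k_subgraphs(path, 3))    # 2 connected 3-node subgraphs
```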
A MapReduce algorithm with work sharing was put forward by Naser-eddin and Ribeiro BIB017 , henceforth called MR-GTries. Using work sharing with timed redistribution (i.e., after a certain time, every worker stops and work is fully redistributed), they report a speedup of ≈ 26x on a 32-core machine. While the efficiency of MRSUB and MR-GTries is comparable (≈ 80%), the latter has a much faster sequential algorithm at its core; therefore, in terms of absolute runtime, MR-GTries is the fastest MapReduce subgraph counting algorithm that we know of. Graphics processing units (GPUs) are processors specialized in image generation, but numerous general-purpose tasks have been adapted to them BIB005 BIB008 BIB009 . GPUs are appealing due to their large number of cores, reaching hundreds or thousands of parallel threads, whereas commodity multicores typically have no more than a dozen. However, algorithms that rely on graph traversal are not well suited for the GPU framework due to branching code, non-coalesced memory accesses and coarse work-unit granularity BIB009 . Milinković et al. were among the first to follow a GPU approach (GPU-Orca), with limited success. Lin et al. BIB014 put forward a GPU algorithm (henceforth referred to as Lin since it was unnamed) mostly targeted at network motif discovery but also with some emphasis on efficient subgraph enumeration. Lin avoids duplicates in a similar fashion to ESU BIB003 , and auxiliary arrays are used to mitigate uncoalesced memory accesses. A BFS-style traversal is used (extending each subgraph one node at a time) to better balance work-units among threads. They compare Lin running on a 2496-core GPU (Tesla K20) against parallel CPU algorithms and report a speedup of ≈10x over a 6-core execution of the fastest CPU algorithm, DM-GTries. Rossi and Zhou proposed the first algorithm that combines multiple GPUs and CPUs BIB015 . Their method dynamically distributes work between CPUs and GPUs, where unbalanced computation is given to the CPUs whereas GPUs compute the more regular work-units. Since their method was not named, we refer to it as GPU-PGD. Their hybrid CPU-GPU version achieves speedups of ≈ 20x to ≈ 200x when compared to sequential PGD, depending largely on the network. As mentioned in Section 3, PGD is one of the fastest methods for sequential subgraph counting. As such, GPU-PGD is the fastest subgraph counting algorithm currently available as far as we know. However, GPU-PGD is limited to 4-node subgraphs, while DM-GTries is the fastest general approach.
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Distributed Memory (DM). <s> Finding and counting the occurrences of a collection of subgraphs within another larger network is a computationally hard problem, closely related to graph isomorphism. The subgraph count is by itself a very powerful characterization of a network and it is crucial for other important network measurements. G-tries are a specialized data-structure designed to store and search for subgraphs. By taking advantage of subgraph common substructure, g-tries can provide considerable speedups over previously used methods. In this paper we present a parallel algorithm based precisely on g-tries that is able to efficiently find and count subgraphs. The algorithm relies on randomized receiver-initiated dynamic load balancing and is able to stop its computation at any given time, efficiently store its search position, divide what is left to compute in two halfs, and resume from where it left. We apply our algorithm to several representative real complex networks from various domains and examine its scalability. We obtain an almost linear speedup up to 128 processors, thus allowing us to reach previously unfeasible limits. We showcase the multidisciplinary potential of the algorithm by also applying it to network motif discovery. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Distributed Memory (DM). <s> Many natural structures can be naturally represented by complex networks. Discovering network motifs, which are overrepresented patterns of inter-connections, is a computationally hard task related to graph isomorphism. Sequential methods are hindered by an exponential execution time growth when we increase the size of motifs and networks. In this article we study the opportunities for parallelism in existing methods and propose new parallel strategies that adapt and extend one of the most efficient serial methods known from the Fanmod tool. We propose both a master-worker strategy and one with distributed control, in which we employ a randomized receiver initiated methodology capable of providing dynamic load balancing during the whole computation process. Our strategies are capable of dealing both with exact and approximate network motif discovery. We implement and apply our algorithms to a set of representative networks and examine their scalability up to 128 processing cores. We obtain almost linear speedups, showcasing the efficiency of our proposed approach and are able to reach motif sizes that were not previously achievable using conventional serial algorithms. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Distributed Memory (DM). <s> We present a novel distributed algorithm for counting all four-node induced subgraphs in a big graph. These counts, called the $4$-profile, describe a graph's connectivity properties and have found several uses ranging from bioinformatics to spam detection. We also study the more complicated problem of estimating the local $4$-profiles centered at each vertex of the graph. The local $4$-profile embeds every vertex in an $11$-dimensional space that characterizes the local geometry of its neighborhood: vertices that connect different clusters will have different local $4$-profiles compared to those that are only part of one dense cluster. 
::: Our algorithm is a local, distributed message-passing scheme on the graph and computes all the local $4$-profiles in parallel. We rely on two novel theoretical contributions: we show that local $4$-profiles can be calculated using compressed two-hop information and also establish novel concentration results that show that graphs can be substantially sparsified and still retain good approximation quality for the global $4$-profile. ::: We empirically evaluate our algorithm using a distributed GraphLab implementation that we scaled up to $640$ cores. We show that our algorithm can compute global and local $4$-profiles of graphs with millions of edges in a few minutes, significantly improving upon the previous state of the art. <s> BIB003
A parallel cluster offers the opportunity to use multiple (heterogeneous) machines to speed up computation. Clusters can have hundreds of processors and therefore, if speedup is linear, computation time is reduced from weeks to just a few hours. For work sharing to be performed efficiently on DM architectures, one can either have a master node mediating work sharing or have workers steal work directly from each other BIB001 BIB002 . Usually DM approaches are implemented directly using MPI [151-153, 164, 190] , but higher-level software, such as GraphLab, can also be used BIB003 . DM has the drawback that workers have to send messages through the network, making network bandwidth a bottleneck.
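For illustration only, the sketch below shows a minimal master-worker protocol of the kind described above, written with mpi4py (an assumption on our part; the surveyed implementations are C/C++ MPI codes). The master hands out per-vertex work-units on request, idle workers keep asking for more, and the partial counts are aggregated with a reduction at the end. The per-unit routine is a placeholder, not a real enumeration algorithm.

```python
# Minimal master-worker sketch of work distribution on a cluster, assuming
# mpi4py is available (the surveyed tools are C/C++ MPI codes; only the
# request/assign protocol is illustrated here, not their algorithms).
# Run with e.g.:  mpiexec -n 4 python dm_master_worker.py
from mpi4py import MPI

TAG_REQUEST, TAG_WORK, TAG_STOP = 1, 2, 3

def master(comm, work_units):
    pending, active = list(work_units), comm.Get_size() - 1
    while active:
        status = MPI.Status()
        comm.recv(source=MPI.ANY_SOURCE, tag=TAG_REQUEST, status=status)
        if pending:
            comm.send(pending.pop(), dest=status.Get_source(), tag=TAG_WORK)
        else:
            comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
            active -= 1

def worker(comm, process_unit):
    local_total = 0
    while True:
        comm.send(None, dest=0, tag=TAG_REQUEST)           # "I am idle"
        status = MPI.Status()
        unit = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == TAG_STOP:
            return local_total
        local_total += process_unit(unit)

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        master(comm, work_units=range(1000))               # e.g. vertex ids
        print("total:", comm.reduce(0, op=MPI.SUM, root=0))
    else:
        # placeholder routine; a real worker would enumerate all subgraph
        # occurrences associated with the received vertex
        local = worker(comm, process_unit=lambda v: 1)
        comm.reduce(local, op=MPI.SUM, root=0)
```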
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Shared Memory (SM) <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Shared Memory (SM) <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task. At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Shared Memory (SM) <s> From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. 
On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Shared Memory (SM) <s> Enumerating all subgraphs of an input graph is an important task for analyzing complex networks. Valuable information can be extracted about the characteristics of the input graph using all-subgraph enumeration. Not withstanding, the number of subgraphs grows exponentially with growth of the input graph or by increasing the size of the subgraphs to be enumerated. Hence, all-subgraph enumeration is very time consuming when the size of the subgraphs or the input graph is big. We propose a parallel solution named Subenum which in contrast to available solutions can perform much faster. Subenum enumerates subgraphs using edges instead of vertices, and this approach leads to a parallel and load-balanced enumeration algorithm that can have efficient execution on current multicore and multiprocessor machines. Also, Subenum uses a fast heuristic which can effectively accelerate nonisomorphism subgraph enumeration. Subenum can efficiently use external memory, and unlike other subgraph enumeration methods, it is not associated with the main memory limits of the used machine. Hence, Subenum can handle large input graphs and subgraph sizes that other solutions cannot handle. Several experiments are done using real-world input graphs. Compared to the available solutions, Subenum can enumerate subgraphs several orders of magnitude faster and the experimental results show that the performance of Subenum scales almost linearly by using additional processor cores. <s> BIB004
SM approaches have the advantage that their underlying hardware is a commodity (multicore computers). Furthermore, workers in an SM environment do not communicate via network messages (since they can communicate directly through main memory), thus avoiding a bottleneck in network bandwidth. However, the number of cores is usually much lower than in DM, MapReduce, and GPU architectures. Algorithms on multicores tend to traverse the search space in a DFS fashion BIB003 BIB001 BIB002 BIB004 , thus avoiding the storage of a large number of subgraph occurrences on disk or in main memory.
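The following sketch (our own minimal example, not the surveyed Pthreads implementations) illustrates the shared-memory pattern: worker processes repeatedly pull per-vertex work-units from a shared queue, so faster workers naturally process more units, and partial counts are summed at the end. The per-vertex task counts 3-node paths (wedges) centered at the owned vertex, which keeps work-units trivially independent.

```python
# Minimal shared-memory sketch: workers dynamically pull per-vertex
# work-units from a shared queue (not the surveyed Pthreads g-trie/FaSE code).
from multiprocessing import Process, Queue

def paths_centered_at(graph, v):
    """Non-induced 3-node paths (wedges) whose middle vertex is v.
    Every wedge has a unique centre, so per-vertex units are independent."""
    d = len(graph[v])
    return d * (d - 1) // 2

def worker(graph, tasks, results):
    local = 0
    while True:
        v = tasks.get()
        if v is None:                      # poison pill: no work left
            break
        local += paths_centered_at(graph, v)
    results.put(local)

if __name__ == "__main__":
    graph = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
    num_workers = 4
    tasks, results = Queue(), Queue()
    for v in graph:
        tasks.put(v)
    for _ in range(num_workers):
        tasks.put(None)
    workers = [Process(target=worker, args=(graph, tasks, results))
               for _ in range(num_workers)]
    for p in workers:
        p.start()
    total = sum(results.get() for _ in workers)
    for p in workers:
        p.join()
    print("3-node paths:", total)          # 8 for this toy graph
```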
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.2.3 <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.2.3 <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.2.3 <s> Networks are powerful in representing a wide variety of systems in many fields of study. Networks are composed of smaller substructures (subgraphs) that characterize them and give important information related to their topology and functionality. Therefore, discovering and counting these subgraph patterns is very important towards mining the features of networks. Algorithmically, subgraph counting in a network is a computationally hard problem and the needed execution time grows exponentially as the size of the subgraph or the network increases. The main goal of this paper is to contribute towards subgraph search, by providing an accessible and scalable parallel methodology for counting subgraphs. For that we present a dynamic iterative MapReduce strategy to parallelize algorithms that induce an unbalanced search tree, and apply it in the subgraph counting realm. At the core of our methods lies the g-trie, a state-of-the-art data structure that was created precisely for this task. Our strategy employs an adaptive time threshold and an efficient work-sharing mechanism to dynamically do load balancing between the workers. We evaluate our implementations using Spark on a large set of representative complex networks from different fields. 
The results obtained are very promising and we achieved a consistent and almost linear speedup up to 32 cores, with an average efficiency close to 80+. To the best of our knowledge this is the fastest and most scalable method for subgraph counting within the MapReduce programming model. <s> BIB003
MapReduce. The MapReduce paradigm has been successfully applied to problems where each worker executes very similar tasks, which is the case for subgraph counting. MapReduce is inherently a BFS-like framework, whereas most subgraph counting algorithms are DFS-based. The biggest drawback of using MapReduce is the huge number of subgraph occurrences that are stored in files between each BFS-level iteration (corresponding to a one-node expansion) BIB001 BIB002 . To avoid this drawback, one can instead store the occurrences in RAM when they fit in memory BIB003 .
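As a toy illustration of this level-by-level pattern (kept in memory, and not an actual MapReduce job like MRPF, MRSUB or MR-GTries), the sketch below extends every size-k occurrence by one neighbouring vertex in the map step and removes duplicates in the reduce step, using the sorted vertex tuple as a canonical key.

```python
# Toy in-memory sketch of BFS-style, MapReduce-like subgraph enumeration
# (not MRPF/MRSUB/MR-GTries): map extends size-k occurrences by one vertex,
# reduce deduplicates them by a canonical key (the sorted vertex tuple).
def map_extend(graph, occurrence):
    """Emit every connected size-(k+1) occurrence reachable from this one."""
    for v in occurrence:
        for u in graph[v]:
            if u not in occurrence:
                yield tuple(sorted(occurrence + (u,)))

def reduce_dedup(occurrences):
    """Keep a single representative per canonical key."""
    return set(occurrences)

def count_connected_subgraphs(graph, k):
    level = {(v,) for v in graph}                 # size-1 occurrences
    for _ in range(k - 1):
        level = reduce_dedup(occ for s in level for occ in map_extend(graph, s))
    return len(level)

if __name__ == "__main__":
    square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(count_connected_subgraphs(square, 3))   # 4 connected 3-node subgraphs
```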
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> GPU. <s> We have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. Our PBFS program on a single processor runs as quickly as a standar. C++ breadth-first search implementation. PBFS achieves high work-efficiency by using a novel implementation of a multiset data structure, called a "bag," in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices -- a condition met by many real-world graphs -- PBFS demonstrates good speedup with the number of processing cores. Since PBFS employs a nonconstant-time "reducer" -- "hyperobject" feature of Cilk++ -- the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. We provide a general method for analyzing nondeterministic programs that use reducers. PBFS also is nondeterministic in that it contains benign races which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, we show that for a graph G=(V,E) with diameter D and bounded out-degree, this data-race-free version of PBFS algorithm runs it time O((V+E)/P + Dlg3(V/D)) on P processors, which means that it attains near-perfect linear speedup if P <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> GPU. <s> Monte Carlo simulation is ideally suited for solving Boltzmann neutron transport equation in inhomogeneous media. However, routine applications require the computation time to be reduced to hours and even minutes in a desktop system. The interest in adopting GPUs for Monte Carlo acceleration is rapidly mounting, fueled partially by the parallelism afforded by the latest GPU technologies and the challenge to perform full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem and an eigenvalue/criticality problem were developed for CPU and GPU environments, respectively, to evaluate issues associated with computational speedup afforded by the use of GPUs. The results suggest that a speedup factor of 30 in Monte Carlo radiation transport of neutrons is within reach using the state-of-the-art GPU technologies. However, for the eigenvalue/criticality problem, the speedup was 8.5. In comparison, for a task of voxelizing unstructured mesh geometry that is more parallel in nature, the speedup of 45 was obtained. It was observed that, to date, most attempts to adopt GPUs for Monte Carlo acceleration were based on naive implementations and have not yielded the level of anticipated gains. Successful implementation of Monte Carlo schemes for GPUs will likely require the development of an entirely new code. Given the prediction that future-generation GPU products will likely bring exponentially improved computing power and performances, innovative hardware and software solutions may make it possible to achieve full-core Monte Carlo calculation within one hour using a desktop computer system in a few years. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> GPU. 
<s> Breadth-first search (BFS) is a core primitive for graph traversal and a basis for many higher-level graph analysis algorithms. It is also representative of a class of parallel computations whose memory accesses and work distribution are both irregular and data-dependent. Recent work has demonstrated the plausibility of GPU sparse graph traversal, but has tended to focus on asymptotically inefficient algorithms that perform poorly on graphs with non-trivial diameter. We present a BFS parallelization focused on fine-grained task management constructed from efficient prefix sum that achieves an asymptotically optimal O(|V|+|E|) work complexity. Our implementation delivers excellent performance on diverse graphs, achieving traversal rates in excess of 3.3 billion and 8.3 billion traversed edges per second using single and quad-GPU configurations, respectively. This level of performance is several times faster than state-of-the-art implementations both CPU and GPU platforms. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> GPU. <s> The identification of network motifs has important applications in numerous domains, such as pattern detection in biological networks and graph analysis in digital circuits. However, mining network motifs is computationally challenging, as it requires enumerating subgraphs from a real-life graph, and computing the frequency of each subgraph in a large number of random graphs. In particular, existing solutions often require days to derive network motifs from biological networks with only a few thousand vertices. To address this problem, this paper presents a novel study on network motif discovery using Graphical Processing Units (GPUs). The basic idea is to employ GPUs to parallelize a large number of subgraph matching tasks in computing subgraph frequencies from random graphs, so as to reduce the overall computation time of network motif discovery. We explore the design space of GPU-based subgraph matching algorithms, with careful analysis of several crucial factors that affect the performance of GPU programs. Based on our analysis, we develop a GPU-based solution that (i) considerably differs from existing CPU-based methods, and (ii) exploits the strengths of GPUs in terms of parallelism while mitigating their limitations in terms of the computation power per GPU core. With extensive experiments on a variety of biological networks, we show that our solution is up to two orders of magnitude faster than the best CPU-based approach, and is around 20 times more cost-effective than the latter, when taking into account the monetary costs of the CPU and GPUs used. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> GPU. <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. 
Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB005
GPUs are very appealing due to their large number of parallel threads. Although linear speedups are rare on GPUs, the gains can still be substantial given their sheer number of cores. However, GPUs are not well suited for graph traversal algorithms. One of the current best pure BFS algorithms on the GPU BIB003 only achieves a speedup of ≈ 8x (on a 448-core NVIDIA C2050) when compared to a 4-core CPU BFS algorithm BIB001 . By contrast, Monte Carlo calculations on an NVIDIA C2050 GPU achieve a speedup of ≈ 30x BIB002 when compared to a 4-core CPU implementation. This gap is mainly due to branching problems, uncoalesced memory accesses and coarse work-unit granularity, which sometimes lead to almost non-existent speedups in subgraph counting . Using additional memory to store neighbors efficiently, together with smart work division, helps achieve some speedup BIB004 . Another approach is to combine CPUs and GPUs: CPUs handle unbalanced computation while GPUs execute the more regular computation BIB005 .
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Vertices. <s> Network motifs have been demonstrated to be the building blocks in many biological networks such as transcriptional regulatory networks. Finding network motifs plays a key role in understanding system level functions and design principles of molecular interactions. In this paper, we present a novel definition of the neighborhood of a node. Based on this concept, we formally define and present an effective algorithm for finding network motifs. The method seeks a neighborhood assignment for each node such that the induced neighborhoods are partitioned with no overlap. We then present a parallel algorithm to find network motifs using a parallel cluster. The algorithm is applied on an E. coli transcriptional regulatory network to find motifs with size up to six. Compared with previous algorithms, our algorithm performs better in terms of running time and precision. Based on the motifs that are found in the network, we further analyze the topology and coverage of the motifs. The results suggest that a small number of key motifs can form the motifs of a bigger size. Also, some motifs exhibit a correlation with complex functions. This study presents a framework for detecting the most significant recurring subgraph patterns in transcriptional regulatory networks. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Vertices. <s> Motifs in a network are small connected subnetworks that occur in significantly higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Kashtan et al. [Bioinformatics, 2004] proposed a sampling algorithm for efficiently performing the computationally challenging task of detecting network motifs. However, among other drawbacks, this algorithm suffers from sampling bias and is only efficient when the motifs are small (3 or 4 nodes). Based on a detailed analysis of the previous algorithm, we present a new algorithm for network motif detection which overcomes these drawbacks. Experiments on a testbed of biological networks show our algorithm to be orders of magnitude faster than previous approaches. This allows for the detection of larger motifs in bigger networks than was previously possible, facilitating deeper insight into the field. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Vertices. <s> BackgroundComplex networks are studied across many fields of science and are particularly important to understand biological processes. Motifs in networks are small connected sub-graphs that occur significantly in higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Existing algorithms for finding network motifs are extremely costly in CPU time and memory consumption and have practically restrictions on the size of motifs.ResultsWe present a new algorithm (Kavosh), for finding k-size network motifs with less memory and CPU time in comparison to other existing algorithms. Our algorithm is based on counting all k-size sub-graphs of a given graph (directed or undirected). We evaluated our algorithm on biological networks of E. coli and S. 
cereviciae, and also on non-biological networks: a social and an electronic network.ConclusionThe efficiency of our algorithm is demonstrated by comparing the obtained results with three well-known motif finding tools. For comparison, the CPU time, memory usage and the similarities of obtained motifs are considered. Besides, Kavosh can be employed for finding motifs of size greater than eight, while most of the other algorithms have restriction on motifs with size greater than eight. The Kavosh source code and help files are freely available at: http://Lbb.ut.ac.ir/Download/LBBsoft/Kavosh/. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Vertices. <s> Many natural and artificial structures can be represented as complex networks. Computing the frequency of all subgraphs of a certain size can give a very comprehensive structural characterization of these networks. This is known as the subgraph census problem, and it is also important as an intermediate step in the computation of other features of the network, such as network motifs. The subgraph census problem is computationally hard and most associated algorithms for it are sequential. Here we present several increasingly efficient parallel strategies for, culminating in a scalable and adaptive parallel algorithm. We applied our strategies to a representative set of biological networks and achieved almost linear speedups up to 128 processors, paving the way for making it possible to compute the census for bigger networks and larger subgraph sizes. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Vertices. <s> Many natural structures can be naturally represented by complex networks. Discovering network motifs, which are overrepresented patterns of inter-connections, is a computationally hard task related to graph isomorphism. Sequential methods are hindered by an exponential execution time growth when we increase the size of motifs and networks. In this article we study the opportunities for parallelism in existing methods and propose new parallel strategies that adapt and extend one of the most efficient serial methods known from the Fanmod tool. We propose both a master-worker strategy and one with distributed control, in which we employ a randomized receiver initiated methodology capable of providing dynamic load balancing during the whole computation process. Our strategies are capable of dealing both with exact and approximate network motif discovery. We implement and apply our algorithms to a set of representative networks and examine their scalability up to 128 processing cores. We obtain almost linear speedups, showcasing the efficiency of our proposed approach and are able to reach motif sizes that were not previously achievable using conventional serial algorithms. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Vertices. <s> The ability to find and count subgraphs of a given network is an important non trivial task with multidisciplinary applicability. Discovering network motifs or computing graphlet signatures are two examples of methodologies that at their core rely precisely on the subgraph counting problem. Here we present the g-trie, a data-structure specifically designed for discovering subgraph frequencies. 
We produce a tree that encapsulates the structure of the entire graph set, taking advantage of common topologies in the same way a prefix tree takes advantage of common prefixes. This avoids redundancy in the representation of the graphs, thus allowing for both memory and computation time savings. We introduce a specialized canonical labeling designed to highlight common substructures and annotate the g-trie with a set of conditional rules that break symmetries, avoiding repetitions in the computation. We introduce a novel algorithm that takes as input a set of small graphs and is able to efficiently find and count them as induced subgraphs of a larger network. We perform an extensive empirical evaluation of our algorithms, focusing on efficiency and scalability on a set of diversified complex networks. Results show that g-tries are able to clearly outperform previously existing algorithms by at least one order of magnitude. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Vertices. <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Vertices. <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task. At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB008
One possibility is to consider each vertex v ∈ V (G) as a work-unit and split the vertices among workers. A worker then computes all size-k subgraph occurrences that contain its assigned vertices. Naive approaches have different workers finding repeated occurrences that later need to be removed BIB001 , but efficient sequential algorithms use canonical representations that eliminate this problem BIB003 BIB006 BIB002 , making each work-unit independent. Using vertices as work-units has the drawback of creating very coarse work-units: different vertices induce search spaces with very different computational costs. For instance, counting all the subgraph occurrences that start at (or eventually reach) a hub-like node is much more time-consuming than counting the occurrences of a nearly isolated node. For vertex-based division to be efficient, algorithms either try to find a good initial division BIB001 or enable work sharing between workers BIB007 BIB008 BIB004 BIB005 . Each of these work division strategies is discussed in Section 5.5.
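To make the idea concrete, the sketch below (our own illustration, with invented names such as count_rooted_at) enumerates the connected induced size-k subgraphs whose smallest vertex is a given root, using an ESU-style extension set as the canonical rule that keeps work-units independent, and then assigns roots to workers in a simple round-robin static division.

```python
def count_rooted_at(G, root, k):
    """Count connected induced size-k subgraphs whose smallest vertex is `root`.

    G maps each vertex to the set of its neighbours. An ESU-style extension
    set guarantees each subgraph is enumerated exactly once, which is what
    makes every vertex work-unit independent."""
    total = 0

    def extend(v_sub, v_ext):
        nonlocal total
        if len(v_sub) == k:
            total += 1          # a real tool would also classify the pattern here
            return
        closed = v_sub | set().union(*(G[u] for u in v_sub))
        v_ext = set(v_ext)
        while v_ext:
            w = v_ext.pop()
            new_ext = v_ext | {u for u in G[w] if u > root and u not in closed}
            extend(v_sub | {w}, new_ext)

    extend({root}, {u for u in G[root] if u > root})
    return total


def vertex_work_division(G, k, n_workers):
    """Static vertex-based division: worker i handles every n_workers-th vertex
    (here the 'workers' are simulated sequentially)."""
    per_worker = [0] * n_workers
    for i, v in enumerate(sorted(G)):
        per_worker[i % n_workers] += count_rooted_at(G, v, k)
    return sum(per_worker), per_worker
```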
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. 
We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> Enumerating all subgraphs of an input graph is an important task for analyzing complex networks. Valuable information can be extracted about the characteristics of the input graph using all-subgraph enumeration. Not withstanding, the number of subgraphs grows exponentially with growth of the input graph or by increasing the size of the subgraphs to be enumerated. Hence, all-subgraph enumeration is very time consuming when the size of the subgraphs or the input graph is big. We propose a parallel solution named Subenum which in contrast to available solutions can perform much faster. Subenum enumerates subgraphs using edges instead of vertices, and this approach leads to a parallel and load-balanced enumeration algorithm that can have efficient execution on current multicore and multiprocessor machines. Also, Subenum uses a fast heuristic which can effectively accelerate nonisomorphism subgraph enumeration. Subenum can efficiently use external memory, and unlike other subgraph enumeration methods, it is not associated with the main memory limits of the used machine. Hence, Subenum can handle large input graphs and subgraph sizes that other solutions cannot handle. Several experiments are done using real-world input graphs. Compared to the available solutions, Subenum can enumerate subgraphs several orders of magnitude faster and the experimental results show that the performance of Subenum scales almost linearly by using additional processor cores. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB005
Due to the unbalanced search tree induced by vertex division, some algorithms use edges as work-units BIB002 BIB001 BIB003 BIB004 . The idea is similar to vertex division: distribute all edges e(v_i, v_j) ∈ E(G) evenly among the workers. An initial edge division guarantees that all workers receive an equal number of 2-node subgraphs, which is not true for vertex division. However, for k ≥ 3 this strategy still offers no guarantees in terms of workload balancing. In practice, on regular networks (i.e., networks where all nodes have similar clustering coefficients) this strategy achieves good speedups, but it is not scalable in general. Some methods BIB005 perform dynamic first-fit division (discussed in Section 5.5.2) instead of the simple static division described above.
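The two assignment policies mentioned above can be sketched as follows (illustrative code; process_edge stands for whatever routine counts the size-k occurrences that contain a given edge):

```python
import queue
import threading

def static_edge_split(edges, n_workers):
    """Static division: worker i receives every n_workers-th edge."""
    return [edges[i::n_workers] for i in range(n_workers)]

def dynamic_first_fit(edges, n_workers, process_edge):
    """Dynamic first-fit division: whichever worker becomes idle takes the
    next unprocessed edge from a shared queue."""
    work = queue.Queue()
    for e in edges:
        work.put(e)

    def worker():
        while True:
            try:
                e = work.get_nowait()
            except queue.Empty:
                return
            process_edge(e)   # count the size-k occurrences that contain edge e

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```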
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraphs. <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraphs. <s> The identification of network motifs has important applications in numerous domains, such as pattern detection in biological networks and graph analysis in digital circuits. However, mining network motifs is computationally challenging, as it requires enumerating subgraphs from a real-life graph, and computing the frequency of each subgraph in a large number of random graphs. In particular, existing solutions often require days to derive network motifs from biological networks with only a few thousand vertices. To address this problem, this paper presents a novel study on network motif discovery using Graphical Processing Units (GPUs). The basic idea is to employ GPUs to parallelize a large number of subgraph matching tasks in computing subgraph frequencies from random graphs, so as to reduce the overall computation time of network motif discovery. We explore the design space of GPU-based subgraph matching algorithms, with careful analysis of several crucial factors that affect the performance of GPU programs. Based on our analysis, we develop a GPU-based solution that (i) considerably differs from existing CPU-based methods, and (ii) exploits the strengths of GPUs in terms of parallelism while mitigating their limitations in terms of the computation power per GPU core. With extensive experiments on a variety of biological networks, we show that our solution is up to two orders of magnitude faster than the best CPU-based approach, and is around 20 times more cost-effective than the latter, when taking into account the monetary costs of the CPU and GPUs used. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraphs. <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. 
We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraphs. <s> We present a novel distributed algorithm for counting all four-node induced subgraphs in a big graph. These counts, called the $4$-profile, describe a graph's connectivity properties and have found several uses ranging from bioinformatics to spam detection. We also study the more complicated problem of estimating the local $4$-profiles centered at each vertex of the graph. The local $4$-profile embeds every vertex in an $11$-dimensional space that characterizes the local geometry of its neighborhood: vertices that connect different clusters will have different local $4$-profiles compared to those that are only part of one dense cluster. ::: Our algorithm is a local, distributed message-passing scheme on the graph and computes all the local $4$-profiles in parallel. We rely on two novel theoretical contributions: we show that local $4$-profiles can be calculated using compressed two-hop information and also establish novel concentration results that show that graphs can be substantially sparsified and still retain good approximation quality for the global $4$-profile. ::: We empirically evaluate our algorithm using a distributed GraphLab implementation that we scaled up to $640$ cores. We show that our algorithm can compute global and local $4$-profiles of graphs with millions of edges in a few minutes, significantly improving upon the previous state of the art. <s> BIB004
At the start of the computation, only the vertices and edges of the network are known. As the k-subgraph counting process proceeds, subgraphs of size k − i, with 0 < i < k, are found. Thus, the work-units divided among workers can be these intermediate states (incomplete subgraphs). Some BFS-based algorithms BIB002 BIB001 BIB003 begin with either edges or vertices as initial work-units and, at the end of each BFS-level, the intermediate subgraphs found are redistributed among workers. DFS-based methods instead expand each subgraph work-unit by one node at a time until they reach a k-subgraph BIB004 BIB003 .
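A minimal sketch of partial subgraphs as work-units (our own names; real tools use stricter canonical rules): a unit is just the set of vertices found so far, and expanding it by one neighbouring vertex produces new, finer-grained units; duplicates generated by different expansion orders are simply merged by returning sets.

```python
def expand_unit(G, unit):
    """Expand a partial subgraph (a frozenset of vertices) by one neighbour
    whose id is larger than the unit's smallest vertex."""
    root = min(unit)
    nbrs = {u for v in unit for u in G[v]} - unit
    return {frozenset(unit | {u}) for u in nbrs if u > root}

def complete_units(G, unit, k):
    """DFS-style expansion of one work-unit into all size-k subgraphs that
    grow out of it; returning sets merges repeated expansion orders."""
    if len(unit) == k:
        return {unit}
    found = set()
    for nxt in expand_unit(G, unit):
        found |= complete_units(G, nxt, k)
    return found

# every vertex is an initial (coarse) work-unit; finer units appear as the
# expansion proceeds and can be handed to any idle worker:
# total = sum(len(complete_units(G, frozenset({v}), k)) for v in G)
```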
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph-trees. <s> Finding and counting the occurrences of a collection of subgraphs within another larger network is a computationally hard problem, closely related to graph isomorphism. The subgraph count is by itself a very powerful characterization of a network and it is crucial for other important network measurements. G-tries are a specialized data-structure designed to store and search for subgraphs. By taking advantage of subgraph common substructure, g-tries can provide considerable speedups over previously used methods. In this paper we present a parallel algorithm based precisely on g-tries that is able to efficiently find and count subgraphs. The algorithm relies on randomized receiver-initiated dynamic load balancing and is able to stop its computation at any given time, efficiently store its search position, divide what is left to compute in two halfs, and resume from where it left. We apply our algorithm to several representative real complex networks from various domains and examine its scalability. We obtain an almost linear speedup up to 128 processors, thus allowing us to reach previously unfeasible limits. We showcase the multidisciplinary potential of the algorithm by also applying it to network motif discovery. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph-trees. <s> Many natural and artificial structures can be represented as complex networks. Computing the frequency of all subgraphs of a certain size can give a very comprehensive structural characterization of these networks. This is known as the subgraph census problem, and it is also important as an intermediate step in the computation of other features of the network, such as network motifs. The subgraph census problem is computationally hard and most associated algorithms for it are sequential. Here we present several increasingly efficient parallel strategies for, culminating in a scalable and adaptive parallel algorithm. We applied our strategies to a representative set of biological networks and achieved almost linear speedups up to 128 processors, paving the way for making it possible to compute the census for bigger networks and larger subgraph sizes. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph-trees. <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. 
<s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph-trees. <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task. At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB004
This approach is applicable only to DFS-like algorithms. Since the search tree is explored in a depth-first fashion, a work-tree is implicitly built during enumeration: when the algorithm is at level k of the search, unexplored candidates of stages {k − 1, k − 2, ..., 1} have already been generated. Then, instead of splitting only the top vertices from stage 1 (as described in Section 5.3.1), the whole search-tree is split among sharing processors BIB003 BIB004 BIB001 BIB002 (more details on this in Section 5.5.3). Subgraph-trees are expected to induce similar amounts of work, since both coarse- and fine-grained work-units are generated. Nevertheless, it is not guaranteed that work-units from the same level of the search tree induce similar work. This strategy also incurs the additional complexity of building the candidate-set of each level and splitting it among workers.
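One way to picture this implicit work-tree (a sketch under our own assumptions, not code from the cited tools): each open recursion level keeps its list of unexplored candidates, so a worker's current search position can be captured as a list of per-level candidate lists.

```python
class WorkTree:
    """Snapshot of a DFS enumeration: pending[d] holds the candidates of
    recursion level d that were generated but not yet explored
    (level 0 corresponds to the top-level vertices of stage 1)."""

    def __init__(self, top_vertices):
        self.pending = [list(top_vertices)]

    def descend(self, candidates):
        # a deeper recursion level was entered with its own candidate list
        self.pending.append(list(candidates))

    def ascend(self):
        # the current level was fully explored
        self.pending.pop()

    def has_work(self):
        return any(self.pending)
```

A later sketch (under Dynamic: Diagonal Work Splitting) shows how such a snapshot can be split in half and shared.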
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> The identification of network motifs has important applications in numerous domains, such as pattern detection in biological networks and graph analysis in digital circuits. However, mining network motifs is computationally challenging, as it requires enumerating subgraphs from a real-life graph, and computing the frequency of each subgraph in a large number of random graphs. In particular, existing solutions often require days to derive network motifs from biological networks with only a few thousand vertices. To address this problem, this paper presents a novel study on network motif discovery using Graphical Processing Units (GPUs). The basic idea is to employ GPUs to parallelize a large number of subgraph matching tasks in computing subgraph frequencies from random graphs, so as to reduce the overall computation time of network motif discovery. We explore the design space of GPU-based subgraph matching algorithms, with careful analysis of several crucial factors that affect the performance of GPU programs. 
Based on our analysis, we develop a GPU-based solution that (i) considerably differs from existing CPU-based methods, and (ii) exploits the strengths of GPUs in terms of parallelism while mitigating their limitations in terms of the computation power per GPU core. With extensive experiments on a variety of biological networks, we show that our solution is up to two orders of magnitude faster than the best CPU-based approach, and is around 20 times more cost-effective than the latter, when taking into account the monetary costs of the CPU and GPUs used. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> Networks are powerful in representing a wide variety of systems in many fields of study. Networks are composed of smaller substructures (subgraphs) that characterize them and give important information related to their topology and functionality. Therefore, discovering and counting these subgraph patterns is very important towards mining the features of networks. Algorithmically, subgraph counting in a network is a computationally hard problem and the needed execution time grows exponentially as the size of the subgraph or the network increases. The main goal of this paper is to contribute towards subgraph search, by providing an accessible and scalable parallel methodology for counting subgraphs. For that we present a dynamic iterative MapReduce strategy to parallelize algorithms that induce an unbalanced search tree, and apply it in the subgraph counting realm. At the core of our methods lies the g-trie, a state-of-the-art data structure that was created precisely for this task. Our strategy employs an adaptive time threshold and an efficient work-sharing mechanism to dynamically do load balancing between the workers. We evaluate our implementations using Spark on a large set of representative complex networks from different fields. The results obtained are very promising and we achieved a consistent and almost linear speedup up to 32 cores, with an average efficiency close to 80+. To the best of our knowledge this is the fastest and most scalable method for subgraph counting within the MapReduce programming model. <s> BIB005
Algorithms that adopt this strategy are typically MapReduce methods BIB001 BIB005 BIB002 or GPU approaches BIB003 BIB004 . MapReduce works intrinsically in a BFS fashion, and GPUs are very inefficient when work is unbalanced and contains branching code. A BFS-based search proceeds by (i) splitting the edges (size-2 subgraphs) among workers, (ii) having each worker expand its edges into size-3 patterns, (iii) splitting the size-3 patterns among workers again, and (iv) repeating this process until the desired size-k patterns are obtained. The idea of BFS is to give each worker a large number of fine-grained work-units, making the work division more balanced since work-units of the same size tend to induce similar work; this regularity also makes the approach well suited to architectures that require regular data. However, the main drawback is that these algorithms need to store partial results (which grow exponentially as k increases) and to synchronize at the end of each BFS-level.
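A level-synchronous sketch of this scheme (single-process and simplified; in MapReduce each level would correspond to one map/shuffle round, with the set union below playing the role of the shuffle):

```python
def edges_of(G):
    return [(u, v) for u in G for v in G[u] if u < v]

def bfs_subgraph_levels(G, k, n_workers):
    """Grow all size-s subgraphs to size s+1, level by level, re-splitting
    the frontier among workers before each level starts."""
    frontier = {frozenset(e) for e in edges_of(G)}            # size-2 subgraphs
    for _ in range(2, k):
        shards = [list(frontier)[i::n_workers] for i in range(n_workers)]
        next_frontier = set()
        for shard in shards:                                  # one task per worker
            for unit in shard:
                nbrs = {u for v in unit for u in G[v]} - unit
                next_frontier |= {unit | {u} for u in nbrs}
        # merging into next_frontier acts as the shuffle/barrier; these
        # partial results are what grows exponentially with k
        frontier = next_frontier
    return frontier                                           # connected size-k vertex sets
```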
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Network motifs have been demonstrated to be the building blocks in many biological networks such as transcriptional regulatory networks. Finding network motifs plays a key role in understanding system level functions and design principles of molecular interactions. In this paper, we present a novel definition of the neighborhood of a node. Based on this concept, we formally define and present an effective algorithm for finding network motifs. The method seeks a neighborhood assignment for each node such that the induced neighborhoods are partitioned with no overlap. We then present a parallel algorithm to find network motifs using a parallel cluster. The algorithm is applied on an E. coli transcriptional regulatory network to find motifs with size up to six. Compared with previous algorithms, our algorithm performs better in terms of running time and precision. Based on the motifs that are found in the network, we further analyze the topology and coverage of the motifs. The results suggest that a small number of key motifs can form the motifs of a bigger size. Also, some motifs exhibit a correlation with complex functions. This study presents a framework for detecting the most significant recurring subgraph patterns in transcriptional regulatory networks. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. 
<s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Graphlets are induced subgraphs of a large network and are important for understanding and modeling complex networks. Despite their practical importance, graphlets have been severely limited to applications and domains with relatively small graphs. Most previous work has focused on exact algorithms, however, it is often too expensive to compute graphlets exactly in massive networks with billions of edges, and finding an approximate count is usually sufficient for many applications. In this work, we propose an unbiased graphlet estimation framework that is (a) fast with significant speedups compared to the state-of-the-art, (b) parallel with nearly linear-speedups, (c) accurate with <1% relative error, (d) scalable and space-efficient for massive networks with billions of edges, and (e) flexible for a variety of real-world settings, as well as estimating macro and micro-level graphlet statistics (e.g., counts) of both connected and disconnected graphlets. In addition, an adaptive approach is introduced that finds the smallest sample size required to obtain estimates within a given user-defined error bound. On 300 networks from 20 domains, we obtain <1% relative error for all graphlets. This is significantly more accurate than existing methods while using less data. Moreover, it takes a few seconds on billion edge graphs (as opposed to days/weeks). These are by far the largest graphlet computations to date. <s> BIB005
Static. The simplest form of work division is to produce an initial distribution of work-units and proceed with the parallel computation, without ever spending time dividing work during runtime. Estimating the work beforehand BIB004 BIB001 is valuable but limited: a quick but imprecise estimation (such as using node degrees or clustering coefficients as a proxy for work-unit difficulty) offers few guarantees that the work division is balanced, while a very precise estimation is about as computationally expensive as the subgraph enumeration itself. Following a BFS approach BIB002 BIB003 helps balance the work-units, and a static work division at each BFS-level is usually sufficient to obtain good results; however, those strategies have the limitations discussed in Section 5.4.1. Some analytic works, which do not rely on explicit subgraph enumeration, do not need advanced work division strategies because their algorithms are almost embarrassingly parallel BIB005 .
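As an illustration of a static division guided by a cheap estimate (the degree-squared proxy and the greedy rule below are our own assumptions, not taken from the cited works):

```python
import heapq

def static_division_by_estimate(G, n_workers):
    """Greedy static division: assign each vertex to the worker with the
    smallest estimated load so far. The degree-squared estimate is only a
    cheap proxy and offers no balance guarantees."""
    est = {v: len(G[v]) ** 2 for v in G}
    order = sorted(G, key=lambda v: -est[v])        # most expensive vertices first
    heap = [(0, w) for w in range(n_workers)]       # (estimated load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for v in order:
        load, w = heapq.heappop(heap)
        assignment[w].append(v)
        heapq.heappush(heap, (load + est[v], w))
    return assignment
```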
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Diagonal Work <s> Finding and counting the occurrences of a collection of subgraphs within another larger network is a computationally hard problem, closely related to graph isomorphism. The subgraph count is by itself a very powerful characterization of a network and it is crucial for other important network measurements. G-tries are a specialized data-structure designed to store and search for subgraphs. By taking advantage of subgraph common substructure, g-tries can provide considerable speedups over previously used methods. In this paper we present a parallel algorithm based precisely on g-tries that is able to efficiently find and count subgraphs. The algorithm relies on randomized receiver-initiated dynamic load balancing and is able to stop its computation at any given time, efficiently store its search position, divide what is left to compute in two halfs, and resume from where it left. We apply our algorithm to several representative real complex networks from various domains and examine its scalability. We obtain an almost linear speedup up to 128 processors, thus allowing us to reach previously unfeasible limits. We showcase the multidisciplinary potential of the algorithm by also applying it to network motif discovery. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Diagonal Work <s> Many natural and artificial structures can be represented as complex networks. Computing the frequency of all subgraphs of a certain size can give a very comprehensive structural characterization of these networks. This is known as the subgraph census problem, and it is also important as an intermediate step in the computation of other features of the network, such as network motifs. The subgraph census problem is computationally hard and most associated algorithms for it are sequential. Here we present several increasingly efficient parallel strategies for, culminating in a scalable and adaptive parallel algorithm. We applied our strategies to a representative set of biological networks and achieved almost linear speedups up to 128 processors, paving the way for making it possible to compute the census for bigger networks and larger subgraph sizes. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Diagonal Work <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. 
<s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Diagonal Work <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task. At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB004
Splitting. Algorithms that employ this strategy BIB003 BIB004 BIB001 BIB002 perform an initial static work division. They do not need a sophisticated criterion to choose to whom work-units are assigned because work will be dynamically redistributed during runtime: whenever workers are idle, some work is relocated to them from busy workers. Furthermore, instead of simply giving away half of its top-level work-units and keeping the other half, a busy worker fully splits its work tree, sharing unexplored candidates from every level of the search. The main idea is to build work-units of both fine- and coarse-grained sizes, which is particularly helpful when a worker becomes stuck managing a very complex initial work-unit: that work-unit is split in half, and it can be split again iteratively among other workers if needed. These work-units can then either be stored in a global work queue managed by a master worker BIB001 BIB002 , or shared between the worker threads themselves BIB003 BIB004 (more details in Sections 5.6.1 and 5.6.2, respectively).
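A sketch of such a diagonal split, reusing the per-level candidate-list picture sketched earlier (names and representation are illustrative): the idle worker receives roughly half of the pending candidates at every open level, so it obtains both coarse and fine-grained work-units.

```python
def diagonal_split(pending):
    """Split a DFS work snapshot in half across every open level.

    `pending` is a list of lists: pending[d] holds the unexplored candidates
    of recursion level d. The busy worker keeps every other candidate of
    every level and gives the rest away, so the idle worker receives both
    coarse (top-level) and fine-grained (deep-level) work-units."""
    kept, given = [], []
    for level in pending:
        kept.append(level[0::2])
        given.append(level[1::2])
    return kept, given

# example: a worker stuck deep in the search
pending = [[7, 9], [12, 15, 18], [21]]   # candidates at levels 0, 1 and 2
mine, yours = diagonal_split(pending)
# mine  == [[7], [12, 18], [21]]
# yours == [[9], [15], []]
```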
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Timed <s> Networks are powerful in representing a wide variety of systems in many fields of study. Networks are composed of smaller substructures (subgraphs) that characterize them and give important information related to their topology and functionality. Therefore, discovering and counting these subgraph patterns is very important towards mining the features of networks. Algorithmically, subgraph counting in a network is a computationally hard problem and the needed execution time grows exponentially as the size of the subgraph or the network increases. The main goal of this paper is to contribute towards subgraph search, by providing an accessible and scalable parallel methodology for counting subgraphs. For that we present a dynamic iterative MapReduce strategy to parallelize algorithms that induce an unbalanced search tree, and apply it in the subgraph counting realm. At the core of our methods lies the g-trie, a state-of-the-art data structure that was created precisely for this task. Our strategy employs an adaptive time threshold and an efficient work-sharing mechanism to dynamically do load balancing between the workers. We evaluate our implementations using Spark on a large set of representative complex networks from different fields. The results obtained are very promising and we achieved a consistent and almost linear speedup up to 32 cores, with an average efficiency close to 80+. To the best of our knowledge this is the fastest and most scalable method for subgraph counting within the MapReduce programming model. <s> BIB001
Redistribution. Timed redistribution is a way to avoid estimating work during runtime while still guaranteeing that every worker eventually has work. Workers first receive work and process as much of it as they can. After a certain time, they all stop and the remaining work is redistributed. This strategy is especially useful when direct worker communication is not practical, such as in a MapReduce environment BIB001 or on the GPU. Setting an adequate threshold for work redistribution has a great impact: redistributing too quickly wastes time on work division, while redistributing too late leaves workers idle. One solution is to use an adaptive threshold BIB001 : if workers are too often left without work, the threshold of the next iteration is lowered; if workers too often still have much work left to compute, the threshold of the next iteration is raised.
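A possible sketch of the adaptive threshold (the update factors and the idle-fraction signal are our own assumptions, not the exact rule of the cited work):

```python
def next_threshold(threshold, idle_fraction, low=0.1, high=0.5, factor=1.5):
    """Adapt the time budget of the next redistribution iteration.

    idle_fraction: fraction of workers that ran out of work before the
    current iteration's time budget expired."""
    if idle_fraction > high:      # workers sat idle too often: redistribute sooner
        return threshold / factor
    if idle_fraction < low:       # almost everyone was still busy: give more time
        return threshold * factor
    return threshold

# outer loop (schematic):
# threshold = 10.0                       # seconds
# while work_remaining():
#     idle_fraction = run_iteration(threshold)
#     threshold = next_threshold(threshold, idle_fraction)
```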
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Dynamic load balancing is crucial for the performance of many parallel algorithms. Random polling, a simple randomized load balancing algorithm, has proved to be very efficient in practice for applications like parallel depth first search. This paper presents a detailed analysis of the algorithm taking into account many aspects of the underlying machine and the application to be load balanced. It derives tight scalability bounds which are for the first time able to explain the superior performance of random polling analytically. In some cases, the algorithm even turns out to be optimal. Some of the proof-techniques employed might also be useful for the analysis of other parallel algorithms. > <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task. At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Enumerating all subgraphs of an input graph is an important task for analyzing complex networks. Valuable information can be extracted about the characteristics of the input graph using all-subgraph enumeration. 
Not withstanding, the number of subgraphs grows exponentially with growth of the input graph or by increasing the size of the subgraphs to be enumerated. Hence, all-subgraph enumeration is very time consuming when the size of the subgraphs or the input graph is big. We propose a parallel solution named Subenum which in contrast to available solutions can perform much faster. Subenum enumerates subgraphs using edges instead of vertices, and this approach leads to a parallel and load-balanced enumeration algorithm that can have efficient execution on current multicore and multiprocessor machines. Also, Subenum uses a fast heuristic which can effectively accelerate nonisomorphism subgraph enumeration. Subenum can efficiently use external memory, and unlike other subgraph enumeration methods, it is not associated with the main memory limits of the used machine. Hence, Subenum can handle large input graphs and subgraph sizes that other solutions cannot handle. Several experiments are done using real-world input graphs. Compared to the available solutions, Subenum can enumerate subgraphs several orders of magnitude faster and the experimental results show that the performance of Subenum scales almost linearly by using additional processor cores. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB005
Since work is unbalanced for enumeration algorithms, work sharing can be used to balance it during runtime. 5.6.1 Master-Worker (M-W). This type of work sharing is mostly adopted in distributed memory (DM) environments, since workers do not share memory positions that they could directly access and use to communicate. A master worker initially splits the work-units among the workers (slaves) and then manages load balancing. Load balancing can be achieved by managing a global queue where slaves put some of their work, to be later redistributed by the master. This strategy implies that the master is not used for the enumeration itself and that communication over the network is needed. 5.6.2 Worker-Worker (W-W). Shared memory (SM) environments allow direct communication between workers, making a master node redundant. In this strategy, an idle worker asks a random worker for work BIB002 BIB003 . One could try to estimate which worker should be polled for work (which is computationally costly), but random polling has been established as an efficient heuristic for dynamic load balancing BIB001 . After the sharing process, computation resumes with each worker involved in the exchange computing its part of the work. Computation ends when all workers are polling for work. This strategy achieves a balanced work division during runtime, and the penalty caused by worker communication is negligible BIB002 BIB003 . Most implementations of W-W sharing are built on top of relatively homogeneous systems, such as multicore CPUs BIB004 or clusters of similar processors. In these systems, since all workers are equivalent, it is irrelevant which of them gets a specific easy (or hard) work-unit, so only load balancing needs to be controlled. Strategies that combine CPUs with GPUs, for instance, can split tasks in a way that takes advantage of both architectures: GPUs are very fast for regular tasks while CPUs can deal with irregular ones. For instance, a shared deque can be kept where workers, either GPUs or CPUs, put work on or take work from BIB005 ; the deque is ordered by complexity, with complex tasks placed at the front and simple tasks at the end. The main idea is that CPUs handle just a few complex work-units from the front of the deque, while GPUs take large bundles of work-units from the back.
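A minimal shared-memory sketch of W-W sharing with random polling (illustrative; locking and termination detection are heavily simplified compared to real implementations):

```python
import random
import threading

class Worker:
    def __init__(self, units):
        self.units = list(units)              # local queue of work-units
        self.lock = threading.Lock()

    def steal_half(self):
        """Give away half of the local work-units to the polling worker."""
        with self.lock:
            half = len(self.units) // 2
            stolen, self.units = self.units[:half], self.units[half:]
            return stolen

def run(workers, process_unit):
    """Each worker processes its local units and random-polls others when idle."""
    def loop(me):
        while True:
            with me.lock:
                unit = me.units.pop() if me.units else None
            if unit is not None:
                process_unit(unit)
                continue
            victim = random.choice(workers)              # random polling
            stolen = victim.steal_half() if victim is not me else []
            if stolen:
                with me.lock:
                    me.units.extend(stolen)
            elif not any(w.units for w in workers):      # crude termination test
                return

    threads = [threading.Thread(target=loop, args=(w,)) for w in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```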
A survey of commercial frameworks for the Internet of Things <s> I. INTRODUCTION <s> This paper addresses the Internet of Things. Main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that still major issues shall be faced by the research community. The most relevant among them are addressed in details. <s> BIB001 </s> A survey of commercial frameworks for the Internet of Things <s> I. INTRODUCTION <s> Internet-of-Things (IoT) is the convergence of Internet with RFID, Sensor and smart objects. IoT can be defined as “things belonging to the Internet” to supply and access all of real-world information. Billions of devices are expected to be associated into the system and that shall require huge distribution of networks as well as the process of transforming raw data into meaningful inferences. IoT is the biggest promise of the technology today, but still lacking a novel mechanism, which can be perceived through the lenses of Internet, things and semantic vision. This paper presents a novel architecture model for IoT with the help of Semantic Fusion Model (SFM). This architecture introduces the use of Smart Semantic framework to encapsulate the processed information from sensor networks. The smart embedded system is having semantic logic and semantic value based Information to make the system an intelligent system. This paper presents a discussion on Internet oriented applications, services, visual aspect and challenges for Internet of things using RFID, 6lowpan and sensor networks. <s> BIB002
For more than a decade the Internet of Things (IoT) has boosted the development of standards-based messaging protocols. Recently, encouraged by the likes of Ericsson and Cisco with estimates of 50 billion Internet-connected devices by 2020, attention has shifted from interoperability and message-layer protocols towards application frameworks supporting interoperability amongst IoT product suppliers. The IoT is the interconnection of ubiquitous computing devices for the realization of value to end users BIB001 . This definition encompasses "data collection" to improve understanding and "automation" of tasks to save time. The IoT field has evolved within application silos with domain-specific technologies, such as health care, social networks, manufacturing and home automation. To achieve a truly "interconnected network of things", the challenge is to combine heterogeneous technologies, protocols and application requirements into an automated and knowledge-based environment for the end user. In BIB002 , Singh et al. elaborate on three main visions for the IoT: the Internet Vision, the Things Vision and the Semantic Vision. Depending on which vision is chosen, the approach taken by a framework will differ and will better serve the applications aligned with that vision. As surveyed by Perera et al. , there are many existing IoT products and applications available. These, however, are based on proprietary frameworks that are not available for the development of customized applications. The frameworks presented in this survey are all intended as a basis for the development of IoT applications. This paper presents a survey of highly regarded commercial frameworks and platforms that are being used for Internet of Things applications. Many of the frameworks rely on high-level software layers to abstract between protocols. The high-level software layer provides flexibility when interconnecting different technologies and is well suited to cloud environments. In some cases the frameworks standardize interfaces, define a software service bus, or simply opt for a single network protocol and set of application protocols. The remainder of this paper is organized as follows: Section II introduces the concept of frameworks and defines the three categories of frameworks used in this survey. Sections III and IV then introduce the frameworks and platforms studied, grouped by application area. Section V presents a comparative analysis of the frameworks and platforms. The survey finishes with a few concluding remarks in Section VI.
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Motivation <s> Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Motivation <s> Social media forms a central domain for the production and dissemination of real-time information. Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB002
Multiple research studies have shown that information diffuses fast over online social networks. In a pioneering study, BIB001 showed that the characteristics of information diffusion on social microblogging platforms, like Twitter, are similar to those of news media. They stimulated the notion that Twitter-like microblogging networks are hybrid in nature, combining the characteristics of social and information networks, unlike traditional social networks. Information content. This term corresponds to the information content in social network messages, as found in most of the literature, such as [Galuba et al. 2010], BIB002 and many others. The literature also treats it as a group of blog posts hyperlinking to other blog posts .
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information diffusion <s> Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information diffusion <s> Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information diffusion <s> Social networks play a fundamental role in the diffusion of information. However, there are two different ways of how information reaches a person in a network. 
Information reaches us through connections in our social networks, as well as through the influence of external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network. <s> BIB003
This term captures the movement of information cascades from one participant or portion of the social network to another. Several models in the literature attempt to capture the causes and dynamics of diffusing information content (cascades), such as BIB002, BIB001, BIB003 and many others.
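As a concrete illustration of what such a diffusion model computes, the following minimal Python sketch simulates the widely used independent cascade process on a toy follower graph. It is a generic textbook model rather than the specific formulation of any work cited above, and the uniform transmission probability p is an assumption made only for illustration.

import random

def independent_cascade(graph, seeds, p=0.1, seed=42):
    """Simulate one cascade: each newly activated node gets a single chance
    to activate each still-inactive neighbour with probability p.
    graph: dict mapping node -> list of neighbours (directed edges)."""
    rng = random.Random(seed)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# Toy follower graph: an edge u -> v means v sees u's posts.
g = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
print(independent_cascade(g, seeds={"a"}, p=0.5))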
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social influence <s> Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize... <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social influence <s> In many online social systems, social ties between users play an important role in dictating their behavior. One of the ways this can happen is through social influence, the phenomenon that the actions of a user can induce his/her friends to behave in a similar way. In systems where social influence exists, ideas, modes of behavior, or new technologies can diffuse through the network like an epidemic. Therefore, identifying and understanding social influence is of tremendous interest from both analysis and design points of view. This is a difficult task in general, since there are factors such as homophily or unobserved confounding variables that can induce statistical correlation between the actions of friends in a social network. Distinguishing influence from these is essentially the problem of distinguishing correlation from causality, a notoriously hard statistical problem. In this paper we study this problem systematically. We define fairly general models that replicate the aforementioned sources of social correlation. We then propose two simple tests that can identify influence as a source of social correlation when the time series of user actions is available. We give a theoretical justification of one of the tests by proving that with high probability it succeeds in ruling out influence in a rather general model of social correlation. We also simulate our tests on a number of examples designed by randomly generating actions of nodes on a real social network (from Flickr) according to one of several models. Simulation results confirm that our test performs well on these data. Finally, we apply them to real tagging data on Flickr, exhibiting that while there is significant social correlation in tagging behavior on this system, this correlation cannot be attributed to social influence. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social influence <s> Social media forms a central domain for the production and dissemination of real-time information. 
Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social influence <s> Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance. <s> BIB004
This term is often used to capture the notion of a participant of a social network taking an action that is similar to an earlier action of another participant, by way of explicitly or implicitly imitating that earlier action BIB002. An example of imitation is retweeting on Twitter. Many works in the literature model information diffusion taking social influence into consideration, such as [Galuba et al. 2010], BIB003, BIB004 and others. Homophily. Familiarity is perceived when two or more individuals know each other (or, in the context of online social networks, befriend or connect with each other). Similarity is perceived when two or more individuals like one or more shared objects, items, topics, etc. Homophily is the phenomenon of similar people also becoming socially familiar BIB001.
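To make the idea of learning influence from data concrete, the sketch below estimates pairwise influence probabilities from an action log, in the spirit of the static Bernoulli formulation studied in work such as BIB004. It is a simplified illustration rather than a faithful reimplementation; the log format and the time window are assumptions.

from collections import defaultdict

def influence_probabilities(edges, action_log, window=3600):
    """edges: iterable of (u, v) meaning v follows / is connected to u.
    action_log: list of (user, action_id, timestamp), e.g. repostable items.
    Returns p[(u, v)] ~= #actions of u later repeated by v / #actions of u."""
    when = defaultdict(dict)          # when[user][action] = earliest timestamp
    for user, action, ts in action_log:
        when[user][action] = min(ts, when[user].get(action, float("inf")))

    prob = {}
    for u, v in edges:
        total = len(when[u])
        if total == 0:
            continue
        copied = sum(1 for a, tu in when[u].items()
                     if a in when[v] and 0 < when[v][a] - tu <= window)
        prob[(u, v)] = copied / total
    return prob

log = [("alice", "post1", 0), ("bob", "post1", 100), ("alice", "post2", 50)]
print(influence_probabilities([("alice", "bob")], log))  # {('alice', 'bob'): 0.5}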
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> For at least twenty‐five years, the concept of the clique has had a prominent place in sociometric and other kinds of sociological research. Recently, with the advent of large, fast computers and with the growth of interest in graph‐theoretic social network studies, research on the definition and investigation of the graph theoretic properties of clique‐like structures has grown. In the present paper, several of these formulations are examined, and their mathematical properties analyzed. A family of new clique‐like structures is proposed which captures an aspect of cliques which is seldom treated in the existing literature. The new structures, when used to complement existing concepts, provide a new means of tapping several important properties of social networks. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Abstract Social network researchers have long sought measures of network cohesion, Density has often been used for this purpose, despite its generally admitted deficiencies. An approach to network cohesion is proposed that is based on minimum degree and which produces a sequence of subgraphs of gradually increasing cohesion. The approach also associates with any network measures of local density which promise to be useful both in characterizing network structures and in comparing networks. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Here we study a variant of maximal clique enumeration problem by incorporating a minimum size criterion. We describe preprocessing techniques to reduce the graph size. This is of practical interest since enumerating maximal cliques is a computationally hard problem and the execution time increases rapidly with the input size. We discuss basics of an algorithm for enumerating large maximal cliques which exploits the constraint on minimum size of the desired maximal cliques. Social networks are prime examples of large sparse graphs where enumerating large maximal cliques is of interest. We present experimental results on the social network formed by the call detail records of one of the world's largest telecom service providers. Our results show that the preprocessing methods achieve significant reduction in the graph size. We also characterize the execution behaviour of our large maximal clique enumeration algorithm. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Twitter is a user-generated content system that allows its users to share short text messages, called tweets, for a variety of purposes, including daily conversations, URLs sharing and information news. Considering its world-wide distributed network of users of any age and social condition, it represents a low level news flashes portal that, in its impressive short response time, has the principal advantage. In this paper we recognize this primary role of Twitter and we propose a novel topic detection technique that permits to retrieve in real-time the most emergent topics expressed by the community. First, we extract the contents (set of terms) of the tweets and model the term life cycle according to a novel aging theory intended to mine the emerging ones. 
A term can be defined as emerging if it frequently occurs in the specified time interval and it was relatively rare in the past. Moreover, considering that the importance of a content also depends on its source, we analyze the social relationships in the network with the well-known Page Rank algorithm in order to determine the authority of the users. Finally, we leverage a navigable topic graph which connects the emerging terms with other semantically related keywords, allowing the detection of the emerging topics, under user-specified time constraints. We provide different case studies which show the validity of the proposed approach. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Large volumes of spatio-temporal-thematic data being created using sites like Twitter and Jaiku, can potentially be combined to detect events, and understand various 'situations' as they are evolving at different spatio-temporal granularity across the world. Taking inspiration from traditional image pixels which represent aggregation of photon energies at a location, we consider aggregation of user interest levels at different geo-locations as social pixels. Combining such pixels spatio-temporally allows for creation of social images and video. Here, we describe how the use of relevant (media processing inspired) situation detection operators upon such 'images', and domain based rules can be used to decide relevant control actions. The ideas are showcased using a Swine flu monitoring application which uses Twitter data. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Hashtags are used in Twitter to classify messages, propagate ideas and also to promote specific topics and people. In this paper, we present a linguistic-inspired study of how these tags are created, used and disseminated by the members of information networks. We study the propagation of hashtags in Twitter grounded on models for the analysis of the spread of linguistic innovations in speech communities, that is, in groups of people whose members linguistically influence each other. Differently from traditional linguistic studies, though, we consider the evolution of terms in a live and rapidly evolving stream of content, which can be analyzed in its entirety. 
In our experimental results, using a large collection crawled from Twitter, we were able to identify some interesting aspects -- similar to those found in studies of (offline) speech -- that led us to believe that hashtags may effectively serve as models for characterizing the propagation of linguistic forms, including: (1) the existence of a "preferential attachment process", that makes the few most common terms ever more popular, and (2) the relationship between the length of a tag and its frequency of use. The understanding of formation patterns of successful hashtags in Twitter can be useful to increase the effectiveness of real-time streaming search algorithms. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 5.96 million topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of 196 million tweets, we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on topic popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively. <s> BIB009
This represents a group of individuals with a large degree of familiarity. The familiarity either follows a certain structure ensuring a notional sufficiency of connections, such as maximal cliques BIB003, k-cores BIB002, k-plexes BIB001, etc., or satisfies properties such as high modularity, where the connection density within the given group is significantly higher compared to the rest of the individuals belonging to the same social network . Topic. In general, a topic captures a coherent set of concepts that are semantically/conceptually related to each other. In the context of social network content analysis, a topic notionally corresponds to a set of correlated user-generated concepts. In the literature, topics are often identified using techniques such as (a) hashtags of microblogs like Twitter (ex: BIB007), (b) bursty keyword identification (ex: BIB004 and BIB005), and (c) probability distributions of latent concepts over keywords in user-generated content (ex: BIB008). (Geo-social) Spread of topics. This term is usually used to portray the maximum (or characteristic) geographical span that a topic has reached, or is expected to reach. Literature that addresses the geo-social spread of topics includes BIB009, , BIB006 and many others.
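The structural notions above can be made concrete with a few lines of Python using the networkx library. The sketch below extracts maximal cliques and a k-core, and scores a greedy modularity partition, on a standard toy social graph; the minimum clique size, the choice k = 3 and the greedy heuristic are illustrative choices, not the exact algorithms of the cited works.

import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()            # classic toy social network

# Structure-based notions of cohesive groups:
maximal_cliques = [c for c in nx.find_cliques(G) if len(c) >= 4]
three_core = nx.k_core(G, k=3)        # subgraph where every node has degree >= 3

# Density-based notion: a partition found by greedily maximising modularity.
parts = community.greedy_modularity_communities(G)
print(len(maximal_cliques), three_core.number_of_nodes(),
      round(community.modularity(G, parts), 3))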
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Abstract The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize... <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the "proximity" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. 
The accuracy of our algorithm is also verified on ad hoc modular networks. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Twitter is a user-generated content system that allows its users to share short text messages, called tweets, for a variety of purposes, including daily conversations, URLs sharing and information news. Considering its world-wide distributed network of users of any age and social condition, it represents a low level news flashes portal that, in its impressive short response time, has the principal advantage. In this paper we recognize this primary role of Twitter and we propose a novel topic detection technique that permits to retrieve in real-time the most emergent topics expressed by the community. First, we extract the contents (set of terms) of the tweets and model the term life cycle according to a novel aging theory intended to mine the emerging ones. A term can be defined as emerging if it frequently occurs in the specified time interval and it was relatively rare in the past. Moreover, considering that the importance of a content also depends on its source, we analyze the social relationships in the network with the well-known Page Rank algorithm in order to determine the authority of the users. Finally, we leverage a navigable topic graph which connects the emerging terms with other semantically related keywords, allowing the detection of the emerging topics, under user-specified time constraints. We provide different case studies which show the validity of the proposed approach. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> With the recent rise in popularity and size of social media, there is a growing need for systems that can extract useful information from this amount of data. We address the problem of detecting new events from a stream of Twitter posts. To make event detection feasible on web-scale corpora, we present an algorithm based on locality-sensitive hashing which is able overcome the limitations of traditional approaches, while maintaining competitive results. In particular, a comparison with a state-of-the-art system on the first story detection task shows that we achieve over an order of magnitude speedup in processing time, while retaining comparable performance. Event detection experiments on a collection of 160 million Twitter posts show that celebrity deaths are the fastest spreading news on Twitter. 
<s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Language use is overlaid on a network of social connections, which exerts an influence on both the topics of discussion and the ways that these topics can be expressed (Halliday, 1978). In the past, efforts to understand this relationship were stymied by a lack of data, but social media offers exciting new opportunities. By combining large linguistic corpora with explicit representations of social network structures, social media provides a new window into the interaction between language and society. Our long term goal is to develop joint sociolinguistic models that explain the social basis of linguistic variation. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Social media forms a central domain for the production and dissemination of real-time information. Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. 
We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Streaming user-generated content in the form of blogs, microblogs, forums, and multimedia sharing sites, provides a rich source of data from which invaluable information and insights maybe gleaned. Given the vast volume of such social media data being continually generated, one of the challenges is to automatically tease apart the emerging topics of discussion from the constant background chatter. Such emerging topics can be identified by the appearance of multiple posts on a unique subject matter, which is distinct from previous online discourse. We address the problem of identifying emerging topics through the use of dictionary learning. We propose a two stage approach respectively based on detection and clustering of novel user-generated content. We derive a scalable approach by using the alternating directions method to solve the resulting optimization problems. Empirical results show that our proposed approach is more effective than several baselines in detecting emerging topics in traditional news story and newsgroup data. We also demonstrate the practical application to social media analysis, based on a study on streaming data from Twitter. <s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Twitter, Facebook, and other related systems that we call social awareness streams are rapidly changing the information and communication dynamics of our society. These systems, where hundreds of millions of users share short messages in real time, expose the aggregate interests and attention of global and local communities. In particular, emerging temporal trends in these systems, especially those related to a single geographic area, are a significant and revealing source of information for, and about, a local community. This study makes two essential contributions for interpreting emerging temporal trends in these information systems. First, based on a large dataset of Twitter messages from one geographic area, we develop a taxonomy of the trends present in the data. Second, we identify important dimensions according to which trends can be categorized, as well as the key distinguishing features of trends that can be derived from their associated messages. We quantitatively examine the computed features for different categories of trends, and establish that significant differences can be detected across categories. Our study advances the understanding of trends on Twitter and other social awareness streams, which will enable powerful applications and activities, including user-driven real-time information services for local communities. © 2011 Wiley Periodicals, Inc. <s> BIB012 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Reducing the impact of seasonal influenza epidemics and other pandemics such as the H1N1 is of paramount importance for public health authorities. Studies have shown that effective interventions can be taken to contain the epidemics if early detection can be made. 
Traditional approach employed by the Centers for Disease Control and Prevention (CDC) includes collecting influenza-like illness (ILI) activity data from “sentinel” medical practices. Typically there is a 1–2 week delay between the time a patient is diagnosed and the moment that data point becomes available in aggregate ILI reports. In this paper we present the Social Network Enabled Flu Trends (SNEFT) framework, which monitors messages posted on Twitter with a mention of flu indicators to track and predict the emergence and spread of an influenza epidemic in a population. Based on the data collected during 2009 and 2010, we find that the volume of flu related tweets is highly correlated with the number of ILI cases reported by CDC. We further devise auto-regression models to predict the ILI activity level in a population. The models predict data collected and published by CDC, as the percentage of visits to “sentinel” physicians attributable to ILI in successively weeks. We test models with previous CDC data, with and without measures of Twitter data, showing that Twitter data can substantially improve the models prediction accuracy. Therefore, Twitter data provides real-time assessment of ILI activity. <s> BIB013 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter. <s> BIB014 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> L-LDA is a new supervised topic model for assigning "topics" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions. <s> BIB015 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. 
We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed. <s> BIB016 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 5.96 million topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of 196 million tweets, we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on topic popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively. <s> BIB017 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [Pap14]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014. <s> BIB018
This term notionally corresponds to the temporal span over which a topic stays alive: from being introduced into the social network, to reaching its peak of geographical spread and social depth, to declining to the point where it no longer exists in the network. Several works analyze topic lifecycles, such as BIB014, BIB005, BIB018, BIB006 and many more. Topical information diffusion. A body of research models information diffusion seeded from the topics underlying the information cascade content, such as BIB017, , and others. These works tend to have the topical nature of information diffusion at the heart of their models. In the meantime, another body of research emerged that attempted to identify topics and spot trending topics being discussed on online social media. BIB005 designed TwitterMonitor for detecting and analyzing trends, and for studying trend lifecycles. Using a two-stage approach comprising detection and clustering of new user-generated content, founded on dictionary learning to detect emerging topics on Twitter, BIB011 applied their system to streaming data to empirically demonstrate the effectiveness of their approach. attempted to predict topics that would draw attention in the future. Other studies have also been conducted for trend and topic lifecycle analysis on social networks, specifically Twitter, such as BIB018, BIB014, BIB012, and BIB007. Predicting the existence of social connections between given pairs of individual members of social networks, in the form of social links, has been an area of long-standing research. Link prediction algorithms that use graph properties have existed for a long time. Some well-known link prediction methods are the Adamic-Adar method BIB001, Jaccard's coefficient , rooted PageRank BIB003, the Katz method and SimRank [Jeh and Widom 2002]. BIB008 investigated the effectiveness of content in social network link prediction, and experimented on Twitter. BIB015 proposed a "supervised topic classification and link prediction system on Twitter". Identifying structural communities that form implicitly based upon familiarity within social networks, rather than by explicit interest-based group memberships, has been another area of long-standing research. There are multiple definitions of communities; however, the modularity method of is arguably the most well-known and well-accepted definition. Fast approximation algorithms for computing modularity exist, one of the most well-known being BGLL (the Louvain method) proposed by BIB004. While links and communities are rooted in the notion of familiarity, another popular topic of research in online social networks is homophily BIB002. Homophily is the phenomenon of similar people also being socially familiar. Studies such as considered similarity and social familiarity together, to investigate how information diffusion is impacted by homophily. Understanding social influence, and analyzing its impact on diffusion characteristics such as spread and longevity in the context of topics and information, has received immense research focus. Several works have investigated online social networks and microblogs, and have created information diffusion models that account for the effect of the influence of the participants. BIB009 created an influence model using the Flickr social network graph and user action logs.
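The neighbourhood-based link prediction scores mentioned above are simple enough to compute directly; the following sketch (using networkx on a toy graph chosen only for illustration) scores a few candidate node pairs with Jaccard's coefficient and the Adamic-Adar index.

import networkx as nx

G = nx.karate_club_graph()
candidates = [(0, 9), (1, 33), (24, 27)]   # candidate node pairs to score (illustrative)

# Jaccard's coefficient: size of the common neighbourhood divided by the
# size of the union of the two neighbourhoods.
for u, v, score in nx.jaccard_coefficient(G, candidates):
    print(f"Jaccard({u},{v}) = {score:.3f}")

# Adamic-Adar: sum over common neighbours w of 1 / log(deg(w)),
# down-weighting common neighbours that are themselves highly connected.
for u, v, score in nx.adamic_adar_index(G, candidates):
    print(f"Adamic-Adar({u},{v}) = {score:.3f}")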
Identifying who influences whom and exploring whether participants would propagate the same information in the absence of social signals, BIB016 measured the effect of social networking media on information dissemination, validating their findings on 253 million subjects. BIB010 modeled the "global influence" of social network participants, using the rate of information diffusion via the social network. Many other works have explored influence and its impact on social networks, along the aspects of information diffusion, topics, interests and the lifecycle of topics. Addressing the geo-temporal aspects of information diffusion on social networks, researchers have attempted to model the evolution of information and topics over time and across geographical boundaries. BIB017 characterized the diffusion of ideas on social networks by conducting a spatio-temporal analysis. They showed that popular topics tend to cross regional boundaries aggressively. found the temporal evolution of topical discussions on Twitter to localize geographically, and to evolve more strongly at finer geo-spatial granularities. For instance, they found that city-level discussions evolve more than country-level discussions. BIB013 used Twitter to collect data pertaining to influenza-like illnesses. Using this Twitter data, their model could substantially improve the influenza epidemic predictions made from the government's disease control (CDC) data. Overall, identifying and characterizing topics and information diffusion has received significant research attention, invested towards modeling information diffusion, correlating the phenomenon with network structures, and investigating the roles and impacts of topics, the lifecycle of topics, influence, familiarity, similarity, homophily and spatio-temporal factors. In the current article, we conduct a survey of literature that has created significant impact in this space, and explore the details of some of the models and methods that have been widely adopted by researchers. The aim is to provide an overview of representative state-of-the-art models that perform topic analysis, capture information diffusion, and explore the properties of social connections in this context for online social networks. We believe our article will be useful for researchers to identify the current literature, and will help in identifying what can be improved over the state of the art. The rest of the paper is organized as follows. In Section 2, we explore the literature on topic-based link prediction and community discovery on social networks. This is followed by a literature survey of information diffusion, and the role of user influence, in Section 3. Section 4 covers the literature addressing the lifecycle of topics, covering their inception, spread and evolution. The literature addressing the impact of social familiarity and topical (and interest) similarity is covered in Section 5. The literature on spatio-temporal analysis of social network discussion topics is surveyed in Section 6. A high-level discussion of problems of potential interest, and of problems where we believe existing solutions can be improved, is provided in Section 7.
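As a rough illustration of the kind of autoregressive model with a social-media signal described above, the sketch below fits, by ordinary least squares, an ILI series on its own lags plus a tweet-volume regressor. The data are synthetic and the specification is a simplification, not the exact SNEFT model of BIB013.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly series: ILI activity (%) and flu-related tweet volume.
weeks = 60
tweets = rng.gamma(shape=2.0, scale=50.0, size=weeks)
ili = 1.0 + 0.01 * tweets + rng.normal(0, 0.2, size=weeks)

def fit_arx(ili, tweets, lag=2):
    """Least-squares fit of ili[t] on its own lags and the current tweet volume."""
    rows, targets = [], []
    for t in range(lag, len(ili)):
        rows.append([1.0, *ili[t - lag:t], tweets[t]])
        targets.append(ili[t])
    X, y = np.array(rows), np.array(targets)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

coef = fit_arx(ili, tweets)
print("intercept, AR coefficients, tweet coefficient:", np.round(coef, 3))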
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction and Community Discovery <s> Language use is overlaid on a network of social connections, which exerts an influence on both the topics of discussion and the ways that these topics can be expressed (Halliday, 1978). In the past, efforts to understand this relationship were stymied by a lack of data, but social media offers exciting new opportunities. By combining large linguistic corpora with explicit representations of social network structures, social media provides a new window into the interaction between language and society. Our long term goal is to develop joint sociolinguistic models that explain the social basis of linguistic variation. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction and Community Discovery <s> L-LDA is a new supervised topic model for assigning "topics" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction and Community Discovery <s> Automatic detection of communities (or cohesive groups of actors in social network) in online social media platforms based on user interests and interaction is a problem that has recently attracted a lot of research attention. Mining user interactions on Twitter to discover such communities is a technically challenging information retrieval task. We present an algorithm - iTop - to discover interaction based topic centric communities by mining user interaction signals (such as @-messages and retweets) which indicate cohesion. iTop takes any topic as an input keyword and exploits local information to infer global topic-centric communities. We evaluate the discovered communities along three dimensions: graph based (node-edge quality), empirical-based (Twitter lists) and semantic based (frequent n-grams in tweets). We conduct experiments on a publicly available scrape of Twitter provided by InfoChimps via a web service. We perform a case study on two diverse topics - 'Computer Aided Design (CAD)' and 'Kashmir' to demonstrate the efficacy of iTop. Empirical results from both case studies show that iTop is successfully able to discover topic-centric, interaction based communities on Twitter. <s> BIB003
Link prediction is the problem of predicting the existence of social links amongst pairs of social network participants. In the traditional literature, the prediction of links has mostly been carried out by investigating social network graph properties. Since information spreads on online social networks over topics of discussion, predicting links based upon information content essentially gives an intuition of the pathway along which given content (information) would diffuse. This also holds for communities formed on social network graphs, over links inferred from user-generated topical text content. BIB001 predicts links based upon user-generated content using LDA, and shows that their content-based link prediction outperforms graph-structure-based link prediction. BIB002 creates user profiles from user-generated tweets, assigns topics to the user profiles, and measures the similarity of user profile pairs using L-LDA and SVM; it shows that L-LDA outperforms SVM for Twitter user profile classification, and uses the profile-pair similarity thus obtained as a predictor of social links. BIB003 discovers topical communities from user-generated messages on Twitter, mining retweets, replies and mentions as user-generated indicative signals to infer global topic-specific communities; the effectiveness of the method is shown by evaluating the communities across three dimensions, namely graph (friendship connections), empirical (actual user profiles) and semantic (frequent n-grams).
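A compact sketch of the general recipe behind such content-based link prediction (not the exact pipeline of BIB001 or BIB002) is shown below: per-user topic distributions are inferred from aggregated posts with scikit-learn's LDA implementation, and the cosine similarity of those distributions serves as a link score. The toy user profiles are invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy "profiles": each user's tweets concatenated into one pseudo-document.
profiles = {
    "u1": "football goal league match striker transfer",
    "u2": "election vote senate policy campaign debate",
    "u3": "match referee penalty league fans stadium",
}
users = list(profiles)

X = CountVectorizer().fit_transform(profiles[u] for u in users)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                 # per-user topic distributions

# Topical similarity of every user pair, used as a content-based link score.
sims = cosine_similarity(theta)
for i in range(len(users)):
    for j in range(i + 1, len(users)):
        print(users[i], users[j], round(float(sims[i, j]), 3))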
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the "proximity" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> We develop the relational topic model (RTM), a hierarchical model of both network structure and node attributes. We focus on document networks, where the attributes of each document are its words, that is, discrete observations taken from a fixed vocabulary. For each pair of documents, the RTM models their link as a binary random variable that is conditioned on their contents. The model can be used to summarize a network of documents, predict links between them, and predict words within them. We derive efficient inference and estimation algorithms based on variational methods that take advantage of sparsity and scale with the number of links. We evaluate the predictive performance of the RTM for large networks of scientific abstracts, web documents, and geographically tagged news. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> A significant portion of the world's text is tagged by readers on social bookmarking websites. Credit attribution is an inherent problem in these corpora because most pages have multiple tags, but the tags do not always apply with equal specificity across the whole document. Solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa. This paper introduces Labeled LDA, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one correspondence between LDA's latent topics and user tags. This allows Labeled LDA to directly learn word-tag correspondences. We demonstrate Labeled LDA's improved expressiveness over traditional LDA with visualizations of a corpus of tagged web pages from del.icio.us. 
Labeled LDA outperforms SVMs by more than 3 to 1 when extracting tag-specific document snippets. As a multi-label text classifier, our model is competitive with a discriminative baseline on a variety of datasets. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> We propose the first unsupervised approach to the problem of modeling dialogue acts in an open domain. Trained on a corpus of noisy Twitter conversations, our method discovers dialogue acts by clustering raw utterances. Because it accounts for the sequential behaviour of these acts, the learned model can provide insight into the shape of communication in a new medium. We address the challenge of evaluating the emergent model with a qualitative visualization and an intrinsic conversation ordering task. This work is inspired by a corpus of 1.3 million Twitter conversations, which will be made publicly available. This huge amount of data, available only because Twitter blurs the line between chatting and publishing, highlights the need to be able to adapt quickly to a new medium. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> Language use is overlaid on a network of social connections, which exerts an influence on both the topics of discussion and the ways that these topics can be expressed (Halliday, 1978). In the past, efforts to understand this relationship were stymied by a lack of data, but social media offers exciting new opportunities. By combining large linguistic corpora with explicit representations of social network structures, social media provides a new window into the interaction between language and society. Our long term goal is to develop joint sociolinguistic models that explain the social basis of linguistic variation. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> With hundreds of millions of participants, social media services have become commonplace. Unlike a traditional social network service, a microblogging network like Twitter is a hybrid network, combining aspects of both social networks and information networks. Understanding the structure of such hybrid networks and predicting new links are important for many tasks such as friend recommendation, community detection, and modeling network growth. We note that the link prediction problem in a hybrid network is different from previously studied networks. Unlike the information networks and traditional online social networks, the structures in a hybrid network are more complicated and informative. We compare most popular and recent methods and principles for link prediction and recommendation. Finally we propose a novel structure-based personalized link prediction model and compare its predictive performance against many fundamental and popular link prediction methods on real-world data from the Twitter microblogging network. Our experiments on both static and dynamic data sets show that our methods noticeably outperform the state-of-the-art. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> Link prediction and recommendation is a fundamental problem in social network analysis. 
The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> L-LDA is a new supervised topic model for assigning "topics" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions. <s> BIB009
Several works in the literature, such as BIB007, BIB008, BIB001 and BIB005, have addressed predicting social links between pairs of users by looking at graph attributes. However, these studies explore graph structure and properties, and do not consider content semantics. Another body of work uses user-generated content as the foundation of the link prediction process. In one such work, BIB006 study the effectiveness of content in predicting links on social networks, using Twitter data for experiments. Using Twitter's GardenHose API, they collect around 15% of all messages posted on Twitter in January 2010. They extract a representative subset by sampling the first 500 people who posted at least 16 messages within this period, and subsequently crawl 500 randomly selected followers of each of these people. They end up with a data set comprising 21,306 users, 837,879 messages, and 10,578,934 word tokens posted as part of these messages. They then tokenize the messages while accounting for the non-standard orthography inherent to Twitter, splitting on whitespace and apostrophes, and treating the # mark as indicating a topic and the @ mark as indicating retweets. Removing low-frequency words that appear fewer than 50 times leaves a vocabulary of 11,425 tokens; out-of-vocabulary items are classified as words, URLs, or numbers. They use LDA BIB002 for predicting pairwise links on the content graph. Since individual Twitter messages are short, they gather all of the messages of a given user into a single document, so the model learns latent topics that characterize authors rather than messages. They subsequently compute author similarity from the topic proportions, learning a regression weight for each topic using the method of Chang and Blei BIB003 and predicting the strength of connection between authors i and j as exp(η^T (z̄_i ∘ z̄_j) + ν), where ∘ denotes the element-wise product, z̄_i and z̄_j denote the expected topic proportions of authors i and j, η denotes a vector of learned regression weights, and ν is an intercept term, necessary if the link prediction function is to return a probability. They compare their results with those obtained by the methodology of Liben-Nowell and Kleinberg BIB001, which depends upon the graph structure but not upon user-generated content. The content-based model performs significantly better than the structure-based one, establishing a logical foundation for considering user-generated content as an effective instrument for predicting social links. In another work, BIB009 propose a "supervised topic classification and link prediction system on Twitter". They create user profiles based upon the posts made by the users. Their work uses the Labeled LDA (L-LDA) technique of BIB004, a generative model for multiply labeled corpora. Unlike traditional LDA and its supervised embodiments, L-LDA assigns one topic to each label in a multiply labeled document; it extends LDA BIB002 by incorporating supervision, and extends Multinomial Naive Bayes by incorporating a mixture model. L-LDA models each document as a mixture of elemental topics, with each word generated from a topic, and the topic model is constrained to use only topics corresponding to a document's observed set of labels.
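The content-based predictor described above can be sketched in a few lines: fit a topic model over per-author documents (all of an author's messages concatenated) and score author pairs through a regression over the element-wise product of their topic proportions. The snippet below is a minimal, illustrative sketch only; the toy corpus, the regression weights eta and the intercept nu are made-up stand-ins for quantities that the surveyed work learns from data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy "author documents": all tweets of an author concatenated into one document.
author_docs = {
    "alice": "python code machine learning topic models lda",
    "bob":   "machine learning deep nets topic models python",
    "carol": "football match goals league cup season",
}

# Fit LDA over the per-author documents (hypothetical corpus, K = 2 topics).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(author_docs.values())
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)        # expected topic proportions, one row per author

# Hand-set regression parameters; in the surveyed work these are learned.
eta = np.ones(theta.shape[1])       # one weight per topic (assumed values)
nu = -1.0                           # intercept (assumed value)

def link_strength(i, j):
    """Predicted connection strength from the element-wise product of topic proportions."""
    return float(np.exp(eta @ (theta[i] * theta[j]) + nu))

authors = list(author_docs)
for i in range(len(authors)):
    for j in range(i + 1, len(authors)):
        print(authors[i], authors[j], round(link_strength(i, j), 4))
```

On this toy data the two machine-learning-oriented authors receive a higher predicted connection strength than either does with the football-oriented author, which is the qualitative behavior the content-based predictor relies on.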
They "set the number of topics in L-LDA as the number of unique labels K in the corpus", and run LDA such that the multinomial mixture distribution θ (d) is defined only for topics corresponding to the labels Λ (d) , the binary list of topics indicating the presence/absence of a topic l inside document d. To enable this constraint, they first generate the document labels Λ (d) for each topic k using a Bernoulli coin toss, with a labeling prior probability Φ k . They subsequently define the document label vector as: If the i th document label and the j th topic are the same, then the (i, j) th element of the L (d) matrix has a value of 1, else zero. The "parameter vector of the Dirichlet prior α = (α 1 , ..., α K ) T " uses the The dimensions of the α (d) vector "correspond to the topics represented by the document labels". Finally, θ (d) is drawn from this Dirichlet distribution. They experiment on Twitter data using the L-LDA technique. They assign topics to user profiles, and measured the similarity of user profile pairs. They find L-LDA to significantly outperform Support Vector Machines (SVM) for user profile classification, in cases where the training data is limited, and provide similar performance as SVM where sufficient training data is available. They thereby infer L-LDA to be a good technique to classify infrequent topics and (short) profiles of users having moderate activity. They treat user profile pair similarities as predictor of social links.
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> Although the inference of global community structure in networks has recently become a topic of great interest in the physics community, all such algorithms require that the graph be completely known. Here, we define both a measure of local community structure and an algorithm that infers the hierarchy of communities that enclose a given vertex by exploring the graph one vertex at a time. This algorithm runs in time O(k2d) for general graphs when d is the mean degree and k is the number of vertices to be explored. For graphs where exploring a new vertex is time consuming, the running time is linear, O(k). We show that on computer-generated graphs the average behavior of this technique approximates that of algorithms that require global knowledge. As an application, we use this algorithm to extract meaningful local clustering information in the large recommender network of an online retailer. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> Nodes in real-world networks, such as social, information or technological networks, organize into communities where edges appear with high concentration among the members of the community. Identifying communities in networks has proven to be a challenging task mainly due to a plethora of definitions of a community, intractability of algorithms, issues with evaluation and the lack of a reliable gold-standard ground-truth. We study a set of 230 large social, collaboration and information networks where nodes explicitly define group memberships. We use these groups to define the notion of ground-truth communities. We then propose a methodology which allows us to compare and quantitatively evaluate different definitions of network communities on a large scale. We choose 13 commonly used definitions of network communities and examine their quality, sensitivity and robustness. We show that the 13 definitions naturally group into four classes. 
We find that two of these definitions, Conductance and Triad-participation-ratio, consistently give the best performance in identifying ground-truth communities. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> Automatic detection of communities (or cohesive groups of actors in social network) in online social media platforms based on user interests and interaction is a problem that has recently attracted a lot of research attention. Mining user interactions on Twitter to discover such communities is a technically challenging information retrieval task. We present an algorithm - iTop - to discover interaction based topic centric communities by mining user interaction signals (such as @-messages and retweets) which indicate cohesion. iTop takes any topic as an input keyword and exploits local information to infer global topic-centric communities. We evaluate the discovered communities along three dimensions: graph based (node-edge quality), empirical-based (Twitter lists) and semantic based (frequent n-grams in tweets). We conduct experiments on a publicly available scrape of Twitter provided by InfoChimps via a web service. We perform a case study on two diverse topics - 'Computer Aided Design (CAD)' and 'Kashmir' to demonstrate the efficacy of iTop. Empirical results from both case studies show that iTop is successfully able to discover topic-centric, interaction based communities on Twitter. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> Homophily suggests that people tend to befriend others with shared traits, such as similar topical interests or overlapping social circles. We study how people communicate online in term of conversation topics from an egocentric viewpoint using a dataset from Facebook. We find that friends who favor similar topics form topic-based clusters; these clusters have dense connectivities, large growth rates, and little overlap. <s> BIB006
In the social network analysis literature, communities are identified in one of the following ways. (a) Individuals subscribe to existing interest groups, and thereby explicitly belong to a community based upon their similarity of interests. (b) Groups of individuals who know each other directly, or who have a large number of mutual friends, are said to belong to the same implicit community. While several definitions of structural communities have emerged over time, modularity-based community finding is the most popular methodology. Modularity-based community finding on a given graph is inherently expensive; BIB003 propose BGLL as a fast approximation algorithm for it. BIB004 investigate structural and functional communities, and the impact of structure on community functions. The literature mostly explores community discovery from explicit links such as social friendships. However, some work also finds communities formed over links inferred from user-generated topics and/or content. In one such work, BIB005 discover topical communities from Twitter tweets. They mine retweets, replies and mentions, collectively labeling these as @-messages. They create an edge between a vertex (user) pair v_x and v_y if I(RT_xy, @_xy) ≠ 0, where I(RT_xy, @_xy) is the @-message based interaction strength between v_x and v_y. They adapt the local modularity (LM) algorithm BIB002 for directed graphs, to discover communities of interest using local information. Their framework comprises four blocks: warm start, expand, filter and iterate. For the warm start, they take a topic of interest t_i as input and conduct a Twitter user bio search, where the bio comprises the publicly available profile information of the user, such as name, location, URL and biography. The users found by this search to have related interests and an inclination towards the topic are included as part of the communities of interest, denoted C^{t_i}_{current}. In the expand step, they take this list of users and add the vertices U^{t_i}, where each β^{t_i} ∈ C^{t_i}_{current} has an edge with at least one vertex in U^{t_i}. The weight of an edge is defined by the closeness of the user pair in terms of @-messages: a directed edge X → Y is drawn from X to Y iff X has interacted with Y, and a weight w is assigned based upon the interaction strength. The expand and filter steps are iterated until the local modularity of the graph is stable or its change is consistently negative, indicating that there is no further room for improvement. Thus, they identify topic-specific global communities, taking a topic as an input keyword. They "evaluate the communities along three dimensions, namely graph (vertex-edge quality), empirical (actual Twitter profiles) and semantic (n-grams frequently appearing in tweets)". In another work, BIB006 explore the Facebook social network for topic-based cluster analysis, and show that friends who favor similar topics form topic-based clusters. The study further shows that these clusters have dense connectivity, large growth rates, and little overlap. Cross-entropy BIB001, which is based upon Kullback-Leibler (K-L) divergence, and normalized mutual information are relevant measures that appear frequently in the literature on communities, user profile pair similarities and topical divergence computation.
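As a rough illustration of discovering communities from interaction-inferred edges and then evaluating them, the sketch below builds a weighted graph from hypothetical @-message interaction strengths, runs an off-the-shelf modularity-based detector (a generic stand-in for the directed local-modularity expansion used by iTop, not the iTop algorithm itself), and scores the result against hypothetical ground-truth labels with normalized mutual information. All interaction counts and ground-truth labels are made up for the example.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical @-message interaction strengths I(RT_xy, @_xy); an edge exists iff non-zero.
interactions = {("a", "b"): 5, ("b", "c"): 3, ("a", "c"): 2,
                ("d", "e"): 4, ("e", "f"): 6, ("d", "f"): 1,
                ("c", "d"): 1}

G = nx.Graph()
for (x, y), w in interactions.items():
    if w != 0:
        G.add_edge(x, y, weight=w)

# Modularity-based community detection over the interaction graph.
communities = greedy_modularity_communities(G, weight="weight")
label_of = {node: cid for cid, com in enumerate(communities) for node in com}

# Hypothetical ground truth (e.g. derived from Twitter lists) for evaluation.
ground_truth = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
nodes = sorted(G.nodes())
nmi = normalized_mutual_info_score([ground_truth[n] for n in nodes],
                                   [label_of[n] for n in nodes])
print(communities, round(nmi, 3))
```

An NMI of 1.0 on this toy graph would indicate that the interaction-based partition exactly matches the assumed ground truth, which is how such agreement measures are typically read in the community-discovery literature.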
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Models of collective behavior are developed for situations where actors have two alternatives and the costs and/or benefits of each depend on how many other actors choose which alternative. The key concept is that of "threshold": the number or proportion of others who must make one decision before a given actor does so; this is the point where net benefits begin to exceed net costs for that particular actor. Beginning with a frequency distribution of thresholds, the models allow calculation of the ultimate or "equilibrium" number making each decision. The stability of equilibrium results against various possible changes in threshold distributions is considered. Stress is placed on the importance of exact distributions distributions for outcomes. Groups with similar average preferences may generate very different results; hence it is hazardous to infer individual dispositions from aggregate outcomes or to assume that behavior was directed by ultimately agreed-upon norms. Suggested applications are to riot ... <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Though word-of-mouth (w-o-m) communications is a pervasive and intriguing phenomenon, little is known on its underlying process of personal communications. Moreover as marketers are getting more interested in harnessing the power of w-o-m, for e-business and other net related activities, the effects of the different communications types on macro level marketing is becoming critical. In particular we are interested in the breakdown of the personal communication between closer and stronger communications that are within an individual's own personal group (strong ties) and weaker and less personal communications that an individual makes with a wide set of other acquaintances and colleagues (weak ties). <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Social networks are of interest to researchers in part because they are thought to mediate the flow of information in communities and organizations. Here we study the temporal dynamics of communication using on-line data, including e-mail communication among the faculty and staff of a large university over a two-year period. We formulate a temporal notion of"distance"in the underlying social network by measuring the minimum time required for information to spread from one node to another -- a concept that draws on the notion of vector-clocks from the study of distributed computing systems. We find that such temporal measures provide structural insights that are not apparent from analyses of the pure social network topology. In particular, we define the network backbone to be the subgraph consisting of edges on which information has the potential to flow the quickest. We find that the backbone is a sparse graph with a concentration of both highly embedded edges and long-range bridges -- a finding that sheds new light on the relationship between tie strength and connectivity in social networks. 
<s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Whether they are modeling bookmarking behavior in Flickr or cascades of failure in large networks, models of diffusion often start with the assumption that a few nodes start long chain reactions, resulting in large-scale cascades. While reasonable under some conditions, this assumption may not hold for social media networks, where user engagement is high and information may enter a system from multiple disconnected sources. Using a dataset of 262,985 Facebook Pages and their associated fans, this paper provides an empirical investigation of diffusion through a large social media network. Although Facebook diffusion chains are often extremely long (chains of up to 82 levels have been observed), they are not usually the result of a single chain-reaction event. Rather, these diffusion chains are typically started by a substantial number of users. Large clusters emerge when hundreds or even thousands of short diffusion chains merge together. This paper presents an analysis of these diffusion chains using zero-inflated negative binomial regressions. We show that after controlling for distribution effects, there is no meaningful evidence that a start node’s maximum diffusion chain length can be predicted with the user's demographics or Facebook usage characteristics (including the user's number of Facebook friends). This may provide insight into future research on public opinion formation. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Microblogging sites are a unique and dynamic Web 2.0 communication medium. Understanding the information flow in these systems can not only provide better insights into the underlying sociology, but is also crucial for applications such as content ranking, recommendation and filtering, spam detection and viral marketing. In this paper, we characterize the propagation of URLs in the social network of Twitter, a popular microblogging site. We track 15 million URLs exchanged among 2.7 million users over a 300 hour period. Data analysis uncovers several statistical regularities in the user activity, the social graph, the structure of the URL cascades and the communication dynamics. Based on these results we propose a propagation model that predicts which users are likely to mention which URLs. The model correctly accounts for more than half of the URL mentions in our data set, while maintaining a false positive rate lower than 15%. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Spreading of information, ideas or diseases can be conveniently modelled in the context of complex networks. An analysis now reveals that the most efficient spreaders are not always necessarily the most connected agents in a network. Instead, the position of an agent relative to the hierarchical topological organization of the network might be as important as its connectivity. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Social media forms a central domain for the production and dissemination of real-time information. 
Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Social media played a central role in shaping political debates in the Arab Spring. A spike in online revolutionary conversations often preceded major events on the ground. Social media helped spread democratic ideas across international borders.No one could have predicted that Mohammed Bouazizi would play a role in unleashing a wave of protest for democracy in the Arab world. Yet, after the young vegetable merchant stepped in front of a municipal building in Tunisia and set himself on fire in protest of the government on December 17, 2010, democratic fervor spread across North Africa and the Middle East.Governments in Tunisia and Egypt soon fell, civil war broke out in Libya, and protestors took to the streets in Algeria, Morocco, Syria, Yemen and elsewhere. The Arab Spring had many causes. One of these sources was social media and its power to put a human face on political oppression. Bouazizi’s self-immolation was one of several stories told and retold on Facebook, Twitter, and YouTube in ways that inspired dissidents to organize protests, criticize their governments, and spread ideas about democracy. Until now, most of what we have known about the role of social media in the Arab Spring has been anecdotal.Focused mainly on Tunisia and Egypt, this research included creating a unique database of information collected from Facebook, Twitter, and YouTube. The research also included creating maps of important Egyptian political websites, examining political conversations in the Tunisian blogosphere, analyzing more than 3 million Tweets based on keywords used, and tracking which countries thousands of individuals tweeted from during the revolutions. The result is that for the first time we have evidence confirming social media’s critical role in the Arab Spring. 
<s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Nodes in real-world networks, such as social, information or technological networks, organize into communities where edges appear with high concentration among the members of the community. Identifying communities in networks has proven to be a challenging task mainly due to a plethora of definitions of a community, intractability of algorithms, issues with evaluation and the lack of a reliable gold-standard ground-truth. We study a set of 230 large social, collaboration and information networks where nodes explicitly define group memberships. We use these groups to define the notion of ground-truth communities. We then propose a methodology which allows us to compare and quantitatively evaluate different definitions of network communities on a large scale. We choose 13 commonly used definitions of network communities and examine their quality, sensitivity and robustness. We show that the 13 definitions naturally group into four classes. We find that two of these definitions, Conductance and Triad-participation-ratio, consistently give the best performance in identifying ground-truth communities. <s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Social networks play a fundamental role in the diffusion of information. However, there are two different ways of how information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from a node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. 
We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network. <s> BIB011
Diffusion of information content on social networks such as Twitter and Facebook has been a major research focus BIB005 BIB008 BIB004. Several information diffusion models, such as Linear Threshold BIB001 and Independent Cascades BIB002, and variations of these models, have been built. These models attempt to capture the diffusion path, the degree of diffusion of specific information on observed social networks, and the role of the influence of participants in the information flow process. BIB005 Proposes a propagation model predicting which URLs each given user will mention, and shows the effectiveness of the model. BIB006 Identifies a network core using k-shell decomposition analysis, where the more central vertices in the graph receive higher k-values; the innermost vertices form the graph core. Shows that the network core members are the best spreaders of information, not the most highly connected or the most centrally located ones. BIB003 Formulates a temporal notion of social network distance, measuring the minimum time for information to spread across a given vertex pair. Defines a network backbone, a subgraph in which information flows the quickest. Shows that the network backbone for information propagation on a social network graph is sparse, with a mix of "highly embedded edges and long-range bridges". BIB009 Quantifies the causal effect of social networks in disseminating information, by identifying who influences whom, and exploring whether users would propagate the same information if the social signals were absent. Experiments with the information sharing behavior of 253 million users. Shows that while stronger ties are individually more influential, the more abundant weak ties are responsible for propagating novel information. Another work hypothesizes that homophily affects the core mechanism behind social information propagation, proposes a dynamic Bayesian network for capturing information diffusion, and shows that considering homophily leads to an improvement of 15%-25% in predicting information diffusion. BIB007 Models the global influence of a node on the "rate of information diffusion through the implicit social network". Proposes the Linear Influence Model, in which the number of newly infected (informed) nodes is a "function of other nodes infected in the past". Shows that the patterns of influence of individual participants differ significantly, depending on node type and topic of information. BIB010 Explores speed, scale and range as major properties of social network information diffusion. Shows that user properties, and the rate at which a user is mentioned, are predictors of information propagation. Shows that the information propagation range for an event is higher for tweets made later. BIB011 Observes that information can flow both through online social networks and through sources outside the network, such as news media, and models information propagation accordingly. Uses hazard functions to quantify external exposure and influence. Applies the model to URLs emerging on Twitter. Shows that, affected by external influence (and not social edges), information jumps across the Twitter network, and quantifies these jumps. Shows that 71% of the information volume diffuses over the Twitter network, while the remaining 29% happens outside the network.
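To make the model families named above concrete, the following is a minimal sketch of a single Monte Carlo run of the Independent Cascades model on a toy graph. The propagation probability, the seed set and the synthetic network are made up for illustration; the surveyed studies estimate such parameters from observed diffusion traces rather than fixing them by hand.

```python
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, rng=None):
    """One Monte Carlo run of the Independent Cascades model.

    Each newly activated node gets a single chance to activate each
    inactive neighbour, succeeding independently with probability p.
    """
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return active

# Toy scale-free network and seed set (hypothetical).
G = nx.barabasi_albert_graph(200, 2, seed=1)
spread = independent_cascade(G, seeds=[0, 1], p=0.05)
print("activated nodes:", len(spread))
```

Averaging the size of `spread` over many runs gives an estimate of expected cascade size for a seed set, which is the basic quantity that threshold and cascade models are used to reason about.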