query_id             stringlengths    32    32
query                stringlengths    6     4.09k
positive_passages    listlengths      1     22
negative_passages    listlengths      10    100
subset               stringclasses    7 values
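The columns above describe one retrieval example per row, and the example rows below follow that structure. A minimal sketch of how such rows could be read and validated, assuming they are stored as JSON lines with exactly these field names (the file path and helper names are hypothetical, not part of the dataset):

```python
import json
from typing import Any, Dict, Iterator

EXPECTED_FIELDS = {"query_id", "query", "positive_passages", "negative_passages", "subset"}

def iter_records(path: str) -> Iterator[Dict[str, Any]]:
    """Yield one retrieval example per line, checking it matches the schema above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            missing = EXPECTED_FIELDS - rec.keys()
            if missing:
                raise ValueError(f"record {rec.get('query_id')} is missing {missing}")
            # Per the header: 1-22 positive passages, 10-100 negative passages,
            # each a {"docid": ..., "text": ..., "title": ...} dict.
            yield rec

# Hypothetical usage:
# for rec in iter_records("examples.jsonl"):
#     print(rec["query_id"], rec["subset"], len(rec["negative_passages"]))
```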
368b5ee483a00e75e00c493cdb4a427a
IoT's Tiny Steps towards 5G: Telco's Perspective
[ { "docid": "48a1e20799ef94432145cefbfb65df25", "text": "The rapidly increasing number of mobile devices, voluminous data, and higher data rate are pushing to rethink the current generation of the cellular mobile communication. The next or fifth generation (5G) cellular networks are expected to meet high-end requirements. The 5G networks are broadly characterized by three unique features: ubiquitous connectivity, extremely low latency, and very high-speed data transfer. The 5G networks would provide novel architectures and technologies beyond state-of-the-art architectures and technologies. In this paper, our intent is to find an answer to the question: “what will be done by 5G and how?” We investigate and discuss serious limitations of the fourth generation (4G) cellular networks and corresponding new features of 5G networks. We identify challenges in 5G networks, new technologies for 5G networks, and present a comparative study of the proposed architectures that can be categorized on the basis of energy-efficiency, network hierarchy, and network types. Interestingly, the implementation issues, e.g., interference, QoS, handoff, security-privacy, channel access, and load balancing, hugely effect the realization of 5G networks. Furthermore, our illustrations highlight the feasibility of these models through an evaluation of existing real-experiments and testbeds.", "title": "" } ]
[ { "docid": "adccd039cc54352eefd855567e8eeb62", "text": "In this paper, we propose a novel classification method for the four types of lung nodules, i.e., well-circumscribed, vascularized, juxta-pleural, and pleural-tail, in low dose computed tomography scans. The proposed method is based on contextual analysis by combining the lung nodule and surrounding anatomical structures, and has three main stages: an adaptive patch-based division is used to construct concentric multilevel partition; then, a new feature set is designed to incorporate intensity, texture, and gradient information for image patch feature description, and then a contextual latent semantic analysis-based classifier is designed to calculate the probabilistic estimations for the relevant images. Our proposed method was evaluated on a publicly available dataset and clearly demonstrated promising classification performance.", "title": "" }, { "docid": "9546092b8db5d22448af61df5f725bbf", "text": "This paper provides a new equivalent circuit model for a spurline filter section in an inhomogeneous coupled-line medium whose even and odd mode phase velocities are unequal. This equivalent circuit permits the exact filter synthesis to be performed easily. Millimeter-wave filters at 26 to 40 GHz and 75 to 110 GHz have been fabricated using the model, and experimental results are included which validate the equivalent circuit model.", "title": "" }, { "docid": "4eca3018852fd3107cb76d1d95f76a0a", "text": "Within the past decade, empirical evidence has emerged supporting the use of Acceptance and Commitment Therapy (ACT) targeting shame and self-stigma. Little is known about the role of self-compassion in ACT, but evidence from other approaches indicates that self-compassion is a promising means of reducing shame and self-criticism. The ACT processes of defusion, acceptance, present moment, values, committed action, and self-as-context are to some degree inherently self-compassionate. However, it is not yet known whether the self-compassion inherent in the ACT approach explains ACT’s effectiveness in reducing shame and stigma, and/or whether focused self-compassion work may improve ACT outcomes for highly self-critical, shame-prone people. We discuss how ACT for shame and stigma may be enhanced by existing approaches specifically targeting self-compassion.", "title": "" }, { "docid": "dbc253488a9f5d272e75b38dc98ea101", "text": "A new form of a hybrid design of a microstrip-fed parasitic coupled ring fractal monopole antenna with semiellipse ground plane is proposed for modern mobile devices having a wireless local area network (WLAN) module along with a Worldwide Interoperability for Microwave Access (WiMAX) function. In comparison to the previous monopole structures, the miniaturized antenna dimension is only about 25 × 25 × 1 mm3 , which is 15 times smaller than the previous proposed design. By only increasing the fractal iterations, very good impedance characteristics are obtained. Throughout this letter, the improvement process of the impedance and radiation properties is completely presented and discussed.", "title": "" }, { "docid": "46f3f27a88b4184a15eeb98366e599ec", "text": "Radiomics is an emerging field in quantitative imaging that uses advanced imaging features to objectively and quantitatively describe tumour phenotypes. Radiomic features have recently drawn considerable interest due to its potential predictive power for treatment outcomes and cancer genetics, which may have important applications in personalized medicine. 
In this technical review, we describe applications and challenges of the radiomic field. We will review radiomic application areas and technical issues, as well as proper practices for the designs of radiomic studies.", "title": "" }, { "docid": "3fcb9ab92334e3e214a7db08a93d5acd", "text": "BACKGROUND\nA growing body of literature indicates that physical activity can have beneficial effects on mental health. However, previous research has mainly focussed on clinical populations, and little is known about the psychological effects of physical activity in those without clinically defined disorders.\n\n\nAIMS\nThe present study investigates the association between physical activity and mental health in an undergraduate university population based in the United Kingdom.\n\n\nMETHOD\nOne hundred students completed questionnaires measuring their levels of anxiety and depression using the Hospital Anxiety and Depression Scale (HADS) and their physical activity regime using the Physical Activity Questionnaire (PAQ).\n\n\nRESULTS\nSignificant differences were observed between the low, medium and high exercise groups on the mental health scales, indicating better mental health for those who engage in more exercise.\n\n\nCONCLUSIONS\nEngagement in physical activity can be an important contributory factor in the mental health of undergraduate students.", "title": "" }, { "docid": "60d6869cadebea71ef549bb2a7d7e5c3", "text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.", "title": "" }, { "docid": "7e4a222322346abc281d72534902d707", "text": "Humic substances (HS) have been widely recognized as a plant growth promoter mainly by changes on root architecture and growth dynamics, which result in increased root size, branching and/or greater density of root hair with larger surface area. 
Stimulation of the H+-ATPase activity in cell membrane suggests that modifications brought about by HS are not only restricted to root structure, but are also extended to the major biochemical pathways since the driving force for most nutrient uptake is the electrochemical gradient across the plasma membrane. Changes on root exudation profile, as well as primary and secondary metabolism were also observed, though strongly dependent on environment conditions, type of plant and its ontogeny. Proteomics and genomic approaches with diverse plant species subjected to HS treatment had often shown controversial patterns of protein and gene expression. This is a clear indication that HS effects of plants are complex and involve non-linear, cross-interrelated and dynamic processes that need be treated with an interdisciplinary view. Being the humic associations recalcitrant to microbiological attack, their use as vehicle to introduce beneficial selected microorganisms to crops has been proposed. This represents a perspective for a sort of new biofertilizer designed for a sustainable agriculture, whereby plants treated with HS become more susceptible to interact with bioinoculants, while HS may concomitantly modify the structure/activity of the microbial community in the rhizosphere compartment. An enhanced knowledge of the effects on plants physiology and biochemistry and interaction with rhizosphere and endophytic microbes should lead to achieve increased crop productivity through a better use of HS inputs in Agriculture.", "title": "" }, { "docid": "5dac8ef81c7a6c508c603b3fd6a87581", "text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.", "title": "" }, { "docid": "ccfba22a7697a9deaedbb7d1ceebbc33", "text": "The Machine Learning field evolved from the broad field of Artificial Intelligence, which aims to mimic intelligent abilities of humans by machines. In the field of Machine Learningone considers the important question of how to make machines able to “learn”. Learning in this context is understood as inductive inference , where one observesexamplesthat represent incomplete information about some “statistical phenomenon”. Inunsupervisedlearning one typically tries to uncover hidden regularities (e.g. 
clusters) or to detect anomalies in the data (for instance some unusual machine function or a network intrusion). Insupervised learning , there is alabel associated with each example. It is supposed to be the answer to a question about the example. If the label is discrete, then the task is called classification problem– otherwise, for realvalued labels we speak of a regression problem. Based on these examples (including the labels), one is particularly interested to predict the answer for other cases before they are explicitly observed. Hence, learning is not only a question of remembering but also ofgeneralization to unseen cases .", "title": "" }, { "docid": "5245cdc023c612de89f36d1573d208fe", "text": "Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.", "title": "" }, { "docid": "79f7f7294f23ab3aace0c4d5d589b4a8", "text": "Along with the expansion of globalization, multilingualism has become a popular social phenomenon. More than one language may occur in the context of a single conversation. This phenomenon is also prevalent in China. A huge variety of informal Chinese texts contain English words, especially in emails, social media, and other user generated informal contents. Since most of the existing natural language processing algorithms were designed for processing monolingual information, mixed multilingual texts cannot be well analyzed by them. Hence, it is of critical importance to preprocess the mixed texts before applying other tasks. In this paper, we firstly analyze the phenomena of mixed usage of Chinese and English in Chinese microblogs. Then, we detail the proposed two-stage method for normalizing mixed texts. We propose to use a noisy channel approach to translate in-vocabulary words into Chinese. For better incorporating the historical information of users, we introduce a novel user aware neural network language model. For the out-of-vocabulary words (such as pronunciations, informal expressions and et al.), we propose to use a graph-based unsupervised method to categorize them. Experimental results on a manually annotated microblog dataset demonstrate the effectiveness of the proposed method. We also evaluate three natural language parsers with and without using the proposed method as the preprocessing step. From the results, we can see that the proposed method can significantly benefit other NLP tasks in processing mixed text.", "title": "" }, { "docid": "3abcfd48703b399404126996ca837f90", "text": "Various inductive loads used in all industries deals with the problem of power factor improvement. Capacitor bank connected in shunt helps in maintaining the power factor closer to unity. They improve the electrical supply quality and increase the efficiency of the system. Also the line losses are also reduced. Shunt capacitor banks are less costly and can be installed anywhere. 
This paper deals with shunt capacitor bank designing for power factor improvement considering overvoltages for substation installation. Keywords— Capacitor Bank, Overvoltage Consideration, Power Factor, Reactive Power", "title": "" }, { "docid": "63cfadd9a71aaa1cbe1ead79f943f83c", "text": "Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches.", "title": "" }, { "docid": "8e878e5083d922d97f8d573c54cbb707", "text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LMResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networkSchool of Mathematical Sciences, Peking University, Beijing, China MGH/BWH Center for Clinical Data Science, Masschusetts General Hospital, Harvard Medical School Center for Data Science in Health and Medicine, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research Beijing International Center for Mathematical Research, Peking University Center for Data Science, Peking University. Correspondence to: Bin Dong <[email protected]>, Quanzheng Li <[email protected]>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). s while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. 
Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.", "title": "" }, { "docid": "58f9c7fd920d7a3c70321afa2aa5794b", "text": "Retrieval of the phase of a signal is one of the major problems in signal processing. For an exact signal reconstruction, both magnitude, and phase spectrum of the signal is required. In many speech-based applications, only the magnitude spectrum is processed and the phase is ignored, which leads to degradation in the performance. Here, we propose a novel technique that enables the reconstruction of the speech signal from magnitude spectrum only. We consider the even-odd part decomposition of a causal sequence and process only on the real part of the DTFT of the signal. We propose the shifting of the real part of DTFT of the sequence to make it non-negative. By adding a constant of sufficient value to the real part of the DTFT, the exact signal reconstruction is possible from the magnitude or power spectrum alone. Moreover, we have compared our proposed approach with recently proposed phase retrieval method from magnitude spectrum of the Causal Delta Dominant (CDD) signal. We found that the method of phase retrieval from CDD signal and proposed method are identical under certain approximation. However, proposed method involves the less computational cost for the exact processing of the signal.", "title": "" }, { "docid": "cd892dec53069137c1c2cfe565375c62", "text": "Optimal application performance on a Distributed Object Based System (DOBS) requires class fragmentation and the development of allocation schemes to place fragments at distributed sites so data transfer is minimized. Fragmentation enhances application performance by reducing the amount of irrelevant data accessed and the amount of data transferred unnecessarily between distributed sites. Algorithms for effecting horizontal and vertical fragmentation ofrelations exist, but fragmentation techniques for class objects in a distributed object based system are yet to appear in the literature. This paper first reviews a taxonomy of the fragmentation problem in a distributed object base. The paper then contributes by presenting a comprehensive set of algorithms for horizontally fragmenting the four realizable class models on the taxonomy. The fundamental approach is top-down, where the entity of fragmentation is the class object. Our approach consists of first generating primary horizontal fragments of a class based on only applications accessing this class, and secondly generating derived horizontal fragments of the class arising from primary fragments of its subclasses, its complex attributes (contained classes), and/or its complex methods classes. Finally, we combine the sets of primary and derived fragments of each class to produce the best possible fragments. Thus, these algorithms account for inheritance and class composition hierarchies as well as method nesting among objects, and are shown to be polynomial time.", "title": "" }, { "docid": "4fac911d679240b84decef6618b97b4b", "text": "A floating-gate current-output analog memory is implemented in a 0.13-μm digital CMOS process. The proposed memory cell achieves random-accessible and bidirectional updates with a sigmoid update rule. 
A novel writing scheme is proposed to obtain tunneling selectivity without on-chip highvoltage switches or charge pumps, and reduces interconnections and pin count. Parameters of empirical models for floating gate charge modification are extracted from measurements. Measurement and simulation results show that the proposed memory consumes 45 nW of power, has a 7-bit programming resolution, 53.8 dB dynamic range and 86.5 dB writing isolation.", "title": "" }, { "docid": "354bc052f75e7884baca157492f5004c", "text": "This paper is about how the SP theory of intelligence and its realization in the SP machine may, with advantage, be applied to the management and analysis of big data. The SP system-introduced in this paper and fully described elsewhere-may help to overcome the problem of variety in big data; it has potential as a universal framework for the representation and processing of diverse kinds of knowledge, helping to reduce the diversity of formalisms and formats for knowledge, and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central in the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualization of knowledge structures and inferential processes. A high-parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it.", "title": "" }, { "docid": "9d73ff3f8528bb412c585d802873fcb4", "text": "In this work, we introduce a novel interpretation of residual networks showing they are exponential ensembles. This observation is supported by a large-scale lesion study that demonstrates they behave just like ensembles at test time. Subsequently, we perform an analysis showing these ensembles mostly consist of networks that are each relatively shallow. For example, contrary to our expectations, most of the gradient in a residual network with 110 layers comes from an ensemble of very short networks, i.e., only 10-34 layers deep. This suggests that in addition to describing neural networks in terms of width and depth, there is a third dimension: multiplicity, the size of the implicit ensemble. Ultimately, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network – rather, they avoid the problem simply by ensembling many short networks together. This insight reveals that depth is still an open research question and invites the exploration of the related notion of multiplicity.", "title": "" } ]
scidocsrr
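A record like the one above can be flattened into labelled query-passage pairs for training or evaluating a reranker. A minimal sketch under that assumption (the function name and the title/text concatenation are illustrative choices, not prescribed by the dataset):

```python
from typing import Any, Dict, List, Tuple

def record_to_pairs(record: Dict[str, Any]) -> List[Tuple[str, str, int]]:
    """Turn one example into (query, passage, label) pairs: 1 = positive, 0 = negative."""
    pairs: List[Tuple[str, str, int]] = []
    query = record["query"]
    for label, field in ((1, "positive_passages"), (0, "negative_passages")):
        for passage in record[field]:
            # Prepend the title when it is non-empty (titles are blank in the rows shown here).
            text = f'{passage.get("title", "")} {passage["text"]}'.strip()
            pairs.append((query, text, label))
    return pairs

# For the record above this would yield one positive pair (the 5G survey abstract)
# and one negative pair per entry in negative_passages.
```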
7376385e8b2bbcc41a0bc809cc806f5f
Isolation and Emotions in the Workplace: The Influence of Perceived Media Richness and Virtuality
[ { "docid": "4506bc1be6e7b42abc34d79dc426688a", "text": "The growing interest in Structured Equation Modeling (SEM) techniques and recognition of their importance in IS research suggests the need to compare and contrast different types of SEM techniques so that research designs can be selected appropriately. After assessing the extent to which these techniques are currently being used in IS research, the article presents a running example which analyzes the same dataset via three very different statistical techniques. It then compares two classes of SEM: covariance-based SEM and partial-least-squaresbased SEM. Finally, the article discusses linear regression models and offers guidelines as to when SEM techniques and when regression techniques should be used. The article concludes with heuristics and rule of thumb thresholds to guide practice, and a discussion of the extent to which practice is in accord with these guidelines.", "title": "" }, { "docid": "e4d38d8ef673438e9ab231126acfda99", "text": "The trend toward physically dispersed work groups has necessitated a fresh inquiry into the role and nature of team leadership in virtual settings. To accomplish this, we assembled thirteen culturally diverse global teams from locations in Europe, Mexico, and the United States, assigning each team a project leader and task to complete. The findings suggest that effective team leaders demonstrate the capability to deal with paradox and contradiction by performing multiple leadership roles simultaneously (behavioral complexity). Specifically, we discovered that highly effective virtual team leaders act in a mentoring role and exhibit a high degree of understanding (empathy) toward other team members. At the same time, effective leaders are also able to assert their authority without being perceived as overbearing or inflexible. Finally, effective leaders are found to be extremely effective at providing regular, detailed, and prompt communication with their peers and in articulating role relationships (responsibilities) among the virtual team members. This study provides useful insights for managers interested in developing global virtual teams, as well as for academics interested in pursuing virtual team research. 8 KAYWORTH AND LEIDNER", "title": "" }, { "docid": "aff9d415a725b9e1ea65897af2715729", "text": "Survey research is believed to be well understood and applied by MIS scholars. It has been applied for several years, it is well defined, and it has precise procedures which, when followed closely, yield valid and easily interpretable data. Our assessment of the use of survey research in the MIS field between 1980 and 1990 indicates that this perception is at odds with reality. Our analysis indicates that survey methodology is often misapplied and is plagued by five important weaknesses: (1) single method designs where multiple methods are needed, (2) unsystematic and often inadequate sampling procedures, (3) low response rates, (4) weak linkages between units of analysis and respondents, and (5) over reliance on cross-sectional surveys where longitudinal surveys are really needed. Our assessment also shows that the quality of survey research varies considerably among studies of different purposes: explanatory studies are of good quality overall, exploratory and descriptive studies are of moderate to poor quality. 
This article presents a general framework for classifying and examining survey research and uses this framework to assess, review and critique the usage of survey research conducted in the past decade in the MIS field. In an effort to improve the quality of survey research, this article makes specific recommendations that directly address the major problems highlighted in the review. AUTHORS' BIOGRAPHIES Alain Pinsonneault holds a Ph.d. in administration from University of California at Irvine (1990) and a M.Sc. in Management Information Systems from Ecole des Hautes Etudes Commerciales de Montreal (1986). His current research interests include the organizational implications of computing, especially with regard to the centralization/decentralization of decision making authority and middle managers workforce; the strategic and political uses of computing, the use of information technology to support group decision making process; and the benefits of computing. He has published articles in Decision Support Systems, European Journal of Operational Research, and in Management Information Systems Quarterly, and one book chapter. He has also given numerous conferences and he is an associate editor of Informatization and the Public Sector journal. His doctoral dissertation won the 1990 International Center for Information Technology Doctoral Award. Kenneth L. Kraemer is the Director of the Public Policy Research Organization and Professor of Management and Information and Computer Science. He holds a Ph.D. from University of Southern California. Professor Kraemer has conducted research into the management of computing in organizations for more than 20 years. He is currently studying the diffusion of computing in Asia-Pacific countries, the dynamics of computing development in organizations, the impacts of computing on productivity in the work environment, and policies for successful implementation of computer-based information systems. In addition, Professor Kraemer is coeditor of a series of books entitled Computing, Organization, Policy, and Society (CORPS) published by Columbia University Press. He has published numerous books on computing, the most recent of which being Managing Information Systems. He has served as a consultant to the Department of Housing and Urban Development, the Office of Technology Assessment and the United Nations, and as a national expert to the Organization for Economic Cooperation and Development. He was recently Shaw Professor in Information Systems and Computer Sciences at the National University of Singapore.", "title": "" } ]
[ { "docid": "f845508acabb985dd80c31774776e86b", "text": "In this paper, we introduce two input devices for wearable computers, called GestureWrist and GesturePad. Both devices allow users to interact with wearable or nearby computers by using gesture-based commands. Both are designed to be as unobtrusive as possible, so they can be used under various social contexts. The first device, called GestureWrist, is a wristband-type input device that recognizes hand gestures and forearm movements. Unlike DataGloves or other hand gesture-input devices, all sensing elements are embedded in a normal wristband. The second device, called GesturePad, is a sensing module that can be attached on the inside of clothes, and users can interact with this module from the outside. It transforms conventional clothes into an interactive device without changing their appearance.", "title": "" }, { "docid": "2f3734b49e9d2e6ea7898622dac8a296", "text": "Dropout prediction in MOOCs is a well-researched problem where we classify which students are likely to persist or drop out of a course. Most research into creating models which can predict outcomes is based on student engagement data. Why these students might be dropping out has only been studied through retroactive exit surveys. This helps identify an important extension area to dropout prediction— how can we interpret dropout predictions at the student and model level? We demonstrate how existing MOOC dropout prediction pipelines can be made interpretable, all while having predictive performance close to existing techniques. We explore each stage of the pipeline as design components in the context of interpretability. Our end result is a layer which longitudinally interprets both predictions and entire classification models of MOOC dropout to provide researchers with in-depth insights of why a student is likely to dropout.", "title": "" }, { "docid": "c2659be74498ec68c3eb5509ae11b3c3", "text": "We focus on modeling human activities comprising multiple actions in a completely unsupervised setting. Our model learns the high-level action co-occurrence and temporal relations between the actions in the activity video. We consider the video as a sequence of short-term action clips, called action-words, and an activity is about a set of action-topics indicating which actions are present in the video. Then we propose a new probabilistic model relating the action-words and the action-topics. It allows us to model long-range action relations that commonly exist in the complex activity, which is challenging to capture in the previous works. We apply our model to unsupervised action segmentation and recognition, and also to a novel application that detects forgotten actions, which we call action patching. For evaluation, we also contribute a new challenging RGB-D activity video dataset recorded by the new Kinect v2, which contains several human daily activities as compositions of multiple actions interacted with different objects. The extensive experiments show the effectiveness of our model.", "title": "" }, { "docid": "89281eed8f3faadcf0bc07bd151728a4", "text": "The Internet of Things (IoT) continues to increase in popularity as more “smart” devices are released and sold every year. Three protocols in particular, Zigbee, Z-wave, and Bluetooth Low Energy (BLE) are used for network communication on a significant number of IoT devices. 
However, devices utilizing each of these three protocols have been compromised due to either implementation failures by the manufacturer or security shortcomings in the protocol itself. This paper identifies the security features and shortcomings of each protocol citing employed attacks for reference. Additionally, it will serve to help manufacturers make two decisions: First, should they invest in creating their own protocol, and second, if they decide against this, which protocol should they use and how should they implement it to ensure their product is as secure as it can be. These answers are made with respect to the specific factors manufacturers in the IoT space face such as the reversed CIA model with availability usually being the most important of the three and the ease of use versus security tradeoff that manufacturers have to consider. This paper finishes with a section aimed at future research for IoT communication protocols.", "title": "" }, { "docid": "4159f4f92adea44577319e897f10d765", "text": "While our knowledge about ancient civilizations comes mostly from studies in archaeology and history books, much can also be learned or confirmed from literary texts . Using natural language processing techniques, we present aspects of ancient China as revealed by statistical textual analysis on the Complete Tang Poems , a 2.6-million-character corpus of all surviving poems from the Tang Dynasty (AD 618 —907). Using an automatically created treebank of this corpus , we outline the semantic profiles of various poets, and discuss the role of s easons, geography, history, architecture, and colours , as observed through word selection and dependencies.", "title": "" }, { "docid": "a08aa88aa3b4249baddbd8843e5c9be3", "text": "We present the design, implementation, evaluation, and user ex periences of theCenceMe application, which represents the first system that combines the inference of the presence of individuals using off-the-shelf, sensor-enabled mobile phones with sharing of this information through social networking applications such as Facebook and MySpace. We discuss the system challenges for the development of software on the Nokia N95 mobile phone. We present the design and tradeoffs of split-level classification, whereby personal sensing presence (e.g., walking, in conversation, at the gym) is derived from classifiers which execute in part on the phones and in part on the backend servers to achieve scalable inference. We report performance measurements that characterize the computational requirements of the software and the energy consumption of the CenceMe phone client. We validate the system through a user study where twenty two people, including undergraduates, graduates and faculty, used CenceMe continuously over a three week period in a campus town. From this user study we learn how the system performs in a production environment and what uses people find for a personal sensing system.", "title": "" }, { "docid": "e742aa091dae6227994cffcdb5165769", "text": "In this paper, a new adaptive multi-batch experience replay scheme is proposed for proximal policy optimization (PPO) for continuous action control. On the contrary to original PPO, the proposed scheme uses the batch samples of past policies as well as the current policy for the update for the next policy, where the number of the used past batches is adaptively determined based on the oldness of the past batches measured by the average importance sampling (IS) weight. 
The new algorithm constructed by combining PPO with the proposed multi-batch experience replay scheme maintains the advantages of original PPO such as random minibatch sampling and small bias due to low IS weights by storing the pre-computed advantages and values and adaptively determining the mini-batch size. Numerical results show that the proposed method significantly increases the speed and stability of convergence on various continuous control tasks compared to original PPO.", "title": "" }, { "docid": "63db10a21fcfc659e350d5bf6df47166", "text": "This research proposes the design, simulation, and implementation of a three-phase induction motor driver, using voltage-fed Space Vector Pulse Width Modulation technique (SVPWM), which is an advance and modern technique. The SVPWM provides maximum usage of the DC link. A MATLAB/SIMULINK program is prepared for simulating the overall drive system which include; voltage-fed space vector PWM inverter model and three-phase induction motor model. A practical model is designed by imitate the conceptions of TMS320 (DSP) microcontroller. This practical model is completely implemented and exact results are obtained. The worst state of the harmonics content of the voltage and current (no-load condition) are analyzed. This analysis shows high reduction in the dominant harmonics and very low total harmonic distortion (THD) when SVPWM is used (less than 5%), compared to (more than 20%) in square wave. Experimental and simulation results have verified the superior performance and the effectiveness in reduction the harmonic losses and switching losses.", "title": "" }, { "docid": "3abcfd48703b399404126996ca837f90", "text": "Various inductive loads used in all industries deals with the problem of power factor improvement. Capacitor bank connected in shunt helps in maintaining the power factor closer to unity. They improve the electrical supply quality and increase the efficiency of the system. Also the line losses are also reduced. Shunt capacitor banks are less costly and can be installed anywhere. This paper deals with shunt capacitor bank designing for power factor improvement considering overvoltages for substation installation. Keywords— Capacitor Bank, Overvoltage Consideration, Power Factor, Reactive Power", "title": "" }, { "docid": "63b04046e1136290a97f885783dda3bd", "text": "This paper considers the design of secondary wireless mesh networks which use leased frequency channels. In a given geographic region, the available channels are individually priced and leased exclusively through a primary spectrum owner. The usage of each channel is also subject to published interference constraints so that the primary user is not adversely affected. When the network is designed and deployed, the secondary user would like to minimize the costs of using the required resources while satisfying its own traffic and interference requirements. This problem is formulated as a mixed integer optimization which gives the optimum deployment cost as a function of the secondary node positioning, routing, and frequency allocations. Because of the problem's complexity, the optimum result can only be found for small problem sizes. To accommodate more practical deployments, two algorithms are proposed and their performance is compared to solutions obtained from the optimization. The first algorithm is a greedy flow-based scheme (GFB) which iterates over the individual node flows based on solving a much simpler optimization at each step. 
The second algorithm (ILS) uses an iterated local search whose initial solution is based on constrained shortest path routing. Our results show that the proposed algorithms perform well for a variety of network scenarios.", "title": "" }, { "docid": "e2a97f90f42dcaf5b8b703c5eb47a757", "text": "Metamaterials (MMs) have been proposed to improve the performance of wireless power transfer (WPT) systems. The performance of identical unit cells having the same transmitter and receiver self-resonance is presented in the literature. This paper presents the optimization of tunable MM for performance improvement in WPT systems. Furthermore, a figure of merit (FOM) is proposed for the optimization of WPT systems with MMs. It is found that both transferred power and power transfer efficiency can be improved significantly by using the proposed FOM and tunable MM, particularly under misaligned conditions.", "title": "" }, { "docid": "447bfee37117b77534abe2cf6cfd8a17", "text": "Detailed characterization of the cell types in the human brain requires scalable experimental approaches to examine multiple aspects of the molecular state of individual cells, as well as computational integration of the data to produce unified cell-state annotations. Here we report improved high-throughput methods for single-nucleus droplet-based sequencing (snDrop-seq) and single-cell transposome hypersensitive site sequencing (scTHS-seq). We used each method to acquire nuclear transcriptomic and DNA accessibility maps for >60,000 single cells from human adult visual cortex, frontal cortex, and cerebellum. Integration of these data revealed regulatory elements and transcription factors that underlie cell-type distinctions, providing a basis for the study of complex processes in the brain, such as genetic programs that coordinate adult remyelination. We also mapped disease-associated risk variants to specific cellular populations, which provided insights into normal and pathogenic cellular processes in the human brain. This integrative multi-omics approach permits more detailed single-cell interrogation of complex organs and tissues.", "title": "" }, { "docid": "ccc6651b9bf4fcaa905d8e1bc7f9b6b4", "text": "We introduce computational network (CN), a unified framework for describing arbitrary learning machines, such as deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short term memory (LSTM), logistic regression, and maximum entropy model, that can be illustrated as a series of computational steps. A CN is a directed graph in which each leaf node represents an input value or a parameter and each non-leaf node represents a matrix operation upon its children. We describe algorithms to carry out forward computation and gradient calculation in CN and introduce most popular computation node types used in a typical CN. We further introduce the computational network toolkit (CNTK), an implementation of CN that supports both GPU and CPU. We describe the architecture and the key components of the CNTK, the command line options to use CNTK, and the network definition and model editing language, and provide sample setups for acoustic model, language model, and spoken language understanding. 
We also describe the Argon speech recognition decoder as an example to integrate with CNTK.", "title": "" }, { "docid": "d35bc5ef2ea3ce24bbba87f65ae93a25", "text": "Fog computing, complementary to cloud computing, has recently emerged as a new paradigm that extends the computing infrastructure from the center to the edge of the network. This article explores the design of a fog computing orchestration framework to support IoT applications. In particular, we focus on how the widely adopted cloud computing orchestration framework can be customized to fog computing systems. We first identify the major challenges in this procedure that arise due to the distinct features of fog computing. Then we discuss the necessary adaptations of the orchestration framework to accommodate these challenges.", "title": "" }, { "docid": "5d1fbf1b9f0529652af8d28383ce9a34", "text": "Automatic License Plate Recognition (ALPR) is one of the most prominent tools in intelligent transportation system applications. In ALPR algorithm implementation, License Plate Detection (LPD) is a critical stage. Despite many state-of-the-art researches, some parameters such as low/high illumination, type of camera, or a different style of License Plate (LP) causes LPD step is still a challenging problem. In this paper, we propose a new style-free method based on the cross power spectrum. Our method has three steps; designing adaptive binarized filter, filtering using cross power spectrum and verification. Experimental results show that the recognition accuracy of the proposed approach is 98% among 2241 Iranian cars images including two styles of the LP. In addition, the process of the plate detection takes 44 milliseconds, which is suitable for real-time processing.", "title": "" }, { "docid": "a9f70ea201e17bca3b97f6ef7b2c1c15", "text": "Network embedding task aims at learning low-dimension latent representations of vertices while preserving the structure of a network simultaneously. Most existing network embedding methods mainly focus on static networks, which extract and condense the network information without temporal information. However, in the real world, networks keep evolving, where the linkage states between the same vertex pairs at consequential timestamps have very close correlations. In this paper, we propose to study the network embedding problem and focus on modeling the linkage evolution in the dynamic network setting. To address this problem, we propose a deep dynamic network embedding method. More specifically, the method utilizes the historical information obtained from the network snapshots at past timestamps to learn latent representations of the future network. In the proposed embedding method, the objective function is carefully designed to incorporate both the network internal and network dynamic transition structures. Extensive empirical experiments prove the effectiveness of the proposed model on various categories of real-world networks, including a human contact network, a bibliographic network, and e-mail networks. Furthermore, the experimental results also demonstrate the significant advantages of the method compared with both the state-of-the-art embedding techniques and several existing baseline methods.", "title": "" }, { "docid": "1a259f28221e8045568e5053ddc4ede1", "text": "The decision tree-based classification is a popular approach for pattern recognition and data mining. Most decision tree induction methods assume training data being present at one central location. 
Given the growth in distributed databases at geographically dispersed locations, the methods for decision tree induction in distributed settings are gaining importance. This paper describes one distributed learning algorithm which extends the original(centralized) CHAID algorithm to its distributed version. This distributed algorithm generates exactly the same results as its centralized counterpart. For completeness, a distributed quantization method is proposed so that continuous data can be processed by our algorithm. Experimental results for several well known data sets are presented and compared with decision trees generated using CHAID with centrally stored data.", "title": "" }, { "docid": "e0450f09c579ddda37662cbdfac4265c", "text": "Deep neural networks (DNNs) have recently achieved a great success in various learning task, and have also been used for classification of environmental sounds. While DNNs are showing their potential in the classification task, they cannot fully utilize the temporal information. In this paper, we propose a neural network architecture for the purpose of using sequential information. The proposed structure is composed of two separated lower networks and one upper network. We refer to these as LSTM layers, CNN layers and connected layers, respectively. The LSTM layers extract the sequential information from consecutive audio features. The CNN layers learn the spectro-temporal locality from spectrogram images. Finally, the connected layers summarize the outputs of two networks to take advantage of the complementary features of the LSTM and CNN by combining them. To compare the proposed method with other neural networks, we conducted a number of experiments on the TUT acoustic scenes 2016 dataset which consists of recordings from various acoustic scenes. By using the proposed combination structure, we achieved higher performance compared to the conventional DNN, CNN and LSTM architecture.", "title": "" }, { "docid": "e668f84e16a5d17dff7d638a5543af82", "text": "Mining topics in Twitter is increasingly attracting more attention. However, the shortness and informality of tweets leads to extreme sparse vector representation with a large vocabulary, which makes the conventional topic models (e.g., Latent Dirichlet Allocation) often fail to achieve high quality underlying topics. Luckily, tweets always show up with rich user-generated hash tags as keywords. In this paper, we propose a novel topic model to handle such semi-structured tweets, denoted as Hash tag Graph based Topic Model (HGTM). By utilizing relation information between hash tags in our hash tag graph, HGTM establishes word semantic relations, even if they haven't co-occurred within a specific tweet. In addition, we enhance the dependencies of both multiple words and hash tags via latent variables (topics) modeled by HGTM. We illustrate that the user-contributed hash tags could serve as weakly-supervised information for topic modeling, and hash tag relation could reveal the semantic relation between tweets. 
Experiments on a real-world twitter data set show that our model provides an effective solution to discover more distinct and coherent topics than the state-of-the-art baselines and has a strong ability to control sparseness and noise in tweets.", "title": "" }, { "docid": "82c6906aec894bde04e773ebf4961864", "text": "OBJECTIVE\nTo identify the biomechanical feasibility of the thoracic extrapedicular approach to the placement of screws.\n\n\nMETHODS\nFive fresh adult cadaveric thoracic spine from T1 to T8 were harvested. The screw was inserted either by pedicular approach or extrapedicular approach. The result was observed and the pullout strength by pedicular screw approach and extrapedicular screw approach via sagittal axis of the vertebrale was measured and compared statistically.\n\n\nRESULTS\nIn thoracic pedicular approach, the pullout strength of pedicle screw was 1001.23 N+/-220 N (288.2-1561.7 N)ls and that of thoracic extrapedicular screw approach was 827.01 N+/-260 N when screw was inserted into the vertebrae through transverse process, and 954.25 N+/-254 N when screw was inserted into the vertebrae through the lateral cortex of the pedicle. Compared with pedicular group, the pullout strength in extrapedicular group was decreased by 4.7% inserted through transverse process (P larger than 0.05) and by 17.3% inserted through the lateral cortex (P less than 0.05). The mean pullout strength by extrapedicular approach was decreased by 11.04% as compared with pedicular approach (P less than 0.05).\n\n\nCONCLUSIONS\nIt is feasible biomechanically to use extrapedicular screw technique to insert pedicular screws in the thoracic spine when it is hard to insert by pedicular approach.", "title": "" } ]
scidocsrr
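Because each row pairs a query with its relevant and irrelevant passages, reranking quality can be scored per row and averaged. A minimal sketch of a mean-reciprocal-rank style evaluation, assuming a `score(query, passage_text)` function is supplied by the model under test (the scoring function is not part of the dataset):

```python
from typing import Any, Callable, Dict, Iterable

def reciprocal_rank(record: Dict[str, Any], score: Callable[[str, str], float]) -> float:
    """Rank positives and negatives together by score; return 1/rank of the best-ranked positive."""
    query = record["query"]
    scored = []
    for field, is_positive in (("positive_passages", True), ("negative_passages", False)):
        for passage in record[field]:
            scored.append((score(query, passage["text"]), is_positive))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    for rank, (_, is_positive) in enumerate(scored, start=1):
        if is_positive:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(records: Iterable[Dict[str, Any]],
                         score: Callable[[str, str], float]) -> float:
    """Average the per-record reciprocal ranks over a collection of rows."""
    ranks = [reciprocal_rank(r, score) for r in records]
    return sum(ranks) / len(ranks) if ranks else 0.0
```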
a41f3178dd5aa19ea156e5e68d392e7c
Review of hardware cost estimation methods, models and tools applied to early phases of space mission planning
[ { "docid": "f052fae696370910cc59f48552ddd889", "text": "Decisions involve many intangibles that need to be traded off. To do that, they have to be measured along side tangibles whose measurements must also be evaluated as to, how well, they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents, how much more, one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding for all such nodes. An illustration is included.", "title": "" } ]
[ { "docid": "e69f71cc98bce195d0cfb77ecdc31088", "text": "Wheat grass juice is the juice extracted from the pulp of wheat grass and has been used as a general-purpose health tonic for several years. Several of our patients in the thalassemia unit began consuming wheat grass juice after anecdotal accounts of beneficial effects on transfusion requirements. These encouraging experiences prompted us to evaluate the effect of wheat grass juice on transfusion requirements in patients with transfusion dependent beta thalassemia. Families of patients raised the wheat grass at home in kitchen garden/pots. The patients consumed about 100 mL of wheat grass juice daily. Each patient acted as his own control. Observations recorded during the period of intake of wheat grass juice were compared with one-year period preceding it. Variables recorded were the interval between transfusions, pre-transfusion hemoglobin, amount of blood transfused and the body weight. A beneficial effect of wheat grass juice was defined as decrease in the requirement of packed red cells (measured as grams/Kg body weight/year) by 25% or more. 16 cases were analyzed. Blood transfusion requirement fell by >25% in 8 (50%) patients with a decrease of >40% documented in 3 of these. No perceptible adverse effects were recognized.", "title": "" }, { "docid": "a2622b1e0c1c58a535ec11a5075d1222", "text": "The condition of a machine can automatically be identified by creating and classifying features that summarize characteristics of measured signals. Currently, experts, in their respective fields, devise these features based on their knowledge. Hence, the performance and usefulness depends on the expert's knowledge of the underlying physics or statistics. Furthermore, if new and additional conditions should be detectable, experts have to implement new feature extraction methods. To mitigate the drawbacks of feature engineering, a method from the subfield of feature learning, i.e., deep learning (DL), more specifically convolutional neural networks (NNs), is researched in this paper. The objective of this paper is to investigate if and how DL can be applied to infrared thermal (IRT) video to automatically determine the condition of the machine. By applying this method on IRT data in two use cases, i.e., machine-fault detection and oil-level prediction, we show that the proposed system is able to detect many conditions in rotating machinery very accurately (i.e., 95 and 91.67% accuracy for the respective use cases), without requiring any detailed knowledge about the underlying physics, and thus having the potential to significantly simplify condition monitoring using complex sensor data. Furthermore, we show that by using the trained NNs, important regions in the IRT images can be identified related to specific conditions, which can potentially lead to new physical insights.", "title": "" }, { "docid": "53aeddc466479c710c132a19513426f6", "text": "This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback-Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. 
We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action-constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution-that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control.", "title": "" }, { "docid": "5fd0b013ee2778ac6328729566eb1481", "text": "As more and more virtual machines (VM) are packed into a physical machine, refactoring common kernel components shared by the virtual machines running on the same physical machine significantly reduces the overall resource consumption. A refactored kernel component typically runs on a special VM called a virtual appliance. Because of the semantics gap in Hardware Abstraction Layer (HAL)-based virtualization, a physical machine's virtual appliance requires the support of per-VM in-guest agents to perform VM-specific operations such as kernel data structure access and modification. To simplify deployment, these agents must be injected into guest virtual machines without requiring any manual installation. Moreover, it is essential to protect the integrity of in-guest agents at run time, especially when the underlying refactored kernel service is security-related. This paper describes the design, implementation and evaluation of a surreptitious kernel agent deployment and execution mechanism called SADE that requires zero installation effort and effectively hides the execution of agent code. To demonstrate the efficacy of SADE, we describe a signature-based memory scanning virtual appliance that uses SADE to inject its in-guest kernel agents without any support from the injected virtual machine, and show that both the start-up overhead and the run-time performance penalty of SADE are quite modest in practice.", "title": "" }, { "docid": "28016e339bab5c1f5daa6bf26c3a06dd", "text": "In this paper, we propose a straightforward solution to the problems of compositional parallel programming by using skeletons as the uniform mechanism for structured composition. In our approach parallel programs are constructed by composing procedures in a conventional base language using a set of high-level, pre-defined, functional, parallel computational forms known as skeletons. The ability to compose skeletons provides us with the essential tools for building further and more complex application-oriented skeletons specifying important aspects of parallel computation. 
Compared with the process network based composition approach, such as PCN, the skeleton approach abstracts away the fine details of connecting communication ports to the higher level mechanism of making data distributions conform, thus avoiding the complexity of using lower level ports as the means of interaction. Thus, the framework provides a natural integration of the compositional programming approach with the data parallel programming paradigm.", "title": "" }, { "docid": "dcfc6f3c1eba7238bd6c6aa18dcff6df", "text": "With the evaluation and simulation of long-term evolution/4G cellular network and hot discussion about new technologies or network architecture for 5G, the appearance of simulation and evaluation guidelines for 5G is in urgent need. This paper analyzes the challenges of building a simulation platform for 5G considering the emerging new technologies and network architectures. Based on the overview of evaluation methodologies issued for 4G candidates, challenges in 5G evaluation are formulated. Additionally, a cloud-based two-level framework of system-level simulator is proposed to validate the candidate technologies and fulfill the promising technology performance identified for 5G.", "title": "" }, { "docid": "f60dfa21c052672d526e326c29be9447", "text": "The intense competition that accompanied the growth of internet-based companies ushered in the era of 'big data' characterized by major innovations in processing of very large amounts of data and the application of advanced analytics including data mining and machine learning. Healthcare is on the cusp of its own era of big data, catalyzed by the changing regulatory and competitive environments, fueled by growing adoption of electronic health records, as well as efforts to integrate medical claims, electronic health records and other novel data sources. Applying the lessons from big data pioneers will require healthcare and life science organizations to make investments in new hardware and software, as well as in individuals with different skills. For life science companies, this will impact the entire pharmaceutical value chain from early research to postcommercialization support. More generally, this will revolutionize comparative effectiveness research.", "title": "" }, { "docid": "b81c04540487e09937401130ccd53ee2", "text": "Some design and operation aspects of axial flux permanent magnet synchronous machines, wound with concentrated coils, are presented. Due to their high number of poles, compactness, and excellent waveform quality and efficiency, these machines show satisfactory operation at low speeds, both as direct drive generators and as motors. In this paper, after a general analysis of the model and design features of this kind of machine, the attention is focused on wind power generation: The main sizing equations are defined, and the most relevant figures of merit are examined by means of a suitable parametric analysis. Some experimental results obtained by testing a three-phase, 50-kW, and 70-rpm prototype are presented and discussed, validating the modeling theory and the design procedure.", "title": "" }, { "docid": "0ff8c4799b62c70ef6b7d70640f1a931", "text": "Using on-chip interconnection networks in place of ad-hoc glo-bal wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. 
The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest; we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.", "title": "" }, { "docid": "8d092dfa88ba239cf66e5be35fcbfbcc", "text": "We present VideoWhisper, a novel approach for unsupervised video representation learning. Based on the observation that the frame sequence encodes the temporal dynamics of a video (e.g., object movement and event evolution), we treat the frame sequential order as a self-supervision to learn video representations. Unlike other unsupervised video feature learning methods based on frame-level feature reconstruction that is sensitive to visual variance, VideoWhisper is driven by a novel video “sequence-to-whisper” learning strategy. Specifically, for each video sequence, we use a prelearned visual dictionary to generate a sequence of high-level semantics, dubbed “whisper,” which can be considered as the language describing the video dynamics. In this way, we model VideoWhisper as an end-to-end sequence-to-sequence learning model using attention-based recurrent neural networks. This model is trained to predict the whisper sequence and hence it is able to learn the temporal structure of videos. We propose two ways to generate video representations from the model. Through extensive experiments on two real-world video datasets, we demonstrate that the video representation learned by VideoWhisper is effective in boosting fundamental multimedia applications such as video retrieval and event classification.", "title": "" }, { "docid": "e9b942c71646f2907de65c2641329a66", "text": "In many vision-based applications, identifying moving objects is an important and critical task. For many computer vision applications, background subtraction is a fast way to detect moving objects. Background subtraction separates the foreground from the background. However, background subtraction is unable to remove shadows from the foreground. The moving cast shadow associated with a moving object also gets detected, making it a challenge for video surveillance. The shadow makes it difficult to detect the exact shape of the object and to recognize it.", "title": "" }, { "docid": "93ae39ed7b4d6b411a2deb9967e2dc7d", "text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.", "title": "" }, { "docid": "7d604a9daef9b10c31ac74ecc60bd690", "text": "Sentiment analysis is treated as a classification task as it classifies the orientation of a text into either positive or negative. This paper describes experimental results that applied Support Vector Machine (SVM) on benchmark datasets to train a sentiment classifier. N-grams and different weighting schemes were used to extract the most classical features. It also explores Chi-Square weight features to select informative features for the classification. 
Experimental analysis reveals that using Chi-Square feature selection may provide a significant improvement in classification accuracy.", "title": "" }, { "docid": "f20f924fc0e975e0a4b2107692e6bd4c", "text": "One of the ultimate goals of open ended learning systems is to take advantage of experience to get a future benefit. We can identify two levels in learning. One builds directly over the data: it captures the patterns and regularities which allow for reliable predictions on new samples. The other starts from such an obtained source knowledge and focuses on how to generalize it to new target concepts: this is also known as learning to learn. Most of the existing machine learning methods stop at the first level and are capable of reliable future decisions only if a large number of training samples is available. This work is devoted to the second level of learning and focuses on how to transfer information from prior knowledge, exploiting it on a new learning problem with possibly scarce labeled data. We propose several algorithmic solutions by leveraging prior models or features. One possibility is to constrain any target learning model to be close to the linear combination of several source models. Alternatively, the prior knowledge can be used as an expert which judges the target samples and considers the obtained output as an extra feature descriptor. All the proposed approaches automatically evaluate the relevance of prior knowledge and decide from where and how much to transfer without any need for external supervision or heuristically hand-tuned parameters. A thorough experimental analysis shows the effectiveness of the defined methods both in the case of interclass transfer and for adaptation across different domains. The last part of this work is dedicated to moving knowledge transfer forward towards life-long learning. We show how to combine transfer and online learning to obtain a method which continuously processes new data, guided by information acquired in the past. We also present an approach to exploit the large variety of existing visual data resources every time it is necessary to solve a new situated learning problem. We propose an image representation that decomposes orthogonally into a specific and a generic part. The latter can be used as unbiased reference knowledge for future learning tasks.", "title": "" }, { "docid": "907883af0e81f4157e81facd4ff4344c", "text": "This work presents a low-power, low-cost CDR design for RapidIO SerDes. The design is based on a phase interpolator, which is controlled by a synthesized standard cell digital block. A half-rate architecture is adopted to lessen the problems in routing high-speed clocks and to reduce power. An improved half-rate bang-bang phase detector is presented to ensure the stability of the system. Moreover, the paper proposes a simplified control scheme for the phase interpolator to further reduce power and cost. The CDR takes an area of less than 0.05mm2, and post simulation shows that the CDR has an RMS jitter of UIpp/32 ([email protected]) and consumes 9.5mW at 3.125GBaud.", "title": "" }, { "docid": "165522fd4d416fa0b1aeef37f816b1e7", "text": "Tarsal tunnel syndrome, unlike its similar sounding counterpart in the hand, is a significantly misunderstood clinical entity. 
Confusion concerning the anatomy involved, the presenting symptomatology, the appropriateness and significance of various diagnostic tests, conservative and surgical management, and, finally, the variability of reported results of surgical intervention attests to the lack of consensus surrounding this condition. The terminology involved in various diagnoses for chronic heel pain is also a hodgepodge of poorly understood entities.", "title": "" }, { "docid": "d4d9948e170edd124c57742d91a5d021", "text": "The attribute set in an information system evolves in time when new information arrives. Both lower and upper approximations of a concept will change dynamically when attributes vary. Inspired by the former incremental algorithm in Pawlak rough sets, this paper focuses on new strategies of dynamically updating approximations in probabilistic rough sets and investigates four propositions of updating approximations under probabilistic rough sets. Two incremental algorithms based on adding attributes and deleting attributes under probabilistic rough sets are proposed, respectively. The experiments on five data sets from UCI and a genome data with thousand attributes validate the feasibility of the proposed incremental approaches. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f84e0d8892d0b9d0b108aa5dcf317037", "text": "We present a continuously adaptive, continuous query (CACQ) implementation based on the eddy query processing framework. We show that our design provides significant performance benefits over existing approaches to evaluating continuous queries, not only because of its adaptivity, but also because of the aggressive cross-query sharing of work and space that it enables. By breaking the abstraction of shared relational algebra expressions, our Telegraph CACQ implementation is able to share physical operators --- both selections and join state --- at a very fine grain. We augment these features with a grouped-filter index to simultaneously evaluate multiple selection predicates. We include measurements of the performance of our core system, along with a comparison to existing continuous query approaches.", "title": "" }, { "docid": "19a538b6a49be54b153b0a41b6226d1f", "text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.", "title": "" }, { "docid": "428c480be4ae3d2043c9f5485087c4af", "text": "Current difference-expansion (DE) embedding techniques perform one layer embedding in a difference image. 
They do not turn to the next difference image for another layer embedding unless the current difference image has no expandable differences left. The obvious disadvantage of these techniques is that image quality may have been severely degraded even before the later layer embedding begins, because the previous layer embedding has used up all expandable differences, including those with large magnitude. Based on the integer Haar wavelet transform, we propose a new DE embedding algorithm, which utilizes the horizontal as well as vertical difference images for data hiding. We introduce a dynamic expandable-difference search and selection mechanism. This mechanism gives even chances to small differences in the two difference images and effectively avoids the situation where the largest differences in the first difference image are used up while there is almost no chance to embed in the small differences of the second difference image. We also present an improved histogram-based difference selection and shifting scheme, which refines our algorithm and makes it resilient to different types of images. Compared with current algorithms, the proposed algorithm often has better embedding capacity versus image quality performance. The advantage of our algorithm is more obvious near the embedding rate of 0.5 bpp.", "title": "" } ]
scidocsrr
cc4e817d9ae057adac28d67ad193aec7
Manipulating Virtual Objects with Your Hands: A Case Study on Applying Desktop Augmented Reality at the Primary School
[ { "docid": "f0576fd779d494a64068ec6727af8926", "text": "In this paper we report on an initial survey of user evaluation techniques used in Augmented Reality (AR) research. To identify all papers which include AR evaluations we reviewed research publications between the years 1993 and 2007 from online databases of selected scientific publishers. Starting with a total of 6071 publications we filtered the articles in several steps which resulted in 165 AR related publications with user evaluations. These publications were classified in two different ways: according to the evaluation type used following an earlier literature survey classification scheme; and according to the evaluation methods or approach used. We present the results of out literature survey, provide a comprehensive list of references of the selected publications, and discuss some possible research opportunities for future work.", "title": "" } ]
[ { "docid": "8bfba0167301300a1886b00aff39b971", "text": "This paper discusses a tone pronunciation scoring system of Mandarin. It recognizes tones of syllables by using GMM model and uses the recognition results for tone assessment. Initially, experiment results are bad on strongly accented speech. There are two reasons: one is that the inaccurate force-alignment leads to incomplete F0 contours; the other is due to the special pattern of F0 contours. We propose several measures to the problems. The first is to make the extraction of F0 contour independent of the force-alignment. The second is to base the scoring on GMM posterior probabilities. The third is to use the same accented speech to train the GMM model. And the last is to train the fractionized bi-tone GMM models to cover tone changes in the multiplecharacter words. After these measures are taken, the tone scoring correct rate is improved from 60.2% to 83.3%.", "title": "" }, { "docid": "bb008d90a8e5ea4262afc0cf784ccbb8", "text": "*Correspondence to: Michaël Messaoudi; Email: [email protected] In a recent clinical study, we demonstrated in the general population that Lactobacillus helveticus R0052 and Bifidobacterium longum R0175 (PF) taken in combination for 30 days decreased the global scores of hospital anxiety and depression scale (HADs), and the global severity index of the Hopkins symptoms checklist (HSCL90), due to the decrease of the sub-scores of somatization, depression and angerhostility spheres. Therefore, oral intake of PF showed beneficial effects on anxiety and depression related behaviors in human volunteers. From there, it is interesting to focus on the role of this probiotic formulation in the subjects with the lowest urinary free cortisol levels at baseline. This addendum presents a secondary analysis of the effects of PF in a subpopulation of 25 subjects with urinary free cortisol (UFC) levels less than 50 ng/ml at baseline, on psychological distress based on the percentage of change of the perceived stress scale (PSs), the HADs and the HSCL-90 scores between baseline and follow-up. The data show that PF improves the same scores as in the general population (the HADs global score, the global severity index of the HSCL-90 and three of its sub-scores, i.e., somatization, depression and anger-hostility), as well as the PSs score and three other subscores of the HSCL-90, i.e., “obsessive compulsive,” “anxiety” and “paranoidideation.” Moreover, in the HSCL-90, Beneficial psychological effects of a probiotic formulation (Lactobacillus helveticus R0052 and Bifidobacterium longum R0175) in healthy human volunteers", "title": "" }, { "docid": "36b6c222587948357c275155b085ae6e", "text": "Deep Neural Networks (DNNs) require very large amounts of computation, and many different algorithms have been proposed to implement their most expensive layers, each of which has a large number of variants with different trade-offs of parallelism, locality, memory footprint, and execution time. In addition, specific algorithms operate much more efficiently on specialized data layouts. \n We state the problem of optimal primitive selection in the presence of data layout transformations, and show that it is NP-hard by demonstrating an embedding in the Partitioned Boolean Quadratic Assignment problem (PBQP). We propose an analytic solution via a PBQP solver, and evaluate our approach experimentally by optimizing several popular DNNs using a library of more than 70 DNN primitives, on an embedded platform and a general purpose platform. 
We show experimentally that significant gains are possible versus the state of the art vendor libraries by using a principled analytic solution to the problem of primitive selection in the presence of data layout transformations.", "title": "" }, { "docid": "09c50033443696a183dcdb1e0fc93cf0", "text": "In this paper, we introduce a novel FPGA architecture with memristor-based reconfiguration (mrFPGA). The proposed architecture is based on the existing CMOS-compatible memristor fabrication process. The programmable interconnects of mrFPGA use only memristors and metal wires so that the interconnects can be fabricated over logic blocks, resulting in significant reduction of overall area and interconnect delay but without using a 3D die-stacking process. Using memristors to build up the interconnects can also provide capacitance shielding from unused routing paths and reduce interconnect delay further. Moreover we propose an improved architecture that allows adaptive buffer insertion in interconnects to achieve more speedup. Compared to the fixed buffer pattern in conventional FPGAs, the positions of inserted buffers in mrFPGA are optimized on demand. A complete CAD flow is provided for mrFPGA, with an advanced P&R tool named mrVPR that was developed for mrFPGA. The tool can deal with the novel routing structure of mrFPGA, the memristor shielding effect, and the algorithm for optimal buffer insertion. We evaluate the area, performance and power consumption of mrFPGA based on the 20 largest MCNC benchmark circuits. Results show that mrFPGA achieves 5.18x area savings, 2.28x speedup and 1.63x power savings. Further improvement is expected with combination of 3D technologies and mrFPGA.", "title": "" }, { "docid": "ead92535c188bebd2285358c83fc0a07", "text": "BACKGROUND\nIndigenous peoples of Australia, Canada, United States and New Zealand experience disproportionately high rates of suicide. As such, the methodological quality of evaluations of suicide prevention interventions targeting these Indigenous populations should be rigorously examined, in order to determine the extent to which they are effective for reducing rates of Indigenous suicide and suicidal behaviours. This systematic review aims to: 1) identify published evaluations of suicide prevention interventions targeting Indigenous peoples in Australia, Canada, United States and New Zealand; 2) critique their methodological quality; and 3) describe their main characteristics.\n\n\nMETHODS\nA systematic search of 17 electronic databases and 13 websites for the period 1981-2012 (inclusive) was undertaken. The reference lists of reviews of suicide prevention interventions were hand-searched for additional relevant studies not identified by the electronic and web search. The methodological quality of evaluations of suicide prevention interventions was assessed using a standardised assessment tool.\n\n\nRESULTS\nNine evaluations of suicide prevention interventions were identified: five targeting Native Americans; three targeting Aboriginal Australians; and one First Nation Canadians. The main intervention strategies employed included: Community Prevention, Gatekeeper Training, and Education. Only three of the nine evaluations measured changes in rates of suicide or suicidal behaviour, all of which reported significant improvements. The methodological quality of evaluations was variable. 
Particular problems included weak study designs, reliance on self-report measures, highly variable consent and follow-up rates, and the absence of economic or cost analyses.\n\n\nCONCLUSIONS\nThere is an urgent need for an increase in the number of evaluations of preventive interventions targeting reductions in Indigenous suicide using methodologically rigorous study designs across geographically and culturally diverse Indigenous populations. Combining and tailoring best evidence and culturally-specific individual strategies into one coherent suicide prevention program for delivery to whole Indigenous communities and/or population groups at high risk of suicide offers considerable promise.", "title": "" }, { "docid": "ba3522be00805402629b4fb4a2c21cc4", "text": "Successful electronic government requires the successful implementation of technology. This book lays out a framework for understanding a system of decision processes that have been shown to be associated with the successful use of technology. Peter Weill and Jeanne Ross are based at the Center for Information Systems Research at MIT’s Sloan School of Management, which has been doing research on the management of information technology since 1974. Understanding how to make decisions about information technology has been a primary focus of the Center for decades. Weill and Ross’ book is based on two primary studies and a number of related projects. The more recent study is a survey of 256 organizations from the Americas, Europe, and Asia Pacific that was led by Peter Weill between 2001 and 2003. This work also included 32 case studies. The second study is a set of 40 case studies developed by Jeanne Ross between 1999 and 2003 that focused on the relationship between information technology (IT) architecture and business strategy. This work identified governance issues associated with IT and organizational change efforts. Three other projects undertaken by Weill, Ross, and others between 1998 and 2001 also contributed to the material described in the book. Most of this work is available through the CISR Web site, http://mitsloan.mit.edu/cisr/rmain.php. Taken together, these studies represent a substantial body of work on which to base the development of a frameBOOK REVIEW", "title": "" }, { "docid": "f262c85e241e0c6dd6eb472841284345", "text": "BACKGROUND\nWe evaluated the feasibility and tolerability of triple- versus double-drug chemotherapy in elderly patients with oesophagogastric cancer.\n\n\nMETHODS\nPatients aged 65 years or older with locally advanced or metastatic oesophagogastric cancer were stratified and randomised to infusional 5-FU, leucovorin and oxaliplatin without (FLO) or with docetaxel 50 mg/m(2) (FLOT) every 2 weeks. The study is registered at ClinicalTrials.gov, identifier NCT00737373.\n\n\nFINDINGS\nOne hundred and forty three (FLO, 71; FLOT, 72) patients with a median age of 70 years were enrolled. The triple combination was associated with more treatment-related National Cancer Institute Common Toxicity Criteria (NCI-CTC) grade 3/4 adverse events (FLOT, 81.9%; FLO, 38.6%; P<.001) and more patients experiencing a ≥10-points deterioration of European Organization for Research and Treatment of Cancer Quality of Life (EORTC QoL) global health status scores (FLOT, 47.5%; FLO 20.5%; p=.011). The triple combination was associated with more alopecia (P<.001), neutropenia (P<.001), leukopenia (P<.001), diarrhoea (P=.006) and nausea (P=.029).). 
No differences were observed in treatment duration and discontinuation due to toxicity, cumulative doses or toxic deaths between arms. The triple combination improved response rates and progression-free survival in the locally advanced subgroup and in the subgroup of patients aged between 65 and 70 years but not in the metastatic group or in patients aged 70 years and older.\n\n\nINTERPRETATION\nThe triple-drug chemotherapy was feasible in elderly patients with oesophagogastric cancer. However, toxicity was significantly increased and QoL deteriorated in a relevant proportion of patients.\n\n\nFUNDING\nThe study was partially funded by Sanofi-Aventis.", "title": "" }, { "docid": "e7bdf6d9a718127b5b9a94fed8afc0a5", "text": "BACKGROUND\nUse of the Internet for health information continues to grow rapidly, but its impact on health care is unclear. Concerns include whether patients' access to large volumes of information will improve their health; whether the variable quality of the information will have a deleterious effect; the effect on health disparities; and whether the physician-patient relationship will be improved as patients become more equal partners, or be damaged if physicians have difficulty adjusting to a new role.\n\n\nMETHODS\nTelephone survey of nationally representative sample of the American public, with oversample of people in poor health.\n\n\nRESULTS\nOf the 3209 respondents, 31% had looked for health information on the Internet in the past 12 months, 16% had found health information relevant to themselves and 8% had taken information from the Internet to their physician. Looking for information on the Internet showed a strong digital divide; however, once information had been looked for, socioeconomic factors did not predict other outcomes. Most (71%) people who took information to the physician wanted the physician's opinion, rather than a specific intervention. The effect of taking information to the physician on the physician-patient relationship was likely to be positive as long as the physician had adequate communication skills, and did not appear challenged by the patient bringing in information.\n\n\nCONCLUSIONS\nFor health information on the Internet to achieve its potential as a force for equity and patient well-being, actions are required to overcome the digital divide; assist the public in developing searching and appraisal skills; and ensure physicians have adequate communication skills.", "title": "" }, { "docid": "ce384939966654196aabbb076326c779", "text": "We address the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions. As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative. However, many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations. Here we focus on adversarial domain adaptation, deriving important findings about when it performs well and what properties of the domains are important in this regard. Our experiments with StackExchange data show an average improvement of 5.6% over the best baseline across multiple pairs of domains.", "title": "" }, { "docid": "0994181d27a8e5e851b1bb54ec00fd8e", "text": "Infrared (IR) light is invisible to humans, but cameras are optically sensitive to this type of light. 
In this paper, we show how attackers can use surveillance cameras and infrared light to establish bi-directional covert communication between the internal networks of organizations and remote attackers. We present two scenarios: exfiltration (leaking data out of the network) and infiltration (sending data into the network). Exfiltration. Surveillance and security cameras are equipped with IR LEDs, which are used for night vision. In the exfiltration scenario, malware within the organization access the surveillance cameras across the local network and controls the IR illumination. Sensitive data such as PIN codes, passwords, and encryption keys are then modulated, encoded, and transmitted over the IR signals. Infiltration. In an infiltration scenario, an attacker standing in a public area (e.g., in the street) uses IR LEDs to transmit hidden signals to the surveillance camera(s). Binary data such as command and control (C&C) and beacon messages are encoded on top of the IR signals. The exfiltration and infiltration can be combined to establish bidirectional, 'air-gap' communication between the compromised network and the attacker. We discuss related work and provide scientific background about this optical channel. We implement a malware prototype and present data modulation schemas and a basic transmission protocol. Our evaluation of the covert channel shows that data can be covertly exfiltrated from an organization at a rate of 20 bit/sec per surveillance camera to a distance of tens of meters away. Data can be covertly infiltrated into an organization at a rate of over 100 bit/sec per surveillance camera from a distance of hundreds of meters to kilometers away.", "title": "" }, { "docid": "4ee6894fade929db82af9cb62fecc0f9", "text": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.", "title": "" }, { "docid": "dfa611e19a3827c66ea863041a3ef1e2", "text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. 
We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.", "title": "" }, { "docid": "1b2144bca7146dcb8f99990159be47f6", "text": "We propose an object detection system that depends on position-sensitive grid feature maps. State-of-the-art object detection networks rely on convolutional neural networks pre-trained on a large auxiliary data set (e.g., ILSVRC 2012) designed for an image-level classification task. The image-level classification task favors translation invariance, while the object detection task needs localization representations that are translation variant to an extent. To address this dilemma, we construct position-sensitive convolutional layers, called grid convolutional layers that activate the object’s specific locations in the feature maps in the form of grids. With end-to-end training, the region of interesting grid pooling layer shepherds the last set of convolutional layers to learn specialized grid feature maps. Experiments on the PASCAL VOC 2007 data set show that our method outperforms the strong baselines faster region-based convolutional neural network counterpart and region-based fully convolutional networks by a large margin. Our method applied to ResNet-50 improves the mean average precision from 74.8%/74.2% to 79.4% without any other tricks. In addition, our approach achieves similar results on different networks (ResNet-101) and data sets (PASCAL VOC 2012 and MS COCO).", "title": "" }, { "docid": "60c8a335245e28f2a9ac24edd73eee5a", "text": "Papulopustular rosacea (PPR) is a common facial skin disease, characterized by erythema, telangiectasia, papules and pustules. Its physiopathology is still being discussed, but recently several molecular features of its inflammatory process have been identified: an overproduction of Toll-Like receptors 2, of a serine protease, and of abnormal forms of cathelicidin. The two factors which stimulate the Toll-like receptors to induce cathelicidin expression are skin infection and cutaneous barrier disruption: these two conditions are, at least theoretically, fulfilled by Demodex, which is present in high density in PPR and creates epithelial breaches by eating cells. So, the major pathogenic mechanisms of Demodex and its role in PPR are reviewed here in the context of these recent discoveries. In this review, the inflammatory process of PPR appears to be a consequence of the proliferation of Demodex, and strongly supports the hypothesis that: (1) in the first stage a specific (innate or acquired) immune defect against Demodex allows the proliferation of the mite; (2) in the second stage, probably when some mites penetrate into the dermis, the immune system is suddenly stimulated and gives rise to an exaggerated immune response against the Demodex, resulting in the papules and the pustules of the rosacea. 
In this context, it would be very interesting to study the immune molecular features of this first stage, named \"pityriasis folliculorum\", where the Demodex proliferate profusely with no, or a low immune reaction from the host: this entity appears to be a missing link in the understanding of rosacea.", "title": "" }, { "docid": "f37623a4f7a1b7b328883ab016e1b285", "text": "Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder–decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-ofspeech tags, and syntactic dependency labels as input features to English↔German and English→Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An opensource implementation of our neural MT system is available1, as are sample files and configurations2.", "title": "" }, { "docid": "b89f2c70e3c9e2258c2cdf3f9b2bfb1b", "text": "One-size-fits-all protocols are hard to achieve in Byzantine fault tolerance (BFT). As an alternative, BFT users, e.g., enterprises, need an easy and efficient method to choose the most convenient protocol that matches their preferences best. The various BFT protocols that have been proposed so far differ significantly in their characteristics and performance which makes choosing the ‘preferred’ protocol hard. In addition, if the state of the deployed system is too fluctuating, then perhaps using multiple protocols at once is needed; this requires a dynamic selection mechanism to move from one protocol to another. In this paper, we present the first BFT selection model and algorithm that can be used to choose the most convenient protocol according to user preferences. The selection algorithm applies some mathematical formulas to make the selection process easy and automatic. The algorithm operates in three modes: Static, Dynamic, and Heuristic. The Static mode addresses the cases where a single protocol is needed; the Dynamic mode assumes that the system conditions are quite fluctuating and thus requires runtime decisions, and the Heuristic mode is similar to the Dynamic mode but it uses additional heuristics to improve user choices. We give some examples to describe how selection occurs. We show that our approach is automated, easy, and yields reasonable results that match reality. To the best of our knowledge, this is the first work that addresses selection in BFT.", "title": "" }, { "docid": "39796ec6b42521ee4e45cc4ed851133c", "text": "Scavenging the idling computation resources at the enormous number of mobile devices, ranging from small IoT devices to powerful laptop computers, can provide a powerful platform for local mobile cloud computing. The vision can be realized by peer-to-peer cooperative computing between edge devices, which is called co-computing and the theme of this paper. We consider a co-computing system where a user offloads computation of input-data to a helper. 
The helper controls the offloading process based on a predicted CPU-idling profile and the objective of minimizing the user’s energy consumption. Consider the scenario that the user has one-shot input-data arrival and the helper buffers offloaded bits. The derived solution for the optimal offloading control has an interesting graphical interpretation as follows. In the plane of user’s co-computable bits (by offloading) versus time, a so-called offloading feasibility tunnel can be constructed that constrains the range of offloaded bits at any time instant. The existence of the tunnel arises from the helper’s CPU-idling profile and buffer size. Given the tunnel, the optimal offloading is shown to be achieved by the well-known “string-pulling” strategy, graphically referring to pulling a string across the tunnel. Furthermore, we show that the problem of optimal data partitioning for offloading and local computing at the user is convex, admitting a simple solution using the sub-gradient method. Last, the developed design approach for co-computing is extended to the scenario of bursty data arrivals at the user. The approach is modified by defining a new offloading feasibility tunnel that accounts for bursty data arrivals. Index Terms Mobile cooperative computing, energy-efficient transmission, D2D communication, computation offloading, mobile-edge computing, fog computing. C. You and K. Huang are with the Dept. of EEE at The University of Hong Kong, Hong Kong (Email: [email protected], [email protected]). ar X iv :1 70 4. 04 59 5v 3 [ cs .I T ] 2 5 A pr 2 01 7", "title": "" }, { "docid": "e8eab2f5481f10201bc82b7a606c1540", "text": "This survey covers the historical development and current state of the art in image understanding for iris biometrics. Most research publications can be categorized as making their primary contribution to one of the four major modules in iris biometrics: image acquisition, iris segmentation, texture analysis and matching of texture representations. Other important research includes experimental evaluations, image databases, applications and systems, and medical conditions that may affect the iris. We also suggest a short list of recommended readings for someone new to the field to quickly grasp the big picture of iris biometrics.", "title": "" }, { "docid": "ff0837ae319f4a40fdd58b91947447d7", "text": "Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, allowing to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text's category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. 
This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications.", "title": "" } ]
scidocsrr
5668ae6929813bd46500197605a6f1f2
Trust and Reputation Systems
[ { "docid": "f3a044835e9cbd0c13218ab0f9c06dd1", "text": "Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface.", "title": "" }, { "docid": "0a97c254e5218637235a7e23597f572b", "text": "We investigate the design of a reputation system for decentralized unstructured P2P networks like Gnutella. Having reliable reputation information about peers can form the basis of an incentive system and can guide peers in their decision making (e.g., who to download a file from). The reputation system uses objective criteria to track each peer's contribution in the system and allows peers to store their reputations locally. Reputation are computed using either of the two schemes, debit-credit reputation computation (DCRC) and credit-only reputation computation (CORC). Using a reputation computation agent (RCA), we design a public key based mechanism that periodically updates the peer reputations in a secure, light-weight, and partially distributed manner. We evaluate using simulations the performance tradeoffs inherent in the design of our system.", "title": "" } ]
[ { "docid": "dd14f9eb9a9e0e4e0d24527cf80d04f4", "text": "The growing popularity of microblogging websites has transformed these into rich resources for sentiment mining. Even though opinion mining has more than a decade of research to boost about, it is mostly confined to the exploration of formal text patterns like online reviews, news articles etc. Exploration of the challenges offered by informal and crisp microblogging have taken roots but there is scope for a large way ahead. The proposed work aims at developing a hybrid model for sentiment classification that explores the tweet specific features and uses domain independent and domain specific lexicons to offer a domain oriented approach and hence analyze and extract the consumer sentiment towards popular smart phone brands over the past few years. The experiments have proved that the results improve by around 2 points on an average over the unigram baseline.", "title": "" }, { "docid": "305cfc6824ec7ac30a08ade2fff66c13", "text": "Psychological research has shown that 'peak-end' effects influence people's retrospective evaluation of hedonic and affective experience. Rather than objectively reviewing the total amount of pleasure or pain during an experience, people's evaluation is shaped by the most intense moment (the peak) and the final moment (end). We describe an experiment demonstrating that peak-end effects can influence a user's preference for interaction sequences that are objectively identical in their overall requirements. Participants were asked to choose which of two interactive sequences of five pages they preferred. Both sequences required setting a total of 25 sliders to target values, and differed only in the distribution of the sliders across the five pages -- with one sequence intended to induce positive peak-end effects, the other negative. The study found that manipulating only the peak or the end of the series did not significantly change preference, but that a combined manipulation of both peak and end did lead to significant differences in preference, even though all series had the same overall effort.", "title": "" }, { "docid": "1cd77d97f27b45d903ffcecda02795a5", "text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. 
For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.", "title": "" }, { "docid": "0dd78cb46f6d2ddc475fd887a0dc687c", "text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.", "title": "" }, { "docid": "ee0ba4a70bfa4f53d33a31b2d9063e89", "text": "Since the identification of long-range dependence in network traffic ten years ago, its consistent appearance across numerous measurement studies has largely discredited Poisson-based models. However, since that original data set was collected, both link speeds and the number of Internet-connected hosts have increased by more than three orders of magnitude. Thus, we now revisit the Poisson assumption, by studying a combination of historical traces and new measurements obtained from a major backbone link belonging to a Tier 1 ISP. We show that unlike the older data sets, current network traffic can be well represented by the Poisson model for sub-second time scales. At multisecond scales, we find a distinctive piecewise-linear nonstationarity, together with evidence of long-range dependence. Combining our observations across both time scales leads to a time-dependent Poisson characterization of network traffic that, when viewed across very long time scales, exhibits the observed long-range dependence. This traffic characterization reconciliates the seemingly contradicting observations of Poisson and long-memory traffic characteristics. It also seems to be in general agreement with recent theoretical models for large-scale traffic aggregation", "title": "" }, { "docid": "3d0c8e3539dd8f5120a404836020133d", "text": "Regenerative braking system is the own system of electric and hybrid electric vehicle. The system can restore the kinetic energy and potential energy, used during start and accelerating, into battery through electrical machine. The total brake force is composed of friction brake force on front axel, friction brake force on rear axel and regenerative brake force when a vehicle equipped with regenerative braking system brakes. A control strategy, parallel regenerative brake strategy, was proposed to resolve the distribution of the three forces. The parallel regenerative brake strategy was optimized on Saturn SL1 and then simulated under urban 15 drive cycle. 
As a result, through optimization the parallel brake strategy is not only safe enough but also can restore the largest amount of the brake energy.", "title": "" }, { "docid": "5b617701a4f2fa324ca7e3e7922ce1c4", "text": "Open circuit voltage of a silicon solar cell is around 0.6V. A solar module is constructed by connecting a number of cells in series to get a practically usable voltage. Partial shading of a Solar Photovoltaic Module (SPM) is one of the main causes of overheating of shaded cells and reduced energy yield of the module. The present work is a study of harmful effects of partial shading on the performance of a PV module. A PSPICE simulation model that represents 36 cells PV module under partial shaded conditions has been used to test several shading profiles and results are presented.", "title": "" }, { "docid": "7e884438ee8459a441cbe1500f1bac88", "text": "We consider the problem of autonomously flying Miniature Aerial Vehicles (MAVs) in indoor environments such as home and office buildings. The primary long range sensor in these MAVs is a miniature camera. While previous approaches first try to build a 3D model in order to do planning and control, our method neither attempts to build nor requires a 3D model. Instead, our method first classifies the type of indoor environment the MAV is in, and then uses vision algorithms based on perspective cues to estimate the desired direction to fly. We test our method on two MAV platforms: a co-axial miniature helicopter and a toy quadrotor. Our experiments show that our vision algorithms are quite reliable, and they enable our MAVs to fly in a variety of corridors and staircases.", "title": "" }, { "docid": "cab56ff19b08af38eb1812a4f3e32d04", "text": "To ensure security, it is important to build-in security in both the planning and the design phases and adapt a security architecture which makes sure that regular and security related tasks, are deployed correctly. Security requirements must be linked to the business goals. We identified four domains that affect security at an organization namely, organization governance, organizational culture, the architecture of the systems, and service management. In order to identify and explore the strength and weaknesses of particular organization’s security, a wide range model has been developed. This model is proposed as an information security maturity model (ISMM) and it is intended as a tool to evaluate the ability of organizations to meet the objectives of security.", "title": "" }, { "docid": "9a136517edbfce2a7c6b302da9e6c5b7", "text": "This paper presents our approach to semantic relatedness and textual entailment subtasks organized as task 1 in SemEval 2014. Specifically, we address two questions: (1) Can we solve these two subtasks together? (2) Are features proposed for textual entailment task still effective for semantic relatedness task? To address them, we extracted seven types of features including text difference measures proposed in entailment judgement subtask, as well as common text similarity measures used in both subtasks. Then we exploited the same feature set to solve the both subtasks by considering them as a regression and a classification task respectively and performed a study of influence of different features. 
We achieved the first and the second rank for relatedness and entailment task respectively.", "title": "" }, { "docid": "7a37df81ad70697549e6da33384b4f19", "text": "Water scarcity is now one of the major global crises, which has affected many aspects of human health, industrial development and ecosystem stability. To overcome this issue, water desalination has been employed. It is a process to remove salt and other minerals from saline water, and it covers a variety of approaches from traditional distillation to the well-established reverse osmosis. Although current water desalination methods can effectively provide fresh water, they are becoming increasingly controversial due to their adverse environmental impacts including high energy intensity and highly concentrated brine waste. For millions of years, microorganisms, the masters of adaptation, have survived on Earth without the excessive use of energy and resources or compromising their ambient environment. This has encouraged scientists to study the possibility of using biological processes for seawater desalination and the field has been exponentially growing ever since. Here, the term biodesalination is offered to cover all of the techniques which have their roots in biology for producing fresh water from saline solution. In addition to reviewing and categorizing biodesalination processes for the first time, this review also reveals unexplored research areas in biodesalination having potential to be used in water treatment.", "title": "" }, { "docid": "26b1c00522009440c0481453e0f6331c", "text": "Software organizations that develop their software products using the agile software processes such as Extreme Programming (XP) face a number of challenges in their effort to demonstrate that their process activities conform to ISO 9001 requirements, a major one being product traceability: software organizations must provide evidence of ISO 9001 conformity, and they need to develop their own procedures, tools, and methodologies to do so. This paper proposes an auditing model for ISO 9001 traceability requirements that is applicable in agile (XP) environments. The design of our model is based on evaluation theory, and includes the use of several auditing “yardsticks” derived from the principles of engineering design, the SWEBOK Guide, and the CMMI-DEV guidelines for requirement management and traceability for each yardstick. Finally, five approaches for agile-XP traceability approaches are audited based on the proposed audit model.", "title": "" }, { "docid": "925efe54f311b78ecd83419c1ad0f783", "text": "Bayesian neural networks (BNNs) allow us to reason about uncertainty in a principled way. Stochastic Gradient Langevin Dynamics (SGLD) enables efficient BNN learning by drawing samples from the BNN posterior using mini-batches. However, SGLD and its extensions require storage of many copies of the model parameters, a potentially prohibitive cost, especially for large neural networks. We propose a framework, Adversarial Posterior Distillation, to distill the SGLD samples using a Generative Adversarial Network (GAN). At test-time, samples are generated by the GAN. We show that this distillation framework incurs no loss in performance on recent BNN applications including anomaly detection, active learning, and defense against adversarial attacks. By construction, our framework distills not only the Bayesian predictive distribution, but the posterior itself. 
This allows one to compute quantities such as the approximate model variance, which is useful in downstream tasks. To our knowledge, these are the first results applying MCMC-based BNNs to the aforementioned applications.", "title": "" }, { "docid": "bd7f3decfe769db61f0577a60e39a26f", "text": "Automated food and drink recognition methods connect to cloud-based lookup databases (e.g., food item barcodes, previously identified food images, or previously classified NIR (Near Infrared) spectra of food and drink items databases) to match and identify a scanned food or drink item, and report the results back to the user. However, these methods remain of limited value if we cannot further reason with the identified food and drink items, ingredients and quantities/portion sizes in a proposed meal in various contexts; i.e., understand from a semantic perspective their types, properties, and interrelationships in the context of a given user's health condition and preferences. In this paper, we review a number of “food ontologies”, such as the Food Products Ontology/FOODpedia (by Kolchin and Zamula), Open Food Facts (by Gigandet et al.), FoodWiki (Ontology-driven Mobile Safe Food Consumption System by Celik), FOODS-Diabetes Edition (A Food-Oriented Ontology-Driven System by Snae Namahoot and Bruckner), and AGROVOC multilingual agricultural thesaurus (by the UN Food and Agriculture Organization—FAO). These food ontologies, with appropriate modifications (or as a basis, to be added to and further expanded) and together with other relevant non-food ontologies (e.g., about diet-sensitive disease conditions), can supplement the aforementioned lookup databases to enable progression from the mere automated identification of food and drinks in our meals to a more useful application whereby we can automatically reason with the identified food and drink items and their details (quantities and ingredients/bromatological composition) in order to better assist users in making the correct, healthy food and drink choices for their particular health condition, age, body weight/BMI (Body Mass Index), lifestyle and preferences, etc.", "title": "" }, { "docid": "f6553bf60969c422a07e1260a35b10c9", "text": "Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. 
It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.", "title": "" }, { "docid": "4d24a09dcbac1cc33a88bbabc89102d8", "text": "Streaming data analysis in real time is becoming the fastest and most efficient way to obtain useful knowledge from what is happening now, allowing organizations to react quickly when problems appear or to detect new trends helping to improve their performance. Evolving data streams are contributing to the growth of data created over the last few years. We are creating the same quantity of data every two days, as we created from the dawn of time up until 2003. Evolving data streams methods are becoming a low-cost, green methodology for real time online prediction and analysis. We discuss the current and future trends of mining evolving data streams, and the challenges that the field will have to overcome during the next years.", "title": "" }, { "docid": "011fd6ee57ffd223c0e1a29b3a7ecad1", "text": "A substrate-integrated waveguide (SIW) H-plane sectoral horn antenna, with significantly improved bandwidth, is presented. A tapered ridge, consists of a simple arrangement of vias on the side flared wall within the multilayer substrate, is introduced to enlarge the operational bandwidth. A simple feed configuration is suggested to provide the propagating wave for the antenna structure. The proposed antenna is simulated by two well-known full wave packages, the Ansoft HFSS and the CST microwave studio. Close agreement between both simulation results is obtained. The designed antenna shows good radiation characteristics and low VSWR, lower than 2.5, for the whole frequency range of 18– 40 GHz.", "title": "" }, { "docid": "c91fe61e7ef90867377940644b566d93", "text": "The adoption of Learning Management Systems to create virtual learning communities is a unstructured form of allowing collaboration that is rapidly growing. Compared to other systems that structure interactions, these environments provide data of the interaction performed at a very low level. For assessment purposes, this fact poses some difficulties to derive higher lever indicators of collaboration. In this paper we propose to shape the analysis problem as a data mining task. We suggest that the typical data mining cycle bears many resemblances with proposed models for collaboration management. We present some preliminary experiments using clustering to discover patterns reflecting user behaviors. Results are very encouraging and suggest several research directions.", "title": "" }, { "docid": "41c5a41b0bebcdb5b744d0ac9d0ed0f6", "text": "For research to progress most effectively, we first should establish common ground regarding just what is the problem that imbalanced data sets present to machine learning systems. Why and when should imbalanced data sets be problematic? When is the problem simply an artifact of easily rectified design choices? I will try to pick the low-hanging fruit and share them with the rest of the workshop participants. Specifically, I would like to discuss what the problem is not. I hope this will lead to a profitable discussion of what the problem indeed is, and how it might be addressed most effectively. A common notion in machine learning causes the most basic problem, and indeed often has stymied both research-oriented and practical attempts to learn from imbalanced data sets. 
Fortunately the problem is straightforward to fix. The stumbling block is the notion that an inductive learner produces a black box that acts as a categorical (e.g., binary) labeling function. Of course, many of our learning algorithms in fact do produce such classifiers, which gets us into trouble when faced with imbalanced class distributions. The assumptions built into (most of) these algorithms are: 1. that maximizing accuracy is the goal, and 2. that, in use, the classifier will operate on data drawn from the same distribution as the training data. The result of these two assumptions is that machine learning on unbalanced data sets produces unsatisfactory classifiers. The reason why should be clear: if 99% of the data are from one class, for most realistic problems a learning algorithm will be hard pressed to do better than the 99% accuracy achievable by the trivial classifier that labels everything with the majority class. Based on the underlying assumptions, this is the intelligent thing to do. It is more striking when one of our algorithms, operating under these assumptions, behaves otherwise. This apparent problem nothwithstanding, it would be premature to conclude that there is a fundamental difficulty with learning from imbalanced data sets. We first must probe deeper and ask whether the algorithms are robust to the weakening of the assumptions that cause the problem. When designing algorithms, some assumptions are fundamental. Changing them would entail redesigning the algorithm completely. Other assumptions are made for convenience, and can be changed with little consequence. So, which is the nature of the assumptions (1 & 2) in question? Investigating this is (tacitly perhaps) one of the main …", "title": "" }, { "docid": "d3d5f135cc2a09bf0dfc1ef88c6089b5", "text": "In this paper, we present the Expert Hub System, which was designed to help governmental structures find the best experts in different areas of expertise for better reviewing of the incoming grant proposals. In order to define the areas of expertise with topic modeling and clustering, and then to relate experts to corresponding areas of expertise and rank them according to their proficiency in certain areas of expertise, the Expert Hub approach uses the data from the Directorate of Science and Technology Programmes. Furthermore, the paper discusses the use of Big Data and Machine Learning in the Russian government", "title": "" } ]
scidocsrr
5bf884c4a8bf5ebdbcac8cac94b0a2f5
Joint Subcarrier and CPU Time Allocation for Mobile Edge Computing
[ { "docid": "f58a1b5f4c914a0ab3fcf3e2a8820e45", "text": "This paper provides a theoretical framework of energy-optimal mobile cloud computing under stochastic wireless channel. Our objective is to conserve energy for the mobile device, by optimally executing mobile applications in the mobile device (i.e., mobile execution) or offloading to the cloud (i.e., cloud execution). One can, in the former case sequentially reconfigure the CPU frequency; or in the latter case dynamically vary the data transmission rate to the cloud, in response to the stochastic channel condition. We formulate both scheduling problems as constrained optimization problems, and obtain closed-form solutions for optimal scheduling policies. Furthermore, for the energy-optimal execution strategy of applications with small output data (e.g., CloudAV), we derive a threshold policy, which states that the data consumption rate, defined as the ratio between the data size (L) and the delay constraint (T), is compared to a threshold which depends on both the energy consumption model and the wireless channel model. Finally, numerical results suggest that a significant amount of energy can be saved for the mobile device by optimally offloading mobile applications to the cloud in some cases. Our theoretical framework and numerical investigations will shed lights on system implementation of mobile cloud computing under stochastic wireless channel.", "title": "" }, { "docid": "0cbd3587fe466a13847e94e29bb11524", "text": "The cloud heralds a new era of computing where application services are provided through the Internet. Cloud computing can enhance the computing capability of mobile systems, but is it the ultimate solution for extending such systems' battery lifetimes?", "title": "" }, { "docid": "335a330d7c02f13c0f50823461f4e86f", "text": "Migrating computational intensive tasks from mobile devices to more resourceful cloud servers is a promising technique to increase the computational capacity of mobile devices while saving their battery energy. In this paper, we consider an MIMO multicell system where multiple mobile users (MUs) ask for computation offloading to a common cloud server. We formulate the offloading problem as the joint optimization of the radio resources-the transmit precoding matrices of the MUs-and the computational resources-the CPU cycles/second assigned by the cloud to each MU-in order to minimize the overall users' energy consumption, while meeting latency constraints. The resulting optimization problem is nonconvex (in the objective function and constraints). Nevertheless, in the single-user case, we are able to compute the global optimal solution in closed form. In the more challenging multiuser scenario, we propose an iterative algorithm, based on a novel successive convex approximation technique, converging to a local optimal solution of the original nonconvex problem. We then show that the proposed algorithmic framework naturally leads to a distributed and parallel implementation across the radio access points, requiring only a limited coordination/signaling with the cloud. Numerical results show that the proposed schemes outperform disjoint optimization algorithms.", "title": "" } ]
[ { "docid": "0d6e5e20d6a909a6450671feeb4ac261", "text": "Rita bakalu, a new species, is described from the Godavari river system in peninsular India. With this finding, the genus Rita is enlarged to include seven species, comprising six species found in South Asia, R. rita, R. macracanthus, R. gogra, R. chrysea, R. kuturnee, R. bakalu, and one species R. sacerdotum from Southeast Asia. R. bakalu is distinguished from its congeners by a combination of the following characters: eye diameter 28–39% HL and 20–22 caudal fin rays; teeth in upper jaw uniformly villiform in two patches, interrupted at the midline; palatal teeth well-developed villiform, in two distinct patches located at the edge of the palate. The mtDNA cytochrome C oxidase I sequence analysis confirmed that the R. bakalu is distinct from the other congeners of Rita. Superficially, R. bakalu resembles R. kuturnee, reported from the Godavari and Krishna river systems; however, the two species are discriminated due to differences in the structure of their teeth patches on upper jaw and palate, anal fin originating before the origin of adipose fin, comparatively larger eye diameter, longer mandibular barbels, and vertebral count. The results conclude that the river Godavari harbors a different species of Rita, R. bakalu which is new to science.", "title": "" }, { "docid": "5c90cd6c4322c30efb90589b1a65192e", "text": "The sure thing principle and the law of total probability are basic laws in classic probability theory. A disjunction fallacy leads to the violation of these two classical laws. In this paper, an Evidential Markov (EM) decision making model based on Dempster-Shafer (D-S) evidence theory and Markov modelling is proposed to address this issue and model the real human decision-making process. In an evidential framework, the states are extended by introducing an uncertain state which represents the hesitance of a decision maker. The classical Markov model can not produce the disjunction effect, which assumes that a decision has to be certain at one time. However, the state is allowed to be uncertain in the EM model before the final decision is made. An extra uncertainty degree parameter is defined by a belief entropy, named Deng entropy, to assignment the basic probability assignment of the uncertain state, which is the key to predict the disjunction effect. A classical categorization decision-making experiment is used to illustrate the effectiveness and validity of EM model. The disjunction effect can be well predicted ∗Corresponding author at Wen Jiang: School of Electronics and Information, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, China. Tel: (86-29)88431267. E-mail address: [email protected], [email protected] Preprint submitted to Elsevier May 19, 2017 and the free parameters are less compared with the existing models.", "title": "" }, { "docid": "b09d23c24625dc17e351d79ce88405b8", "text": "-This paper presents an overview of feature extraction methods for off-line recognition of segmented (isolated) characters. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance in character recognition systems. Different feature extraction methods are designed for different representations 6f the characters, such as solid binary characters, character contours, skeletons (thinned characters) or gray-level subimages of each individual character. 
The feature extraction methods are discussed in terms of invariance properties, reconstructability and expected distortions and variability of the characters. The problem of choosing the appropriate feature extraction method for a given application is also discussed. When a few promising feature extraction methods have been identified, they need to be evaluated experimentally to find the best method for the given application. Keywords: Feature extraction; Optical character recognition; Character representation; Invariance; Reconstructability. I. INTRODUCTION Optical character recognition (OCR) is one of the most successful applications of automatic pattern recognition. Since the mid 1950s, OCR has been a very active field for research and development. Today, reasonably good OCR packages can be bought for as little as $100. However, these are only able to recognize high quality printed text documents or neatly written handprinted text. The current research in OCR is now addressing documents that are not well handled by the available systems, including severely degraded, omnifont machine-printed text and (unconstrained) handwritten text. Also, efforts are being made to achieve lower substitution error rates and reject rates even on good quality machine-printed text, since an experienced human typist still has a much lower error rate, albeit at a slower speed. Selection of a feature extraction method is probably the single most important factor in achieving high recognition performance. Our own interest in character recognition is to recognize hand-printed digits in hydrographic maps (Fig. 1), but we have tried not to emphasize this particular application in the paper. Given the large number of feature extraction methods reported in the literature, a newcomer to the field is faced with the following question: which feature extraction method is the best for a given application? This question led us to characterize the available feature extraction methods, so that the most promising methods could be sorted out. An experimental evaluation of these few promising methods must still be performed to select the best method for a specific application. In this process, one might find that a specific feature extraction method needs to be further developed. A full performance evaluation of each method in terms of classification accuracy and speed is not within the scope of this review paper. In order to study performance issues, we will have to implement all the feature extraction methods, which is an enormous task. In addition, the performance also depends on the type of classifier used. Different feature types may need different types of classifiers. Also, the classification results reported in the literature are not comparable because they are based on different data sets. Given the vast number of papers published on OCR every year, it is impossible to include all the available feature extraction methods in this survey. Instead, we have tried to make a representative selection to illustrate the different principles that can be used. Two-dimensional (2-D) object classification has several applications in addition to character recognition.
These include airplane recognition, recognition of mechanical parts and tools, and tissue classification in medical imaging. Several of the feature extraction techniques described in this paper for OCR have also been found to be useful in such applications.", "title": "" }, { "docid": "aace50c8446403a9f72b24bce1e88c30", "text": "This paper presents a model-driven approach to the development of web applications based on the Ubiquitous Web Application (UWA) design framework, the Model-View-Controller (MVC) architectural pattern and the JavaServer Faces technology. The approach combines a complete and robust methodology for the user-centered conceptual design of web applications with the MVC metaphor, which improves separation of business logic and data presentation. The proposed approach, by carrying the advantages of Model-Driven Development (MDD) and user-centered design, produces Web applications which are of high quality from the user's point of view and easier to maintain and evolve.", "title": "" }, { "docid": "65914e9526e1e765d11a9faf8f530f23", "text": "Named Entity Recognition (NER) is a tough task in Chinese social media due to a large portion of informal writings. Existing research uses only limited in-domain annotated data and achieves low performance. In this paper, we utilize both limited in-domain data and enough out-of-domain data using a domain adaptation method. We propose a multichannel LSTM-CRF model that employs different channels to capture general patterns, in-domain patterns and out-of-domain patterns in Chinese social media. The extensive experiments show that our model yields 9.8% improvement over previous state-of-the-art methods. We further find that a shared embedding layer is important and randomly initialized embeddings are better than the pretrained ones.", "title": "" }, { "docid": "7574373f4082ed5245cb1107d1917192", "text": "Heat exchanger system is widely used in chemical plants because it can sustain wide range of temperature and pressure. The main purpose of a heat exchanger system is to transfer heat from a hot fluid to a cooler fluid, so temperature control of outlet fluid is of prime importance. To control the temperature of outlet fluid of the heat exchanger system a conventional PID controller can be used. Due to inherent disadvantages of conventional control techniques, model based control technique is employed and an internal model based PID controller is developed to control the temperature of outlet fluid of the heat exchanger system. The designed controller regulates the temperature of the outgoing fluid to a desired set point in the shortest possible time irrespective of load and process disturbances, equipment saturation and nonlinearity. The developed internal model based PID controller has demonstrated 84% improvement in the overshoot and 44.6% improvement in settling time as compared to the", "title": "" }, { "docid": "505a9b6139e8cbf759652dc81f989de9", "text": "SQL injection attacks, a class of injection flaw in which specially crafted input strings lead to illegal queries to databases, are one of the topmost threats to web applications. A number of research prototypes and commercial products that maintain the queries structure in web applications have been developed. But these techniques either fail to address the full scope of the problem or have limitations. Based on our observation that the injected string in a SQL injection attack is interpreted differently on different databases.
A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Pattern matching is a technique that can be used to identify or detect any anomaly packet from a sequential action. Injection attack is a method that can inject any kind of malicious string or anomaly string on the original string. Most of the pattern based techniques are used static analysis and patterns are generated from the attacked statements. In this paper, we proposed a detection and prevention technique for preventing SQL Injection Attack (SQLIA) using Aho–Corasick pattern matching algorithm. In this paper, we proposed an overview of the architecture. In the initial stage evaluation, we consider some sample of standard attack patterns and it shows that the proposed algorithm is works well against the SQL Injection Attack. Keywords—SQL Injection Attack; Pattern matching; Static Pattern; Dynamic Pattern", "title": "" }, { "docid": "9697137a72f41fb4fb841e4e1b41be62", "text": "Cast shadows are an informative cue to the shape of objects. They are particularly valuable for discovering object’s concavities which are not available from other cues such as occluding boundaries. We propose a new method for recovering shape from shadows which we call shadow carving. Given a conservative estimate of the volume occupied by an object, it is possible to identify and carve away regions of this volume that are inconsistent with the observed pattern of shadows. We prove a theorem that guarantees that when these regions are carved away from the shape, the shape still remains conservative. Shadow carving overcomes limitations of previous studies on shape from shadows because it is robust with respect to errors in shadows detection and it allows the reconstruction of objects in the round, rather than just bas-reliefs. We propose a reconstruction system to recover shape from silhouettes and shadow carving. The silhouettes are used to reconstruct the initial conservative estimate of the object’s shape and shadow carving is used to carve out the concavities. We have simulated our reconstruction system with a commercial rendering package to explore the design parameters and assess the accuracy of the reconstruction. We have also implemented our reconstruction scheme in a table-top system and present the results of scanning of several objects.", "title": "" }, { "docid": "18e019622188ab6ddb2beca69d51e1c9", "text": "The rhesus macaque (Macaca mulatta) is the most utilized primate model in the biomedical and psychological sciences. Expressive behavior is of interest to scientists studying these animals, both as a direct variable (modeling neuropsychiatric disease, where expressivity is a primary deficit), as an indirect measure of health and welfare, and also in order to understand the evolution of communication. Here, intramuscular electrical stimulation of facial muscles was conducted in the rhesus macaque in order to document the relative contribution of each muscle to the range of facial movements and to compare the expressive function of homologous muscles in humans, chimpanzees and macaques. Despite published accounts that monkeys possess less differentiated and less complex facial musculature, the majority of muscles previously identified in humans and chimpanzees were stimulated successfully in the rhesus macaque and caused similar appearance changes. These observations suggest that the facial muscular apparatus of the monkey has extensive homology to the human face. 
The muscles of the human face, therefore, do not represent a significant evolutionary departure from those of a monkey species. Thus, facial expressions can be compared between humans and rhesus macaques at the level of the facial musculature, facilitating the systematic investigation of comparative facial communication.", "title": "" }, { "docid": "87f0a390580c452d77fcfc7040352832", "text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. ACL. • W. Yin and H. Schutze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:", "title": "" }, { "docid": "a1c534ca8925ccfed04b21a92263b9d7", "text": "In the last few decades, Structure from Motion (SfM) and visual Simultaneous Localization and Mapping (visual SLAM) techniques have gained significant interest from both the computer vision and robotic communities. Many variants of these techniques have started to make an impact in a wide range of applications, including robot navigation and augmented reality. However, despite some remarkable results in these areas, most SfM and visual SLAM techniques operate based on the assumption that the observed environment is static. However, when faced with moving objects, overall system accuracy can be jeopardized. In this article, we present for the first time a survey of visual SLAM and SfM techniques that are targeted toward operation in dynamic environments. We identify three main problems: how to perform reconstruction (robust visual SLAM), how to segment and track dynamic objects, and how to achieve joint motion segmentation and reconstruction. Based on this categorization, we provide a comprehensive taxonomy of existing approaches. Finally, the advantages and disadvantages of each solution class are critically discussed from the perspective of practicality and robustness.", "title": "" }, { "docid": "e60622f175cb091537f3a1a2cb2550ae", "text": "Non-differentiable and constrained optimization play a key role in machine learning, signal and image processing, communications, and beyond. For high-dimensional minimization problems involving large datasets or many unknowns, the forward-backward splitting method (also known as the proximal gradient method) provides a simple, yet practical solver. Despite its apparent simplicity, the performance of the forward-backward splitting method is highly sensitive to implementation details. This article provides an introductory review of forward-backward splitting with a special emphasis on practical implementation aspects. In particular, issues like stepsize selection, acceleration, stopping conditions, and initialization are considered. Numerical experiments are used to compare the effectiveness of different approaches. Many variations of forward-backward splitting are implemented in a new solver called FASTA (short for Fast Adaptive Shrinkage/Thresholding Algorithm). 
FASTA provides a simple interface for applying forward-backward splitting to a broad range of problems appearing in sparse recovery, logistic regression, multiple measurement vector (MMV) problems, democratic representations, 1-bit matrix completion, total-variation (TV) denoising, phase retrieval, as well as non-negative matrix factorization.", "title": "" }, { "docid": "eb86266b6f2a6c5bddece58d2ea6121a", "text": "Adoptive immunotherapy, or the infusion of lymphocytes, is a promising approach for the treatment of cancer and certain chronic viral infections. The application of the principles of synthetic biology to enhance T cell function has resulted in substantial increases in clinical efficacy. The primary challenge to the field is to identify tumor-specific targets to avoid off-tumor, on-target toxicity. Given recent advances in efficacy in numerous pilot trials, the next steps in clinical development will require multicenter trials to establish adoptive immunotherapy as a mainstream technology.", "title": "" }, { "docid": "067ec456d76cce7978b3d2f0c67269ed", "text": "With the development of deep learning, the performance of hyperspectral image (HSI) classification has been greatly improved in recent years. The shortage of training samples has become a bottleneck for further improvement of performance. In this paper, we propose a novel convolutional neural network framework for the characteristics of hyperspectral image data called HSI-CNN, which can also provides ideas for the processing of one-dimensional data. Firstly, the spectral-spatial feature is extracted from a target pixel and its neighbors. Then, a number of one-dimensional feature maps, obtained by convolution operation on spectral-spatial features, are stacked into a two-dimensional matrix. Finally, the two-dimensional matrix considered as an image is fed into standard CNN. This is why we call it HSI-CNN. In addition, we also implements two depth network classification models, called HSI-CNN+XGBoost and HSI-CapsNet, in order to compare the performance of our framework. Experiments show that the performance of hyperspectral image classification is improved efficiently with HSI-CNN framework. We evaluate the model's performance using four popular HSI datasets, which are the Kennedy Space Center (KSC), Indian Pines (IP), Pavia University scene (PU) and Salinas scene (SA). As far as we concerned, the accuracy of HSI-CNN has kept pace with the state-of-art methods, which is 99.28%, 99.09%, 99.57%, 98.97% separately.", "title": "" }, { "docid": "2f9d5235bac1d8b3a9c26cd00e843fb9", "text": "K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.", "title": "" }, { "docid": "1c17535a4f1edc36b698295136e9711a", "text": "Massive digital acquisition and preservation of deteriorating historical and artistic documents is of particular importance due to their value and fragile condition. The study and browsing of such digital libraries is invaluable for scholars in the Cultural Heritage field but requires automatic tools for analyzing and indexing these datasets. We present two completely automatic methods requiring no human intervention: text height estimation and text line extraction. 
Our proposed methods have been evaluated on a huge heterogeneous corpus of illuminated medieval manuscripts of different writing styles and with various problematic attributes, such as holes, spots, ink bleed-through, ornamentation, background noise, and overlapping text lines. Our experimental results demonstrate that these two new methods are efficient and reliable, even when applied to very noisy and damaged old handwritten manuscripts.", "title": "" }, { "docid": "3dfd3093b6abb798474dec6fb9cfca36", "text": "This paper proposes a new image representation for texture categorization, which is based on extension of local binary patterns (LBP). As we know LBP can achieve effective description ability with appearance invariance and adaptability of patch matching based methods. However, LBP only thresholds the differential values between neighborhood pixels and the focused one to 0 or 1, which is very sensitive to noise existing in the processed image. This study extends LBP to local ternary patterns (LTP), which considers the differential values between neighborhood pixels and the focused one as negative or positive stimulus if the absolute differential value is large; otherwise no stimulus (set as 0). With the ternary values of all neighbored pixels, we can achieve a pattern index for each local patch, and then extract the pattern histogram for image representation. Experiments on two texture datasets: Brodats32 and KTH TIPS2-a validate that the robust LTP can achieve much better performances than the conventional LBP and the state-of-the-art methods.", "title": "" }, { "docid": "824bcc0f9f4e71eb749a04f441891200", "text": "We characterize the singular values of the linear transformation associated with a convolution applied to a two-dimensional feature map with multiple channels. Our characterization enables efficient computation of the singular values of convolutional layers used in popular deep neural network architectures. It also leads to an algorithm for projecting a convolutional layer onto the set of layers obeying a bound on the operator norm of the layer. We show that this is an effective regularizer; periodically applying these projections during training improves the test error of a residual network on CIFAR-10 from 6.2% to 5.3%.", "title": "" }, { "docid": "815e0ad06fdc450aa9ba3f56ab19ab05", "text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.", "title": "" }, { "docid": "7677f90e0d949488958b27422bdffeb5", "text": "This vignette is a slightly modified version of Koenker (2008a). 
It was written in plain latex not Sweave, but all data and code for the examples described in the text are available from either the JSS website or from my webpages. Quantile regression for censored survival (duration) data offers a more flexible alternative to the Cox proportional hazard model for some applications. We describe three estimation methods for such applications that have been recently incorporated into the R package quantreg: the Powell (1986) estimator for fixed censoring, and two methods for random censoring, one introduced by Portnoy (2003), and the other by Peng and Huang (2008). The Portnoy and Peng-Huang estimators can be viewed, respectively, as generalizations to regression of the Kaplan-Meier and NelsonAalen estimators of univariate quantiles for censored observations. Some asymptotic and simulation comparisons are made to highlight advantages and disadvantages of the three methods.", "title": "" } ]
scidocsrr
7f360b9a8631c00f477628c509eb4820
Cloud IoT Based Greenhouse Monitoring System
[ { "docid": "1a101ae3faeaa775737799c2324ef603", "text": "in recent years, greenhouse technology in agriculture is to automation, information technology direction with the IOT (internet of things) technology rapid development and wide application. In the paper, control networks and information networks integration of IOT technology has been studied based on the actual situation of agricultural production. Remote monitoring system with internet and wireless communications combined is proposed. At the same time, taking into account the system, information management system is designed. The collected data by the system provided for agricultural research facilities.", "title": "" }, { "docid": "e6021af3cb62968b290a750ec5d8b6bd", "text": "This paper mainly focuses on the controlling of hom e appliances remotely and providing security when the user is away from the place. The system is SMS based and uses wireless technology to revolutionize the standards of living. This system provides ideal solution to the problems faced by home owners in daily life. The system is wireless t herefore more adaptable and cost-effective. The HACS system provides security against intrusion as well as automates various home appliances using SMS. The system uses GSM technology thu s providing ubiquitous access to the system for security and automated appliance control.", "title": "" } ]
[ { "docid": "9ad8a5b73430e4fe6b86d5fb8e2412b0", "text": "We apply coset codes to adaptive modulation in fading channels. Adaptive modulation is a powerful technique to improve the energy efficiency and increase the data rate over a fading channel. Coset codes are a natural choice to use with adaptive modulation since the channel coding and modulation designs are separable. Therefore, trellis and lattice codes designed for additive white Gaussian noise (AWGN) channels can be superimposed on adaptive modulation for fading channels, with the same approximate coding gains. We first describe the methodology for combining coset codes with a general class of adaptive modulation techniques. We then apply this methodology to a spectrally efficient adaptive M -ary quadrature amplitude modulation (MQAM) to obtain trellis-coded adaptive MQAM. We present analytical and simulation results for this design which show an effective coding gain of 3 dB relative to uncoded adaptive MQAM for a simple four-state trellis code, and an effective 3.6-dB coding gain for an eight-state trellis code. More complex trellis codes are shown to achieve higher gains. We also compare the performance of trellis-coded adaptive MQAM to that of coded modulation with built-in time diversity and fixed-rate modulation. The adaptive method exhibits a power savings of up to 20 dB.", "title": "" }, { "docid": "f15cb62cb81b71b063d503eb9f44d7c5", "text": "This study presents an improved krill herd (IKH) approach to solve global optimization problems. The main improvement pertains to the exchange of information between top krill during motion calculation process to generate better candidate solutions. Furthermore, the proposed IKH method uses a new Lévy flight distribution and elitism scheme to update the KH motion calculation. This novel meta-heuristic approach can accelerate the global convergence speed while preserving the robustness of the basic KH algorithm. Besides, the detailed implementation procedure for the IKH method is described. Several standard benchmark functions are used to verify the efficiency of IKH. Based on the results, the performance of IKH is superior to or highly competitive with the standard KH and other robust population-based optimization methods. & 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7f3686b783273c4df7c4fb41fe7ccefd", "text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "e37c560150a94947117d7c796af73469", "text": "For many players in financial markets, the price impact of their trading activity represents a large proportion of their transaction costs. This paper proposes a novel machine learning method for predicting the price impact of order book events. Specifically, we introduce a prediction system based on performance weighted ensembles of random forests. The system's performance is benchmarked using ensembles of other popular regression algorithms including: liner regression, neural networks and support vector regression using depth-of-book data from the BATS Chi-X exchange. The results show that recency-weighted ensembles of random forests produce over 15% greater prediction accuracy on out-of-sample data, for 5 out of 6 timeframes studied, compared with all benchmarks.", "title": "" }, { "docid": "6f973565132ed9a535551ca7ec78086d", "text": "This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.", "title": "" }, { "docid": "599f4afe379a877e324547e09033465d", "text": "Large-scale graph analytics is a central tool in many fields, and exemplifies the size and complexity of Big Data applications. Recent distributed graph processing frameworks utilize the venerable Bulk Synchronous Parallel (BSP) model and promise scalability for large graph analytics. This has been made popular by Google's Pregel, which provides an architecture design for BSP graph processing. Public clouds offer democratized access to medium-sized compute infrastructure with the promise of rapid provisioning with no capital investment. Evaluating BSP graph frameworks on cloud platforms with their unique constraints is less explored. Here, we present optimizations and analyses for computationally complex graph analysis algorithms such as betweenness-centrality and all-pairs shortest paths on a native BSP framework we have developed for the Microsoft Azure Cloud, modeled on the Pregel graph processing model. We propose novel heuristics for scheduling graph vertex processing in swaths to maximize resource utilization on cloud VMs that lead to a 3.5x performance improvement. We explore the effects of graph partitioning in the context of BSP, and show that even a well partitioned graph may not lead to performance improvements due to BSP's barrier synchronization. We end with a discussion on leveraging cloud elasticity for dynamically scaling the number of BSP workers to achieve a better performance than a static deployment, and at a significantly lower cost.", "title": "" }, { "docid": "18e75ca50be98af1d5a6a2fd22b610d3", "text": "We propose a new type of saliency&#x2014;context-aware saliency&#x2014;which aims at detecting the image regions that represent the scene. This definition differs from previous definitions whose goal is to either identify fixation points or detect the dominant object. 
In accordance with our saliency definition, we present a detection algorithm which is based on four principles observed in the psychological literature. The benefits of the proposed approach are evaluated in two applications where the context of the dominant objects is just as essential as the objects themselves. In image retargeting, we demonstrate that using our saliency prevents distortions in the important regions. In summarization, we show that our saliency helps to produce compact, appealing, and informative summaries.", "title": "" }, { "docid": "b9c2db4d1b90f68833581585596144a2", "text": "The Internet of things (IoT) is a next generation of Internet connected embedded ICT systems in a digital environment to seamlessly integrate supply chain and logistics processes. Integrating emerging IoT into the current ICT systems can be unique because of its intelligence, autonomous and pervasive applications. However, research on the IoT adoption in supply chain domain is scarce and acceptance of the IoT into the retail services in specific has been overly rhetoric. This study is drawn upon the organisational capability theory for developing an empirical model considering the effect of IoT capabilities on multiple dimensions of supply chain process integration, and in turn improves supply chain performance as well as organisational performance. Cross-sectional survey data from 227 Australian retail firms was analysed using structural equation modelling (SEM). The results indicate that IoT capability has a positive and significant effect on internal, customer-, and supplier-related process integration that in turn positively affects supply chain performance and organisational performance. Theoretically, the study contributes to a body of knowledge that integrates information systems research into supply chain integration by establishing an empirical evidence of how IoT-enabled process integration can enhance the performance at both supply chain and organisational level. Practically, the results inform the managers of the likely investment on IoT that can lead to chain’s performance outcome.", "title": "" }, { "docid": "d88059813c4064ec28c58a8ab23d3030", "text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. 
Simulation of broadcast and unicast protocols is carried out and the results are presented.", "title": "" }, { "docid": "eead6bfbb549a809046536f7d4b8acbd", "text": "With the advent of numerous community forums, tasks associated with the same have gained importance in the recent past. With the influx of new questions every day on these forums, the issues of identifying methods to find answers to said questions, or even trying to detect duplicate questions, are of practical importance and are challenging in their own right. This paper aims at surveying some of the aforementioned issues, and methods proposed for tackling the same.", "title": "" }, { "docid": "03ff1bdb156c630add72357005a142f5", "text": "Recent advances in media generation techniques have made it easier for attackers to create forged images and videos. State-of-the-art methods enable the real-time creation of a forged version of a single video obtained from a social network. Although numerous methods have been developed for detecting forged images and videos, they are generally targeted at certain domains and quickly become obsolete as new kinds of attacks appear. The method introduced in this paper uses a capsule network to detect various kinds of spoofs, from replay attacks using printed images or recorded videos to computer-generated videos using deep convolutional neural networks. It extends the application of capsule networks beyond their original intention to the solving of inverse graphics problems.", "title": "" }, { "docid": "45bf73a93f0014820864d1805f257bfc", "text": "SEPIC topology based bidirectional DC-DC Converter is proposed for interfacing energy storage elements such as batteries & super capacitors with various power systems. This proposed bidirectional DC-DC converter acts as a buck boost where it changes its output voltage according to its duty cycle. An important factor is to increase the voltage conversion ratio as well as to achieve high efficiency. The proposed SEPIC based BDC converter is used to step up the voltage: the low voltage at the input side is converted into a very high level at the output side to drive the HVDC smart grid. In this project a PIC microcontroller is used to give a faster response than the existing system. The proposed scheme ensures that the voltages on both sides of the converter are always matched, thereby reducing the conduction losses and improving efficiency. MATLAB/Simulink software is utilized for simulation. The obtained experimental results show the functionality and feasibility of the proposed converter.", "title": "" }, { "docid": "cf7af6838ae725794653bfce39c609b8", "text": "This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence vectorization strategy, network depth and the deep feature to predict for image to sentence matching. We also generalize Word2VisualVec for matching a video to a sentence, by extending the predictive abilities to 3-D ConvNet features as well as a visual-audio representation. 
Experiments on four challenging image and video benchmarks detail Word2VisualVec’s properties, capabilities for image and video to sentence matching, and on all datasets its state-of-the-art results.", "title": "" }, { "docid": "e0155b21837e87dd1c7bb01635d042e9", "text": "The purpose of this paper is to provide the reader with an extensive technical analysis and review of the book, \"Multi agent Systems: A Modern Approach to Distributed Artificial Intelligence\" by Gerhard Weiss. Due to the complex nature of the topic of distributed artificial intelligence (DAT) and multi agent systems (MAS), this paper has been divided into two major segments: an overview of field and book analysis. The first section of the paper provides the reader with background information about the topic of DAT and MAS, which not only introduces the reader to the field but also assists the reader to comprehend the essential themes in such a complex field. On the other hand, the second portion of the paper provides the reader with a comprehensive review of the book from the viewpoint of a senior computer science student with an introductory knowledge of the field of artificial intelligence.", "title": "" }, { "docid": "16c05466aa84e1704b528ccac34a4004", "text": "Most cloud services are built with multi-tenancy which enables data and configuration segregation upon shared infrastructures. Each tenant essentially operates in an individual silo without interacting with other tenants. As cloud computing evolves we anticipate there will be increased need for tenants to collaborate across tenant boundaries. This will require cross-tenant trust models supported and enforced by the cloud service provider. Considering the on-demand self-service feature intrinsic to cloud computing, we propose a formal cross-tenant trust model (CTTM) and its role-based extension (RB-CTTM) integrating various types of trust relations into cross-tenant access control models which can be enforced by the multi-tenant authorization as a service (MTAaaS) platform in the cloud.", "title": "" }, { "docid": "371c3b72d33c17080968e65f1a24787d", "text": "Bullying and cyberbullying have serious consequences for all those involved, especially the victims, and its prevalence is high throughout all the years of schooling, which emphasizes the importance of prevention. This article describes an intervention proposal, made up of a program (Cyberprogram 2.0 Garaigordobil and Martínez-Valderrey, 2014a) and a videogame (Cooperative Cybereduca 2.0 Garaigordobil and Martínez-Valderrey, 2016b) which aims to prevent and reduce cyberbullying during adolescence and which has been validated experimentally. The proposal has four objectives: (1) To know what bullying and cyberbullying are, to reflect on the people involved in these situations; (2) to become aware of the harm caused by such behaviors and the severe consequences for all involved; (3) to learn guidelines to prevent and deal with these situations: know what to do when one suffers this kind of violence or when observing that someone else is suffering it; and (4) to foster the development of social and emotional factors that inhibit violent behavior (e.g., communication, ethical-moral values, empathy, cooperation…). The proposal is structured around 25 activities to fulfill these goals and it ends with the videogame. The activities are carried out in the classroom, and the online video is the last activity, which represents the end of the intervention program. 
The videogame (www.cybereduca.com) is a trivial pursuit game with questions and answers related to bullying/cyberbullying. This cybernetic trivial pursuit is organized around a fantasy story, a comic that guides the game. The videogame contains 120 questions about 5 topics: cyberphenomena, computer technology and safety, cybersexuality, consequences of bullying/cyberbullying, and coping with bullying/cyberbullying. To evaluate the effectiveness of the intervention, a quasi-experimental design, with repeated pretest-posttest measures and control groups, was used. During the pretest and posttest stages, 8 assessment instruments were administered. The experimental group randomly received the intervention proposal, which consisted of one weekly 1-h session during the entire school year. The results obtained with the analyses of variance of the data collected before and after the intervention in the experimental and control groups showed that the proposal significantly promoted the following aspects in the experimental group: (1) a decrease in face-to-face bullying and cyberbullying behaviors, in different types of school violence, premeditated and impulsive aggressiveness, and in the use of aggressive conflict-resolution strategies; and (2) an increase of positive social behaviors, self-esteem, cooperative conflict-resolution strategies, and the capacity for empathy. The results provide empirical evidence for the proposal. The importance of implementing programs to prevent bullying in all its forms, from the beginning of schooling and throughout formal education, is discussed.", "title": "" }, { "docid": "a10da2542efd44725a7ca499bd7019d3", "text": "During growth on fermentable substrates, such as glucose, pyruvate, which is the end-product of glycolysis, can be used to generate acetyl-CoA in the cytosol via acetaldehyde and acetate, or in mitochondria by direct oxidative decarboxylation. In the latter case, the mitochondrial pyruvate carrier (MPC) is responsible for pyruvate transport into mitochondrial matrix space. During chronological aging, yeast cells which lack the major structural subunit Mpc1 display a reduced lifespan accompanied by an age-dependent loss of autophagy. Here, we show that the impairment of pyruvate import into mitochondria linked to Mpc1 loss is compensated by a flux redirection of TCA cycle intermediates through the malic enzyme-dependent alternative route. In such a way, the TCA cycle operates in a \"branched\" fashion to generate pyruvate and is depleted of intermediates. Mutant cells cope with this depletion by increasing the activity of glyoxylate cycle and of the pathway which provides the nucleocytosolic acetyl-CoA. Moreover, cellular respiration decreases and ROS accumulate in the mitochondria which, in turn, undergo severe damage. These acquired traits in concert with the reduced autophagy restrict cell survival of the mpc1∆ mutant during chronological aging. Conversely, the activation of the carnitine shuttle by supplying acetyl-CoA to the mitochondria is sufficient to abrogate the short-lived phenotype of the mutant.", "title": "" }, { "docid": "c9aa8454246e983e9aa2752bfa667f43", "text": "BACKGROUND\nADHD is diagnosed and treated more often in males than in females. Research on gender differences suggests that girls may be consistently underidentified and underdiagnosed because of differences in the expression of the disorder among boys and girls. 
One aim of the present study was to assess in a clinical sample of medication naïve boys and girls with ADHD, whether there were significant gender x diagnosis interactions in co-existing symptom severity and executive function (EF) impairment. The second aim was to delineate specific symptom ratings and measures of EF that were most important in distinguishing ADHD from healthy controls (HC) of the same gender.\n\n\nMETHODS\nThirty-seven females with ADHD, 43 males with ADHD, 18 HC females and 32 HC males between 8 and 17 years were included. Co-existing symptoms were assessed with self-report scales and parent ratings. EF was assessed with parent ratings of executive skills in everyday situations (BRIEF), and neuropsychological tests. The three measurement domains (co-existing symptoms, BRIEF, neuropsychological EF tests) were investigated using analysis of variance (ANOVA) and random forest classification.\n\n\nRESULTS\nANOVAs revealed only one significant diagnosis x gender interaction, with higher rates of self-reported anxiety symptoms in females with ADHD. Random forest classification indicated that co-existing symptom ratings was substantially better in distinguishing subjects with ADHD from HC in females (93% accuracy) than in males (86% accuracy). The most important distinguishing variable was self-reported anxiety in females, and parent ratings of rule breaking in males. Parent ratings of EF skills were better in distinguishing subjects with ADHD from HC in males (96% accuracy) than in females (92% accuracy). Neuropsychological EF tests had only a modest ability to categorize subjects as ADHD or HC in males (73% accuracy) and females (79% accuracy).\n\n\nCONCLUSIONS\nOur findings emphasize the combination of self-report and parent rating scales for the identification of different comorbid symptom expression in boys and girls already diagnosed with ADHD. Self-report scales may increase awareness of internalizing problems particularly salient in females with ADHD.", "title": "" }, { "docid": "b38939ec3c6f8e10553f934ceab401ff", "text": "According to recent work in the new field of lexical pragmatics, the meanings of words are frequently pragmatically adjusted and fine-tuned in context, so that their contribution to the proposition expressed is different from their lexically encoded sense. Well-known examples include lexical narrowing (e.g. ‘drink’ used to mean ALCOHOLIC DRINK), approximation (or loosening) (e.g. ‘flat’ used to mean RELATIVELY FLAT) and metaphorical extension (e.g. ‘bulldozer’ used to mean FORCEFUL PERSON). These three phenomena are often studied in isolation from each other and given quite distinct kinds of explanation. In this chapter, we will propose a more unified account. We will try to show that narrowing, loosening and metaphorical extension are simply different outcomes of a single interpretive process which creates an ad hoc concept, or occasion-specific sense, based on interaction among encoded concepts, contextual information and pragmatic expectations or principles. We will outline an inferential account of the lexical adjustment process using the framework of relevance theory, and compare it with some alternative accounts. * This work is part of an AHRC-funded project ‘A Unified Theory of Lexical Pragmatics’ (AR16356). 
We are grateful to our research assistants, Patricia Kolaiti, Tim Wharton and, in particular, Rosa Vega Moreno, whose PhD work on metaphor we draw on in this paper, and to Vladimir Žegarac, François Recanati, Nausicaa Pouscoulous, Paula Rubio Fernandez and Hanna Stoever, for helpful discussions. We would also like to thank Dan Sperber for sharing with us many valuable insights on metaphor and on lexical pragmatics more generally.", "title": "" }, { "docid": "3dfd31873c3d13e8e55a9e0c5bc6ed7c", "text": "Apache Spark is an open source distributed data processing platform that uses distributed memory abstraction to process large volume of data efficiently. However, performance of a particular job on Apache Spark platform can vary significantly depending on the input data type and size, design and implementation of the algorithm, and computing capability, making it extremely difficult to predict the performance metric of a job such as execution time, memory footprint, and I/O cost. To address this challenge, in this paper, we present a simulation driven prediction model that can predict job performance with high accuracy for Apache Spark platform. Specifically, as Apache spark jobs are often consist of multiple sequential stages, the presented prediction model simulates the execution of the actual job by using only a fraction of the input data, and collect execution traces (e.g., I/O overhead, memory consumption, execution time) to predict job performance for each execution stage individually. We evaluated our prediction framework using four real-life applications on a 13 node cluster, and experimental results show that the model can achieve high prediction accuracy.", "title": "" } ]
scidocsrr
554268452a7d8e75edca8dbc4593b1cc
Volunteer computing: a model of the factors determining contribution to community-based scientific research
[ { "docid": "c435c4106b1b5c90fe3ff607bc0d5f00", "text": "In recent years, we have witnessed a significant growth of “social computing” services, or online communities where users contribute content in various forms, including images, text or video. Content contribution from members is critical to the viability of these online communities. It is therefore important to understand what drives users to share content with others in such settings. We extend previous literature on user contribution by studying the factors that are associated with users’ photo sharing in an online community, drawing on motivation theories as well as on analysis of basic structural properties. Our results indicate that photo sharing declines in respect to the users’ tenure in the community. We also show that users with higher commitment to the community and greater “structural embeddedness” tend to share more content. We demonstrate that the motivation of self-development is negatively related to photo sharing, and that tenure in the community moderates the effect of self-development on photo sharing. Directions for future research, as well as implications for theory and practice are discussed.", "title": "" } ]
[ { "docid": "cae689b8a27b05318088a16eaccd85b4", "text": "In recent years, electronic product have been demanded more functionalities, miniaturization, higher performance, reliability and low cost. Therefore, IC chip is required to deliver more signal I/O and better electrical characteristics under the same package footprint. None-Lead Bump Array (NBA) Chip Scale Structure is then developed to meet those requirements offering better electrical performance, more I/O accommodation and high transmission speed. To evaluate NBA package capability, the solder joint life, package warpage, die corner stress and thermal performance are characterized. Firstly, investigations on the warpage, die corner stress and thermal performance of NBA-QFN structure are performed by the use of Finite Element Method (FEM). Secondly, experiments are conducted for the solder joint reliability performance with different solder coverage and standoff height In the conclusion of this study, NBA-QFN would have no warpage risk, lower die corner stress and better thermal performance than TFBGA from simulation result. Beside that, the simulation result shows good agreement with experimental data. From the drop test study, with solder coverage less than 50% and standoff height lower than 40um would perform better solder joint life than others.", "title": "" }, { "docid": "3ba10a680a5204b8242203e053fc3379", "text": "Recommender system has been more and more popular and widely used in many applications recently. The increasing information available, not only in quantities but also in types, leads to a big challenge for recommender system that how to leverage these rich information to get a better performance. Most traditional approaches try to design a specific model for each scenario, which demands great efforts in developing and modifying models. In this technical report, we describe our implementation of feature-based matrix factorization. This model is an abstract of many variants of matrix factorization models, and new types of information can be utilized by simply defining new features, without modifying any lines of code. Using the toolkit, we built the best single model reported on track 1 of KDDCup’11.", "title": "" }, { "docid": "a60d79008bfb7cccee262667b481d897", "text": "It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker’s personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. 
Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using self-reports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers.", "title": "" }, { "docid": "b7d76bc189aa2e99886abcaddce7d61d", "text": "Currently, face recognition systems are growing steadily in scope. A few years ago, face recognition was used for personal identification within a limited scope; now this technology has grown into the field of security, in terms of preventing fraudsters, criminals, and terrorists. In addition, face recognition is also used in detecting how tired a driver is, reducing the occurrence of road accidents, as well as in marketing, advertising, health, and others. Many methods have been developed to give the best accuracy in face recognition. Deep learning approaches have become the trend in this field because of their stunning results and fast computation. However, accuracy, complexity, and scalability remain challenges in face recognition. This paper focuses on recognizing the importance of this technology and on how to achieve high accuracy with low complexity. Deep learning and non-deep learning methods are discussed and compared to analyze their advantages and disadvantages. From a critical analysis using experiments with the YALE dataset, the non-deep learning algorithm can reach up to 90.6% for low-high complexity and the deep learning method 94.67% for low-high complexity. A genetic algorithm combined with CNN and SVM is an optimization method for overcoming the accuracy and complexity problems.", "title": "" }, { "docid": "1f0c842e4e2158daa586d9ee46a0d52a", "text": "The ability to accurately identify the network traffic associated with different P2P applications is important to a broad range of network operations including application-specific traffic engineering, capacity planning, provisioning, service differentiation, etc. However, traditional traffic to higher-level application mapping techniques such as default server TCP or UDP network-port based disambiguation are highly inaccurate for some P2P applications. In this paper, we provide an efficient approach for identifying the P2P application traffic through application level signatures. We first identify the application level signatures by examining some available documentation and packet-level traces. We then utilize the identified signatures to develop online filters that can efficiently and accurately track the P2P traffic even on high-speed network links. We examine the performance of our application-level identification approach using five popular P2P protocols. Our measurements show that our technique achieves less than 5% false positive and false negative ratios in most cases. We also show that our approach only requires the examination of the very first few packets (less than 10 packets) to identify a P2P connection, which makes our approach highly scalable. Our technique can significantly improve the P2P traffic volume estimates over what pure network port based approaches provide. 
For instance, we were able to identify 3 times as much traffic for the popular Kazaa P2P protocol, compared to the traditional port-based approach.", "title": "" }, { "docid": "73577e88b085e9e187328ce36116b761", "text": "We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.", "title": "" }, { "docid": "23e32a61107fe286e432d5f2ecda7bad", "text": "How do we scale information extraction to the massive size and unprecedented heterogeneity of the Web corpus? Beginning in 2003, our KnowItAll project has sought to extract high-quality knowledge from the Web. In 2007, we introduced the Open Information Extraction (Open IE) paradigm which eschews handlabeled training examples, and avoids domainspecific verbs and nouns, to develop unlexicalized, domain-independent extractors that scale to the Web corpus. Open IE systems have extracted billions of assertions as the basis for both commonsense knowledge and novel question-answering systems. This paper describes the second generation of Open IE systems, which rely on a novel model of how relations and their arguments are expressed in English sentences to double precision/recall compared with previous systems such as TEXTRUNNER and WOE.", "title": "" }, { "docid": "fe70e1e6a00fec08f768669f152fd9e4", "text": "Numerous efforts in balancing the trade-off between power, area and performance have been done in the medium performance, medium power region of the design spectrum. However, not much study has been done at the two extreme ends of the design spectrum, namely the ultra-low power with acceptable performance at one end (the focus of this paper), and high performance with power within limit at the other. One solution to achieve the ultra-low power requirement is to operate the digital logic gates in subthreshold region. We analyze both CMOS and Pseudo-NMOS logic families operating in subthreshold region. We compare the results with CMOS in normal strong inversion region and with other known low-power logic, namely, energy recovery logic. Our results show an energy per switching reduction of two orders of magnitude for an 8x8 carry save array multiplier when it is operated in subthreshold region.", "title": "" }, { "docid": "cd2935a0fb6e4ecc2b6fb19651ba72a2", "text": "Thermal cameras have historically been of interest mainly for military applications. Increasing image quality and resolution combined with decreasing price and size during recent years have, however, opened up new application areas. They are now widely used for civilian applications, e.g., within industry, to search for missing persons, in automotive safety, as well as for medical applications. Thermal cameras are useful as soon as it is possible to measure a temperature difference. 
Compared to cameras operating in the visual spectrum, they are advantageous due to their ability to see in total darkness, robustness to illumination variations, and less intrusion on privacy. This thesis addresses the problem of detection and tracking in thermal infrared imagery. Visual detection and tracking of objects in video are research areas that have been and currently are subject to extensive research. Indications of their popularity are recent benchmarks such as the annual Visual Object Tracking (VOT) challenges, the Object Tracking Benchmarks, the series of workshops on Performance Evaluation of Tracking and Surveillance (PETS), and the workshops on Change Detection. Benchmark results indicate that detection and tracking are still challenging problems. A common belief is that detection and tracking in thermal infrared imagery is identical to detection and tracking in grayscale visual imagery. This thesis argues that the preceding allegation is not true. The characteristics of thermal infrared radiation and imagery pose certain challenges to image analysis algorithms. The thesis describes these characteristics and challenges as well as presents evaluation results confirming the hypothesis. Detection and tracking are often treated as two separate problems. However, some tracking methods, e.g. template-based tracking methods, base their tracking on repeated specific detections. They learn a model of the object that is adaptively updated. That is, detection and tracking are performed jointly. The thesis includes a template-based tracking method designed specifically for thermal infrared imagery, describes a thermal infrared dataset for evaluation of template-based tracking methods, and provides an overview of the first challenge on short-term, single-object tracking in thermal infrared video. Finally, two applications employing detection and tracking methods are presented.", "title": "" }, { "docid": "1969bf5a07349cc5a9b498e0437e41fe", "text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-the-art models.", "title": "" }, { "docid": "c0656a6691370928838a5fe4dd810aa0", "text": "Using GPUs as general-purpose processors has revolutionized parallel computing by providing, for a large and growing set of algorithms, massive data-parallelization on desktop machines. An obstacle to their widespread adoption, however, is the difficulty of programming them and the low-level control of the hardware required to achieve good performance. 
This paper proposes a programming approach, SafeGPU, that aims to make GPU data-parallel operations accessible through high-level libraries for object-oriented languages, while maintaining the performance benefits of lower-level code. The approach provides data-parallel operations for collections that can be chained and combined to express compound computations, with data synchronization and device management all handled automatically. It also integrates the design-by-contract methodology, which increases confidence in functional program correctness by embedding executable specifications into the program text. We present a prototype of SafeGPU for Eiffel, and show that it leads to modular and concise code that is accessible for GPGPU non-experts, while still providing performance comparable with that of hand-written CUDA code. We also describe our first steps towards porting it to C#, highlighting some challenges, solutions, and insights for implementing the approach in different managed languages. Finally, we show that runtime contract-checking becomes feasible in SafeGPU, as the contracts can be executed on the GPU.", "title": "" }, { "docid": "53d41fb8e188add204ba96669715b49a", "text": "A nationwide survey was conducted to investigate the prevalence of video game addiction and problematic video game use and their association with physical and mental health. An initial sample comprising 2,500 individuals was randomly selected from the Norwegian National Registry. A total of 816 (34.0 percent) individuals completed and returned the questionnaire. The majority (56.3 percent) of respondents used video games on a regular basis. The prevalence of video game addiction was estimated to be 0.6 percent, with problematic use of video games reported by 4.1 percent of the sample. Gender (male) and age group (young) were strong predictors for problematic use of video games. A higher proportion of high frequency compared with low frequency players preferred massively multiplayer online role-playing games, although the majority of high frequency players preferred other game types. Problematic use of video games was associated with lower scores on life satisfaction and with elevated levels of anxiety and depression. Video game use was not associated with reported amount of physical exercise.", "title": "" }, { "docid": "17cfa416085dd33f7ecdd3a680cb4265", "text": "The epidermal growth factor receptor (EGFR) is a receptor tyrosine kinase that is frequently mutated or overexpressed in a large number of tumors such as carcinomas or glioblastoma. Inhibitors of EGFR activation have been successfully established for the therapy of some cancers and are more and more frequently being used as first or later line therapies. Although the side effects induced by inhibitors of EGFR are less severe than those observed with classic cytotoxic chemotherapy and can usually be handled by out-patient care, they may still be a cause for dose reduction or discontinuation of treatment that can reduce the effectiveness of antitumor therapy. The mechanisms underlying these cutaneous side effects are only partly understood. Important questions, such as the reasons for the correlation between the intensity of the side effects and the efficiency of treatment with EGFR inhibitors, remain to be answered. Optimized adjuvant strategies to accompany anti-EGFR therapy need to be found for optimal therapeutic application and improved quality of life of patients. 
Here, we summarize current literature on the molecular and cellular mechanisms underlying the cutaneous side effects induced by EGFR inhibitors and provide evidence that keratinocytes are probably the optimal targets for adjuvant therapy aimed at alleviating skin toxicities.", "title": "" }, { "docid": "e9ba9af6b349c5e79b21dac2d5f8e845", "text": "Context: Software defect prediction is important for identification of defect-prone parts of a software. Defect prediction models can be developed using software metrics in combination with defect data for predicting defective classes. Various studies have been conducted to find the relationship between software metrics and defect proneness, but there are few studies that statistically determine the effectiveness of the results. Objective: The main objectives of the study are (i) comparison of the machine-learning techniques using data sets obtained from popular open source software (ii) use of appropriate performance measures for measuring the performance of defect prediction models (iii) use of statistical tests for effective comparison of machine-learning techniques and (iv) validation of models over different releases of data sets. Method: In this study we use object-oriented metrics for predicting defective classes using 18 machinelearning techniques. The proposed framework has been applied to seven application packages of well known, widely used Android operating system viz. Contact, MMS, Bluetooth, Email, Calendar, Gallery2 and Telephony. The results are validated using 10-fold and inter-release validation methods. The reliability and significance of the results are evaluated using statistical test and post-hoc analysis. Results: The results show that the area under the curve measure for Naïve Bayes, LogitBoost and Multilayer Perceptron is above 0.7 in most of the cases. The results also depict that the difference between the ML techniques is statistically significant. However, it is also proved that the Support Vector Machines based techniques such as Support Vector Machines and voted perceptron do not possess the predictive capability for predicting defects. Conclusion: The results confirm the predictive capability of various ML techniques for developing defect prediction models. The results also confirm the superiority of one ML technique over the other ML techniques. Thus, the software engineers can use the results obtained from this study in the early phases of the software development for identifying defect-prone classes of given software. © 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "719794106634ad35bedce96305bef83a", "text": "With recent advances in mobile computing, the demand for visual localization or landmark identification on mobile devices is gaining interest. We advance the state of the art in this area by fusing two popular representations of street-level image data — facade-aligned and viewpoint-aligned — and show that they contain complementary information that can be exploited to significantly improve the recall rates on the city scale. We also improve feature detection in low contrast parts of the street-level data, and discuss how to incorporate priors on a user's position (e.g. given by noisy GPS readings or network cells), which previous approaches often ignore. 
Finally, and maybe most importantly, we present our results according to a carefully designed, repeatable evaluation scheme and make publicly available a set of 1.7 million images with ground truth labels, geotags, and calibration data, as well as a difficult set of cell phone query images. We provide these resources as a benchmark to facilitate further research in the area.", "title": "" }, { "docid": "e81f1caa398de7f56a70cc4db18d58db", "text": "UNLABELLED\nThis study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population. This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts.\n\n\nIN CONCLUSION\n1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population.", "title": "" }, { "docid": "3cf60753c37f2520188b26e67e243b6c", "text": "The growing dependence of critical infrastructures and industrial automation on interconnected physical and cyber-based control systems has resulted in a growing and previously unforeseen cyber security threat to supervisory control and data acquisition (SCADA) and distributed control systems (DCSs). It is critical that engineers and managers understand these issues and know how to locate the information they need. This paper provides a broad overview of cyber security and risk assessment for SCADA and DCS, introduces the main industry organizations and government groups working in this area, and gives a comprehensive review of the literature to date. Major concepts related to the risk assessment methods are introduced with references cited for more detail. Included are risk assessment methods such as HHM, IIM, and RFRM which have been applied successfully to SCADA systems with many interdependencies and have highlighted the need for quantifiable metrics. Presented in broad terms is probability risk analysis (PRA) which includes methods such as FTA, ETA, and FEMA. 
The paper concludes with a general discussion of two recent methods (one based on compromise graphs and one on augmented vulnerability trees) that quantitatively determine the probability of an attack, the impact of the attack, and the reduction in risk associated with a particular countermeasure.", "title": "" }, { "docid": "6c9c06604d5ef370b803bb54b4fe1e0c", "text": "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of minibatch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "title": "" }, { "docid": "68288cbb20c43b2f1911d6264cc81a6c", "text": "Folliculitis decalvans is an inflammatory presentation of cicatrizing alopecia characterized by inflammatory perifollicular papules and pustules. It generally occurs in adult males, predominantly involving the vertex and occipital areas of the scalp. The use of dermatoscopy in hair and scalp diseases improves diagnostic accuracy. Some trichoscopic findings, such as follicular tufts, perifollicular erythema, crusts and pustules, can be observed in folliculitis decalvans. More research on the pathogenesis and treatment options of this disfiguring disease is required for improving patient management.", "title": "" } ]
scidocsrr
fac637e3db8bd59b12bf162236400026
Literature Review and General Consideration of Energy Efficient Routing Protocols in MANETs
[ { "docid": "7785c16b3d0515057c8a0ec0ed55b5de", "text": "Most ad hoc mobile devices today operate on batteries. Hence, power consumption becomes an important issue. To maximize the lifetime of ad hoc mobile networks, the power consumption rate of each node must be evenly distributed, and the overall transmission power for each connection request must be minimized. These two objectives cannot be satisfied simultaneously by employing routing algorithms proposed in previous work. In this article we present a new power-aware routing protocol to satisfy these two constraints simultaneously; we also compare the performance of different types of power-related routing algorithms via simulation. Simulation results confirm the need to strike a balance in attaining service availability performance of the whole network vs. the lifetime of ad hoc mobile devices.", "title": "" } ]
[ { "docid": "bd4d6e83ccf5da959dac5bbc174d9d6f", "text": "This paper addresses the structure-and-motion problem, that requires to find camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented, that departs from the prevailing sequential paradigm and embraces instead a hierarchical approach. This method has several advantages, like a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows to process images without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method.", "title": "" }, { "docid": "60edfab6fa5f127dd51a015b20d12a68", "text": "We discuss the ethical implications of Natural Language Generation systems. We use one particular system as a case study to identify and classify issues, and we provide an ethics checklist, in the hope that future system designers may benefit from conducting their own ethics reviews based on our checklist.", "title": "" }, { "docid": "83f59014cebd1f0fb65d76b7239194e1", "text": "The increase in volume and sensitivity of data communicated and processed over the Internet has been accompanied by a corresponding need for e-commerce techniques in which entities can participate in a secure and anonymous fashion. Even simple arithmetic operations over a set of integers partitioned over a network require sophisticated algorithms. As a part of our earlier work, we have developed a secure protocol for computing dot products of two vectors. In this paper,we present a secure protocol for Yao’s millionaires’ problem. In this problem, each of the two participating parties have a number and the objective is to determine whose number is larger without disclosing any information about the numbers. This problem has direct applications in on-line bidding and auctions. Furthermore, combined with a secure dot-product, a solution to this secure multiparty computation provides necessary building blocks for such basic operations as frequent item-set generation in association rule mining. Although an asymptotically optimal solution for the secure multiparty computation of the ‘less-or-equal’ predicate exists in literature, this protocol is not suited for practical applications. Here, we present a protocol which has a much simpler structure and is more efficient for numbers in ranges practically encountered in typical ecommerce applications. Furthermore, advances in cryptanalysis and the subsequent increase in key lengths for public-key cryptographic systems accentuate the advantage of the proposed protocol. We present experimental evidence demonstrating the efficiency of the proposed protocol both in terms of time and communication overhead.", "title": "" }, { "docid": "9faf87e51078bb92f146ba4d31f04c7f", "text": "This paper first describes the problem of goals nonreachable with obstacles nearby when using potential field methods for mobile robot path planning. Then, new repulsive potential functions are presented by taking the relative distance between the robot and the goal into consideration, which ensures that the goal position is the global minimum of the total potential.", "title": "" }, { "docid": "1d8e2c9bd9cfa2ce283e01cbbcd6ca83", "text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify. 
In the image domain, these perturbations are often virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but fail to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.", "title": "" }, { "docid": "5bd9b0de217f2a537a5fadf99931d149", "text": "A linear programming (LP) method for security dispatch and emergency control calculations on large power systems is presented. The method is reliable, fast, flexible, easy to program, and requires little computer storage. It works directly with the normal power-system variables and limits, and incorporates the usual sparse matrix techniques. An important feature of the method is that it handles multi-segment generator cost curves neatly and efficiently.", "title": "" }, { "docid": "aef55420ff44872ee35ecfd4cd6528e0", "text": "Data quality and especially the assessment of data quality have been intensively discussed in research and practice alike. To support an economically oriented management of data quality and decision making under uncertainty, it is essential to assess the data quality level by means of well-founded metrics. However, if not adequately defined, these metrics can lead to wrong decisions and economic losses. Therefore, based on a decision-oriented framework, we present a set of five requirements for data quality metrics. These requirements are relevant for a metric that aims to support an economically oriented management of data quality and decision making under uncertainty. We further demonstrate the applicability and efficacy of these requirements by evaluating five data quality metrics for different data quality dimensions. Moreover, we discuss practical implications when applying the presented requirements.", "title": "" }, { "docid": "ef2898e76ab581478b87674356185c2d", "text": "This paper presents the theory, design procedure, and implementation of a dual-band planar quadrature hybrid with enhanced bandwidth. The topology of the circuit is a three-branch-line (3-BL) quadrature hybrid, which provides much larger flexibility to allocate the desired operating frequencies and necessary bandwidths than other previously published configurations. A performance comparison with other dual-band planar topologies is presented. Finally, a 3-BL quadrature hybrid for dual band (2.4 and 5 GHz) wireless local area network systems was fabricated, aimed to cover the bands corresponding to the standards IEEE802.11a/b. 
The measurements show a 16% and 18% bandwidth for the lower and upper frequency, respectively, satisfying and exceeding the bandwidth requirements for the above standards", "title": "" }, { "docid": "d486fca984c9cf930a4d1b4367949016", "text": "In this paper, we present a generative model to generate a natural language sentence describing a table region, e.g., a row. The model maps a row from a table to a continuous vector and then generates a natural language sentence by leveraging the semantics of a table. To deal with rare words appearing in a table, we develop a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Extensive experiments demonstrate the accuracy of the model and the power of the copying mechanism. On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to 39.12, respectively. Furthermore, we introduce an open-domain dataset WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our model achieves a BLEU-4 score of 38.23, which outperforms template based and language model based approaches.", "title": "" }, { "docid": "e0dfdc0d6a8a8cfd9834fc9873389b10", "text": "In this paper we study how to build an effective incremental crawler. The crawler selectively and incrementally updates its index and/or local collection of web pages, instead of periodically refreshing the collection in batch mode. The incremental crawler can improve the “freshness” of the collection significantly and bring in new pages in a more timely manner. We first present results from an experiment conducted on more than half million web pages over 4 months, to estimate how web pages evolve over time. Based on these experimental results, we compare various design choices for an incremental crawler and discuss their trade-offs. We propose an architecture for the incremental crawler, which combines the best design choices.", "title": "" }, { "docid": "28bad4ad262e5b0e39c1ecc981fbe9ce", "text": "In FFT computation, the butterflies play a central role, since they allow the calculation of complex terms. Therefore, the optimization of the butterfly can contribute for the power reduction in FFT architectures. In this paper we exploit different addition schemes in order to improve the efficiency of 16 bit-width radix-2 and radix-4 FFT butterflies. Combinations of simultaneous addition of three and seven operands are inserted in the structures of the butterflies in order to produce power-efficient structures. The used additions schemes include Carry Save Adder (CSA), and adder compressors. The radix-2 and radix-4 butterflies were implemented in hardware description language and synthesized to 45nm Nangate Open Cell Library using Cadence RTL Compiler. The main results show that both radix-2 and radix-4 butterflies, with CSA, are more efficient when compared with the same structures with other adder circuits.", "title": "" }, { "docid": "ef08ef786fd759b33a7d323c69be19db", "text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. 
A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.", "title": "" }, { "docid": "1df73f7558216e726e6165f09dec2222", "text": "This paper presents a method for constructing human-robot interaction policies in settings where multimodality, i.e., the possibility of multiple highly distinct futures, plays a critical role in decision making. We are motivated in this work by the example of traffic weaving, e.g., at highway on-ramps/off-ramps, where entering and exiting cars must swap lanes in a short distance-a challenging negotiation even for experienced drivers due to the inherent multimodal uncertainty of who will pass whom. Our approach is to learn multimodal probability distributions over future human actions from a dataset of human-human exemplars and perform real-time robot policy construction in the resulting environment model through massively parallel sampling of human responses to candidate robot action sequences. Direct learning of these distributions is made possible by recent advances in the theory of conditional variational autoencoders (CVAEs), whereby we learn action distributions simultaneously conditioned on the present interaction history, as well as candidate future robot actions in order to take into account response dynamics. We demonstrate the efficacy of this approach with a human-in-the-loop simulation of a traffic weaving scenario.", "title": "" }, { "docid": "616280024e85264e542df70d1e7766cf", "text": "Cable-driven parallel manipulators (CDPMs) are a special class of parallel manipulators that are driven by cables instead of rigid links. So CDPMs have a light-weight structure with large reachable workspace. The aim of this paper is to provide the kinematic analysis and the design optimization of a cable-driven 2-DOF module, comprised of a passive universal joint, for a reconfigurable system. This universal joint module can be part of a modular reconfigurable system where various cable-driven modules can be attached serially into many different configurations. Based on a symmetric design approach, six topological configurations are enumerated with three or four cables arrangements. With a variable constrained axis, the structure matrix of the universal joint has to be formulated with respect to the intermediate frame. The orientation workspace of the universal joint is a submanifold of SO(3). Therefore, the workspace representation is a plane in R2. With the integral measure for the submanifold expressed as a cosine function of one of the angles of rotation, an equivolumetric method is employed to numerically calculate the workspace volume. The orientation workspace volume of the universal joint module is found to be 2π. Optimization results show that the 4-1 cable arrangement produces the largest workspace with better Global Conditioning Index.", "title": "" }, { "docid": "4d9785b277de710693dc06659dd8ac89", "text": "It is useful to predict future values in time series data, for example when there are many sensors monitoring environments such as urban space. The Gaussian Process (GP) model is considered as a promising technique for this setting. 
However, the GP model requires too high a training cost to be tractable for large data. Though approximation methods have been proposed to improve GP's scalability, they usually can only capture global trends in the data and fail to preserve small-scale patterns, resulting in unsatisfactory performance.\n We propose a new method to apply the GP for sensor time series prediction. Instead of (eagerly) training GPs on entire datasets, we custom-build query-dependent GPs on small fractions of the data for each prediction request.\n Implementing this idea in practice at scale requires us to overcome two obstacles. On the one hand, a central challenge with such a semi-lazy learning model is the substantial model-building effort at kNN query time, which could lead to unacceptable latency. We propose a novel two-level inverted-like index to support kNN search using the DTW on the GPU, making such \"just-in-time\" query-dependent model construction feasible for real-time applications.\n On the other hand, several parameters should be tuned for each time series individually since different sensors have different data generating processes in diverse environments. Manually configuring the parameters is usually not feasible due to the large number of sensors. To address this, we devise an adaptive auto-tuning mechanism to automatically determine and dynamically adjust the parameters for each time series with little human assistance.\n Our method has the following strong points: (a) it can make prediction in real time without a training phase; (b) it can yield superior prediction accuracy; and (c) it can effectively estimate the analytical predictive uncertainty.\n To illustrate our points, we present SMiLer, a semi-lazy time series prediction system for sensors. Extensive experiments on real-world datasets demonstrate its effectiveness and efficiency. In particular, by devising a two-level inverted-like index on the GPU with an enhanced lower bound of the DTW, SMiLer accelerates the efficiency of kNN search by one order of magnitude over its baselines. The prediction accuracy of SMiLer is better than the state-of-the-art competitors (up to 10 competitors) with better estimation of predictive uncertainty.", "title": "" }, { "docid": "5764bcf220280c4c3be28375cdcbce26", "text": "This paper introduces a data-driven process for designing and fabricating materials with desired deformation behavior. Our process starts with measuring deformation properties of base materials. For each base material we acquire a set of example deformations, and we represent the material as a non-linear stress-strain relationship in a finite-element model. We have validated our material measurement process by comparing simulations of arbitrary stacks of base materials with measured deformations of fabricated material stacks. After material measurement, our process continues with designing stacked layers of base materials. We introduce an optimization process that finds the best combination of stacked layers that meets a user's criteria specified by example deformations. Our algorithm employs a number of strategies to prune poor solutions from the combinatorial search space. We demonstrate the complete process by designing and fabricating objects with complex heterogeneous materials using modern multi-material 3D printers.", "title": "" }, { "docid": "bec0f34d29c2ed7e9cfda66b050a1eb8", "text": "......................................................................................................................................... 
iii Acknowledgements ........................................................................................................................ iii 1.0 Overview ............................................................................................................................. 1 2.0 Bridge Evaluation Process .................................................................................................. 6 2.1 Inspection Basics .......................................................................................................... 6 2.2 Visual Inspection .......................................................................................................... 7 2.3 Defects .......................................................................................................................... 7 2.4 Traditional Inspection Tools ......................................................................................... 9 2.5 Advanced Inspection Techniques ................................................................................. 9 2.6 Condition Rating......................................................................................................... 10 3.0 In-Situ Monitoring Techniques ......................................................................................... 13 3.1 Accelerometers and Velocimeters .............................................................................. 13 3.2 Electrical Resistance ................................................................................................... 14 3.3 Electromechanical Impedance .................................................................................... 15 3.4 Fiber Optics ................................................................................................................ 16 3.5 GPS and Geodetic Measurements .............................................................................. 19 3.6 Magnetic and Magneto-Elastic ................................................................................... 20 3.7 Ultrasonic Emissions and Lamb Waves ..................................................................... 22 4.0 On-Site Monitoring Techniques ....................................................................................... 25 4.1 Eddy Currents ............................................................................................................. 25 4.2 Electrical Time-Domain Reflectometry (TDR) .......................................................... 26 4.3 Infrared Thermography and Spectroscopy ................................................................. 26 4.4 Laser Scanning ........................................................................................................... 29 4.5 Nuclear Magnetic Resonance (NMR) Imaging .......................................................... 31 4.6 Microwave Radar ....................................................................................................... 33 4.7 Ground-Penetrating Radar (GPR) .............................................................................. 34 4.8 X-Ray, Gamma Ray, and Neutron Radiography ........................................................ 36 5.0 Remote Monitoring Techniques ....................................................................................... 39 5.1 Electro-Optical Imagery and Photogrammetry........................................................... 
39 5.2 Speckle Photography and Speckle Pattern Interferometry ......................................... 40 5.3 Interferometric Synthetic Aperture Radar (IfSAR) .................................................... 43 6.0 Exceptional Materials and Structures ............................................................................... 45 6.1 Fiber-Reinforced Polymer Composites ...................................................................... 45 7.0 Case Studies ...................................................................................................................... 49 7.1 Commodore Barry Bridge, Philadelphia, PA ............................................................. 49 7.2 Golden Gate Bridge, San Francisco, CA .................................................................... 50", "title": "" }, { "docid": "75abbacacec7a018fadf4829d1a3084d", "text": "BACKGROUND\nThe fertilizer use efficiency (FUE) of agricultural crops is generally low, which results in poor crop yields and low economic benefits to farmers. Among the various approaches used to enhance FUE, impregnation of mineral fertilizers with plant growth-promoting bacteria (PGPB) is attracting worldwide attention. The present study was aimed to improve growth, yield and nutrient use efficiency of wheat by bacterially impregnated mineral fertilizers.\n\n\nRESULTS\nResults of the pot study revealed that impregnation of diammonium phosphate (DAP) and urea with PGPB was helpful in enhancing the growth, yield, photosynthetic rate, nitrogen use efficiency (NUE) and phosphorus use efficiency (PUE) of wheat. However, the plants treated with F8 type DAP and urea, prepared by coating a slurry of PGPB (Bacillus sp. strain KAP6) and compost on DAP and urea granules at the rate of 2.0 g 100 g-1 fertilizer, produced better results than other fertilizer treatments. In this treatment, growth parameters including plant height, root length, straw yield and root biomass significantly (P ≤ 0.05) increased from 58.8 to 70.0 cm, 41.2 to 50.0 cm, 19.6 to 24.2 g per pot and 1.8 to 2.2 g per pot, respectively. The same treatment improved grain yield of wheat by 20% compared to unimpregnated DAP and urea (F0). Likewise, the maximum increase in photosynthetic rate, grain NP content, grain NP uptake, NUE and PUE of wheat were also recorded with F8 treatment.\n\n\nCONCLUSION\nThe results suggest that the application of bacterially impregnated DAP and urea is highly effective for improving growth, yield and FUE of wheat. © 2017 Society of Chemical Industry.", "title": "" }, { "docid": "b2768017b8db6d8d4d0697800a556a49", "text": "The recently proposed information bottleneck (IB) theory of deep nets suggests that during training, each layer attempts to maximize its mutual information (MI) with the target labels (so as to allow good prediction accuracy), while minimizing its MI with the input (leading to effective compression and thus good generalization). To date, evidence of this phenomenon has been indirect and aroused controversy due to theoretical and practical complications. In particular, it has been pointed out that the MI with the input is theoretically infinite in many cases of interest, and that the MI with the target is fundamentally difficult to estimate in high dimensions. As a consequence, the validity of this theory has been questioned. In this paper, we overcome these obstacles by two means. First, as previously suggested, we replace the MI with the input by a noise-regularized version, which ensures it is finite. 
As we show, this modified penalty in fact acts as a form of weight-decay regularization. Second, to obtain accurate (noise-regularized) MI estimates between an intermediate representation and the input, we incorporate the strong prior knowledge we have about their relation into the recently proposed MI estimator of Belghazi et al. (2018). With this scheme, we are able to stably train each layer independently to explicitly optimize the IB functional. Surprisingly, this leads to enhanced prediction accuracy, thus directly validating the IB theory of deep nets for the first time.", "title": "" }, { "docid": "4e29bdddbdeb5382347a3915dc7048de", "text": "Accuracy and robustness with respect to missing or corrupt input data are two key characteristics for any travel time prediction model that is to be applied in a real-time environment (e.g. for display on variable message signs on freeways). This article proposes a freeway travel time prediction framework that exhibits both qualities. The framework exploits a recurrent neural network topology, the so-called state-space neural network (SSNN), with preprocessing strategies based on imputation. Although the SSNN model is a neural network, its design (in terms of input and model selection) is not “black box” nor location-specific. Instead, it is based on the layout of the freeway stretch of interest. In this sense, the SSNN model combines the generality of neural network approaches with traffic-related (“white-box”) design. Robustness to missing data is tackled by means of simple imputation (data replacement) schemes, such as exponential forecasts and spatial interpolation. Although there are clear theoretical shortcomings to “simple” imputation schemes to remedy input failure, our results indicate that their use is justified in this particular application. The SSNN model appears to be robust to the “damage” done by these imputation schemes. This is true for both incidental (random) and structural input failure. We demonstrate that the SSNN travel time prediction framework yields accurate and robust travel time predictions on both synthetic and real data.", "title": "" } ]
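The travel-time passage directly above credits much of its robustness to simple imputation (data replacement) schemes, exponential forecasts and spatial interpolation, applied before the state-space neural network ever sees the data. The short Python/NumPy sketch below illustrates that preprocessing idea only; the function names, the smoothing factor alpha, and the order of the two fallbacks (temporal first, then spatial) are assumptions made for illustration, not the paper's exact scheme.

import numpy as np

def exponential_forecast(history, alpha=0.5):
    # replace a missing value with an exponentially smoothed forecast of its own recent history
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

def impute_detector_row(row, history, alpha=0.5):
    # row:     1-D array of current readings, one per detector along the freeway (NaN = failed input)
    # history: 2-D array (time x detectors) of recent past readings for the same detectors
    filled = row.copy()
    for i, v in enumerate(row):
        if np.isnan(v):
            past = history[:, i]
            past = past[~np.isnan(past)]
            if past.size:                      # temporal imputation: exponential forecast
                filled[i] = exponential_forecast(past, alpha)
    # spatial interpolation across neighbouring detectors for anything still missing
    idx = np.arange(len(filled))
    good = ~np.isnan(filled)
    if good.any():
        filled[~good] = np.interp(idx[~good], idx[good], filled[good])
    return filled

The repaired row of detector readings can then be fed to whatever downstream predictor is used, so that incidental or structural detector failures do not stall real-time prediction.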
scidocsrr
5ee1669c93fc3f9580aec5cc542d8c24
A Mutual Authentication and Key Establishment Scheme for M2M Communication in 6LoWPAN Networks
[ { "docid": "0dfba09dc9a01e4ebca16eb5688c81aa", "text": "Machine-to-Machine (M2M) refers to technologies with various applications. In order to provide the vision and goals of M2M, an M2M ecosystem with a service platform must be established by the key players in industrial domains so as to substantially reduce development costs and improve time to market of M2M devices and services. The service platform must be supported by M2M enabling technologies and standardization. In this paper, we present a survey of existing M2M service platforms and explore the various research issues and challenges involved in enabling an M2M service platform. We first classify M2M nodes according to their characteristics and required functions, and we then highlight the features of M2M traffic. With these in mind, we discuss the necessity of M2M platforms. By comparing and analyzing the existing approaches and solutions of M2M platforms, we identify the requirements and functionalities of the ideal M2M service platform. Based on these, we propose an M2M service platform (M2SP) architecture and its functionalities, and present the M2M ecosystem with this platform. Different application scenarios are given to illustrate the interaction between the components of the proposed platform. In addition, we discuss the issues and challenges of enabling technologies and standardization activities, and outline future research directions for the M2M network.", "title": "" }, { "docid": "da5562859bfed0057e0566679a4aca3d", "text": "Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention. M2M is a key enabling technology for the cyber-physical systems (CPSs). This paper explores CPS beyond M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe few particular use cases that motivate the development of the M2M communication primitives tailored to large-scale CPS. M2M communications in literature were considered in limited extent so far. The existing work is based on small-scale M2M models and centralized solutions. Different sources discuss different primitives. Few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and trillions of M2M devices, without sacrificing solution quality. The main paradigm shift is to design localized algorithms, where CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, were studied to illustrate new directions.", "title": "" } ]
[ { "docid": "4caaa5bf0ffbbf5c361680fbc4ad7d99", "text": "In this paper we present a pipeline for automatic detection of traffic signs in images. The proposed system can deal with high appearance variations, which typically occur in traffic sign recognition applications, especially with strong illumination changes and dramatic scale changes. Unlike most existing systems, our pipeline is based on interest regions extraction rather than a sliding window detection scheme. The proposed approach has been specialized and tested in three variants, each aimed at detecting one of the three categories of Mandatory, Prohibitory and Danger traffic signs. Our proposal has been evaluated experimentally within the German Traffic Sign Detection Benchmark competition.", "title": "" }, { "docid": "3c41bdaeaaa40481c8e68ad00426214d", "text": "Image captioning is an important task, applicable to virtual assistants, editing tools, image indexing, and support of the disabled. In recent years significant progress has been made in image captioning, using Recurrent Neural Networks powered by long-short-term-memory (LSTM) units. Despite mitigating the vanishing gradient problem, and despite their compelling ability to memorize dependencies, LSTM units are complex and inherently sequential across time. To address this issue, recent work has shown benefits of convolutional networks for machine translation and conditional image generation [9, 34, 35]. Inspired by their success, in this paper, we develop a convolutional image captioning technique. We demonstrate its efficacy on the challenging MSCOCO dataset and demonstrate performance on par with the LSTM baseline [16], while having a faster training time per number of parameters. We also perform a detailed analysis, providing compelling reasons in favor of convolutional language generation approaches.", "title": "" }, { "docid": "2a78ef9f2d3fb35e1595a6ffca20851b", "text": "Is AI antithetical to good user interface design? From the earliest times in the development of computers, activities in human-computer interaction (HCI) and AI have been intertwined. But as subfields of computer science, HCI and AI have always had a love-hate relationship. The goal of HCI is to make computers easier to use and more helpful to their users. The goal of artificial intelligence is to model human thinking and to embody those mechanisms in computers. How are these goals related? Some in HCI have seen these goals sometimes in opposition. They worry that the heuristic nature of many AI algorithms will lead to unreliability in the interface. They worry that AI’s emphasis on mimicking human decision-making functions might usurp the decision-making prerogative of the human user. These concerns are not completely without merit. There are certainly many examples of failed attempts to prematurely foist AI on the public. These attempts gave AI a bad name, at least at the time. But so too have there been failed attempts to popularize new HCI approaches. The first commercial versions of window systems, such as the Xerox Star and early versions of Microsoft Windows, weren’t well accepted at the time of their introduction. Later design iterations of window systems, such as the Macintosh and Windows 3.0, finally achieved success. Key was that these early failures did not lead their developers to conclude window systems were a bad idea. Researchers shouldn’t construe these (perceived) AI failures as a refutation of the idea of AI in interfaces. 
Modern PDA, smartphone, and tablet computers are now beginning to have quite usable handwriting recognition. Voice recognition is being increasingly employed on phones, and even in the noisy environment of cars. Animated agents, more polite, less intrusive, and better thought out, might also make a", "title": "" }, { "docid": "e1d635202eb482e49ff736fd37d161ac", "text": "Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.", "title": "" }, { "docid": "ca2e577e819ac49861c65bfe8d26f5a1", "text": "A design of a delay-based self-oscillating class-D power amplifier for piezoelectric actuators is presented and modelled. First-order and second-order configurations are discussed in detail, and analytical results reveal the stability criteria of a second-order system, which should be respected in the design. It is also shown that, if the second-order system converges, it tends to give correct pulse modulation with respect to the input modulation index. Experimental results show the effectiveness of this design procedure. For a piezoelectric load of 400 nF, powered by a 150 V 10 kHz sinusoidal signal, a total harmonic distortion (THD) of 4.3% is obtained.", "title": "" }, { "docid": "376d1c5d7ab0a9930e8d6da956c8f412", "text": "The accuracy of the clinical diagnosis of cutaneous melanoma with the unaided eye is only about 60%. Dermoscopy, a non-invasive, in vivo technique for the microscopic examination of pigmented skin lesions, has the potential to improve the diagnostic accuracy. Our objectives were to review previous publications, to compare the accuracy of melanoma diagnosis with and without dermoscopy, and to assess the influence of study characteristics on the diagnostic accuracy. We searched for publications between 1987 and 2000 and identified 27 studies eligible for meta-analysis. The diagnostic accuracy for melanoma was significantly higher with dermoscopy than without this technique (log odds ratio 4.0 [95% CI 3.0 to 5.1] versus 2.7 [1.9 to 3.4]; an improvement of 49%, p = 0.001). The diagnostic accuracy of dermoscopy significantly depended on the degree of experience of the examiners. Dermoscopy by untrained or less experienced examiners was no better than clinical inspection without dermoscopy. The diagnostic performance of dermoscopy improved when the diagnosis was made by a group of examiners in consensus and diminished as the prevalence of melanoma increased. A comparison of various diagnostic algorithms for dermoscopy showed no significant differences in their diagnostic performance. A thorough appraisal of the study characteristics showed that most of the studies were potentially influenced by verification bias. 
In conclusion, dermoscopy improves the diagnostic accuracy for melanoma in comparison with inspection by the unaided eye, but only for experienced examiners.", "title": "" }, { "docid": "cdf0880d9221e035c5ecf67db75d4b42", "text": "Photoacoustic signals are usually generated using bulky and expensive Q-switched Nd:YAG lasers, with limited scope for varying the pulse repetition frequency, wavelength and pulse width. An alternative would be to use laser diodes as excitation sources; these devices are compact, relatively inexpensive, and available in a wide variety of NIR wavelengths. Their pulse duration and repetition rates can also be varied arbitrarily enabling a wide range of time and frequency domain excitation methods to be employed. The main difficulty to overcome when using laser diodes for pulsed photoacoustic excitation is their low peak power compared to Q-switched lasers. However, the much higher repetition rate of laser diodes (∼ kHz) compared to many Q-switched laser systems (∼ tens of Hz) enables a correspondingly greater number of events to be acquired and signal averaged over a fixed time period. This offers the prospect of significantly increasing the signal-to-noise ratio (SNR) of the detected photoacoustic signal. Choosing the wavelength of the laser diode to be lower than that of the water absorption peak at 940nm, may also provide a significant advantage over a system lasing at 1064nm for measurements in tissue. If the output of a number of laser diodes is combined it then becomes possible, in principle, to obtain a SNR approaching that achievable with a Q-switched laser. It is also suggested that optimising the pulse duration of the laser diode may reduce the effects of frequency-dependent acoustic attenuation in tissue on the photoacoustic signal. To investigate this, a numerical model based on the Poisson solution to the wave equation was developed. To validate the model, a high peak power pulsed laser diode system was built. It was composed of a 905nm stacked array laser diode coupled to an optical fibre and driven by a high current laser diode driver. Measurements of the SNR of photoacoustic signals generated in a purely absorbing medium (ink) were made as a function of pulse duration. This preliminary study shows the potential for using laser diodes as excitation sources for photoacoustic applications in the biomedical field.", "title": "" }, { "docid": "a6cf86ffa90c74b7d7d3254c7d33685a", "text": "Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semistructured data as graphs where nodes correspond to primitives (parts, interest points, and segments) and edges characterize the relationships between these primitives. However, these nonvectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of--explicit/implicit--graph vectorization and embedding. This embedding process should be resilient to intraclass graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding that maps graphs into vector spaces. Our main contribution includes a new stochastic search procedure that efficiently parses a given graph and extracts/samples unlimitedly high-order graphlets. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. 
In order to build our graph representation, we measure the distribution of these graphlets into a given graph, using particular hash functions that efficiently assign sampled graphlets into isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have a positive impact on the performance of pattern comparison and recognition as corroborated through extensive experiments using standard benchmark databases.", "title": "" }, { "docid": "5591247b2e28f436da302757d3f82122", "text": "This paper proposes LPRNet end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA R © GeForceTMGTX 1080 and 1.3 ms/plate on Intel R © CoreTMi7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.", "title": "" }, { "docid": "9cc5fddebc5c45c4c7f5535136275076", "text": "This paper details the winning method in the IEEE GOLD category of the PHM psila08 Data Challenge. The task was to estimate the remaining useable life left of an unspecified complex system using a purely data driven approach. The method involves the construction of Multi-Layer Perceptron and Radial Basis Function networks for regression. A suitable selection of these networks has been successfully combined in an ensemble using a Kalman filter. The Kalman filter provides a mechanism for fusing multiple neural network model predictions over time. The essential initial stages of pre-processing and data exploration are also discussed.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "2f1690d7e1ee4aeca5be28faf80917fa", "text": "The millimeter wave (mmWave) bands offer the possibility of orders of magnitude greater throughput for fifth-generation (5G) cellular systems. However, since mmWave signals are highly susceptible to blockage, channel quality on any one mmWave link can be extremely intermittent. This paper implements a novel dual connectivity protocol that enables mobile user equipment devices to maintain physical layer connections to 4G and 5G cells simultaneously. 
A novel uplink control signaling system combined with a local coordinator enables rapid path switching in the event of failures on any one link. This paper provides the first comprehensive end-to-end evaluation of handover mechanisms in mmWave cellular systems. The simulation framework includes detailed measurement-based channel models to realistically capture spatial dynamics of blocking events, as well as the full details of Medium Access Control, Radio Link Control, and transport protocols. Compared with conventional handover mechanisms, this paper reveals significant benefits of the proposed method under several metrics.", "title": "" }, { "docid": "77df04a0f997f402ae5771db5acda9db", "text": "0198-9715/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.compenvurbsys.2011.05.003 ⇑ Corresponding author. Tel.: +1 212 772 4658; fax E-mail address: [email protected] (H. Gong). 1 Present address: MTA Bus Company, Metropolitan Broadway, New York, NY 10004, United States. Handheld GPS provides a new technology to trace people’s daily travels and has been increasingly used for household travel surveys in major cities worldwide. However, methodologies have not been developed to successfully manage the enormous amount of data generated by GPS, especially in a complex urban environment such as New York City where urban canyon effects are significant and transportation networks are complicated. We develop a GIS algorithm that automatically processes the data from GPSbased travel surveys and detects five travel modes (walk, car, bus, subway, and commuter rail) from a multimodal transportation network in New York City. The mode detection results from the GIS algorithm are checked against the travel diaries from two small handheld GPS surveys. The combined success rate is a promising 82.6% (78.9% for one survey and 86.0% for another). Challenges we encountered in the mode detection process, ways we developed to meet these challenges, as well as possible future improvement to the GPS/GIS method are discussed in the paper, in order to provide a much-needed methodology to process GPS-based travel data for other cities. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "38f6aaf5844ddb6e4ed0665559b7f813", "text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.", "title": "" }, { "docid": "658ff079f4fc59ee402a84beecd77b55", "text": "Mitochondria are master regulators of metabolism. 
Mitochondria generate ATP by oxidative phosphorylation using pyruvate (derived from glucose and glycolysis) and fatty acids (FAs), both of which are oxidized in the Krebs cycle, as fuel sources. Mitochondria are also an important source of reactive oxygen species (ROS), creating oxidative stress in various contexts, including in the response to bacterial infection. Recently, complex changes in mitochondrial metabolism have been characterized in mouse macrophages in response to varying stimuli in vitro. In LPS and IFN-γ-activated macrophages (M1 macrophages), there is decreased respiration and a broken Krebs cycle, leading to accumulation of succinate and citrate, which act as signals to alter immune function. In IL-4-activated macrophages (M2 macrophages), the Krebs cycle and oxidative phosphorylation are intact and fatty acid oxidation (FAO) is also utilized. These metabolic alterations in response to the nature of the stimulus are proving to be determinants of the effector functions of M1 and M2 macrophages. Furthermore, reprogramming of macrophages from M1 to M2 can be achieved by targeting metabolic events. Here, we describe the role that metabolism plays in macrophage function in infection and immunity, and propose that reprogramming with metabolic inhibitors might be a novel therapeutic approach for the treatment of inflammatory diseases.", "title": "" }, { "docid": "4bfac9df41641b88fb93f382202c6e85", "text": "The objective was to evaluate the clinical efficacy of chemomechanical preparation of the root canals with sodium hypochlorite and interappointment medication with calcium hydroxide in the control of root canal infection and healing of periapical lesions. Fifty teeth diagnosed with chronic apical periodontitis were randomly allocated to one of three treatments: Single visit (SV group, n = 20), calcium hydroxide for one week (CH group n = 18), or leaving the canal empty but sealed for one week (EC group, n = 12). Microbiological samples were taken to monitor the infection during treatment. Periapical healing was controlled radiographically following the change in the periapical index at 52 wk and analyzed using one-way ANOVA. All cases showed microbiological growth in the beginning of the treatment. After mechanical preparation and irrigation with sodium hypochlorite in the first appointment, 20 to 33% of the cases showed growth. At the second appointment 33% of the cases in the CH group revealed bacteria, whereas the EC group showed remarkably more culture positive cases (67%). Sodium hypochlorite was effective also at the second appointment and only two teeth remained culture positive. Only minor differences in periapical healing were observed between the treatment groups. However, bacterial growth at the second appointment had a significant negative impact on healing of the periapical lesion (p < 0.01). The present study indicates good clinical efficacy of sodium hypochlorite irrigation in the control of root canal infection. Calcium hydroxide dressing between the appointments did not show the expected effect in disinfection the root canal system and treatment outcome, indicating the need to develop more efficient inter-appointment dressings.", "title": "" }, { "docid": "4e8a27fd2e56dbc33e315bc9cb462239", "text": "Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the function of VASs to overcome the limitations of response styles to Likert-type scales has not yet been addressed. 
Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as that the total score of ipsative measures is a constant that cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic method of VAS-RRPs, as well as how they overcome the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that VAS-RRPs improved reliability, response style bias, and parameter recovery. Finally, we have also designed a VAS-RRP Generator for researchers' construction and administration of their own VAS-RRPs.", "title": "" }, { "docid": "0c8c05b492e32407339843badeec4a20", "text": "In contemplating the function and origin of music, a number of scholars have considered whether music might be an evolutionary adaptation. This article reviews the basic arguments related to evolutionary claims for music. Although evolutionary theories about music remain wholly speculative, musical behaviors satisfy a number of basic conditions, which suggests that there is indeed merit in pursuing possible evolutionary accounts.", "title": "" }, { "docid": "07c199affe0b084989b28ff27eb068d8", "text": "Microwave circulator is an important ferrite device which is widely used in wireless transceivers. This paper presents design and simulation of X-Band microstrip circulator. Major application of microstrip junction circulator presented here is as duplexer in RADAR. The circulator designed here is centered at 9.6 GHz with 800 MHz bandwidth. CST Microwave Studio Suite is used as simulation software. Here, the design equations given by Fay and Comstock are followed and prototype of X-band microstrip circulator is prepared. Yttrium iron garnet (YIG) is used as ferrite material. Isolation and return loss of more than 20 dB, and insertion loss of less than 0.1 dB are achieved in simulation results.", "title": "" }, { "docid": "519241b84a8a18cae31a35a291d3bce1", "text": "Recent work in neural machine translation has shown promising performance, but the most effective architectures do not scale naturally to large vocabulary sizes. We propose and compare three variable-length encoding schemes that represent a large vocabulary corpus using a much smaller vocabulary with no loss in information. Common words are unaffected by our encoding, but rare words are encoded using a sequence of two pseudo-words. Our method is simple and effective: it requires no complete dictionaries, learning procedures, increased training time, changes to the model, or new parameters. Compared to a baseline that replaces all rare words with an unknown word symbol, our best variable-length encoding strategy improves WMT English-French translation performance by up to 1.7 BLEU.", "title": "" } ]
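The last passage in the list above proposes representing a large vocabulary with a much smaller one by leaving common words untouched and encoding each rare word as a sequence of two pseudo-words, with no loss of information. The Python sketch below is one plausible reversible instantiation of that idea; the pseudo-token format (<R1_i>, <R2_j>), the cut-off keep, and n_pseudo are invented here for illustration and are not the specific encoding schemes compared in that work.

from collections import Counter

def build_codec(corpus_tokens, keep=30000, n_pseudo=512):
    # common words are kept as-is; every rare word gets a unique, reversible (hi, lo) pair
    freq = Counter(corpus_tokens)
    common = {w for w, _ in freq.most_common(keep)}
    rare = sorted(w for w in freq if w not in common)
    code = {w: (i // n_pseudo, i % n_pseudo) for i, w in enumerate(rare)}
    decode = {v: k for k, v in code.items()}
    return common, code, decode

def encode(tokens, common, code):
    out = []
    for w in tokens:
        if w in common:
            out.append(w)
        elif w in code:
            hi, lo = code[w]
            out.extend([f"<R1_{hi}>", f"<R2_{lo}>"])   # two pseudo-words per rare word
        else:
            out.append("<unk>")                        # word unseen when the codec was built
    return out

def decode_seq(tokens, decode):
    out, i = [], 0
    while i < len(tokens):
        if tokens[i].startswith("<R1_") and i + 1 < len(tokens) and tokens[i + 1].startswith("<R2_"):
            hi = int(tokens[i][4:-1]); lo = int(tokens[i + 1][4:-1])
            out.append(decode.get((hi, lo), "<unk>"))
            i += 2
        else:
            out.append(tokens[i]); i += 1
    return out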
scidocsrr
2bf693f9482c65571fdcb5181fa46cd1
Real-time forest fire detection with wireless sensor networks
[ { "docid": "f7b8956748e8c19468490f35ed764e4e", "text": "We show how the database community’s notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data-reduction tool; networking approaches, however, have focused on application specific solutions, whereas our innetwork aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and", "title": "" } ]
[ { "docid": "552a1dae3152fcc2c19a83eb26bc1021", "text": "Several new algorithms for camera-based fall detection have been proposed in the literature recently, with the aim to monitor older people at home so nurses or family members can be warned in case of a fall incident. However, these algorithms are evaluated almost exclusively on data captured in controlled environments, under optimal conditions (simple scenes, perfect illumination and setup of cameras), and with falls simulated by actors. In contrast, we collected a dataset based on real life data, recorded at the place of residence of four older persons over several months. We showed that this poses a significantly harder challenge than the datasets used earlier. The image quality is typically low. Falls are rare and vary a lot both in speed and nature. We investigated the variation in environment parameters and context during the fall incidents. We found that various complicating factors, such as moving furniture or the use of walking aids, are very common yet almost unaddressed in the literature. Under such circumstances and given the large variability of the data in combination with the limited number of examples available to train the system, we posit that simple yet robust methods incorporating, where available, domain knowledge (e.g. the fact that the background is static or that a fall usually involves a downward motion) seem to be most promising. Based on these observations, we propose a new fall detection system. It is based on background subtraction and simple measures extracted from the dominant foreground object such as aspect ratio, fall angle and head speed. We discuss the results obtained, with special emphasis on particular difficulties encountered under real world circumstances.", "title": "" }, { "docid": "2b7d91c38a140628199cbdbee65c008a", "text": "Edges in man-made environments, grouped according to vanishing point directions, provide single-view constraints that have been exploited before as a precursor to both scene understanding and camera calibration. A Bayesian approach to edge grouping was proposed in the \"Manhattan World\" paper by Coughlan and Yuille, where they assume the existence of three mutually orthogonal vanishing directions in the scene. We extend the thread of work spawned by Coughlan and Yuille in several significant ways. We propose to use the expectation maximization (EM) algorithm to perform the search over all continuous parameters that influence the location of the vanishing points in a scene. Because EM behaves well in high-dimensional spaces, our method can optimize over many more parameters than the exhaustive and stochastic algorithms used previously for this task. Among other things, this lets us optimize over multiple groups of orthogonal vanishing directions, each of which induces one additional degree of freedom. EM is also well suited to recursive estimation of the kind needed for image sequences and/or in mobile robotics. We present experimental results on images of \"Atlanta worlds\", complex urban scenes with multiple orthogonal edge-groups, that validate our approach. We also show results for continuous relative orientation estimation on a mobile robot.", "title": "" }, { "docid": "a7e1d937d17e46bed14158776785bce8", "text": "Botnet detection represents one of the most crucial prerequisites of successful botnet neutralization. 
This paper explores how accurate and timely detection can be achieved by using supervised machine learning as the tool of inferring about malicious botnet traffic. In order to do so, the paper introduces a novel flow-based detection system that relies on supervised machine learning for identifying botnet network traffic. For use in the system we consider eight highly regarded machine learning algorithms, indicating the best performing one. Furthermore, the paper evaluates how much traffic needs to be observed per flow in order to capture the patterns of malicious traffic. The proposed system has been tested through the series of experiments using traffic traces originating from two well-known P2P botnets and diverse non-malicious applications. The results of experiments indicate that the system is able to accurately and timely detect botnet traffic using purely flow-based traffic analysis and supervised machine learning. Additionally, the results show that in order to achieve accurate detection traffic flows need to be monitored for only a limited time period and number of packets per flow. This indicates a strong potential of using the proposed approach within a future on-line detection framework.", "title": "" }, { "docid": "3ab831fdb5da974fa56ad412882a4283", "text": "Irregular streaks are important clues for Melanoma (a potentially fatal form of skin cancer) diagnosis using dermoscopy images. This paper extends our previous algorithm to identify the absence or presence of streaks in a skin lesions, by further analyzing the appearance of detected streak lines, and performing a three-way classification for streaks, Absent, Regular, and Irregular, in a pigmented skin lesion. In addition, the directional pattern of detected lines is analyzed to extract their orientation features in order to detect the underlying pattern. The method uses a graphical representation to model the geometric pattern of valid streaks and the distribution and coverage of the structure. Using these proposed features of the valid streaks along with the color and texture features of the entire lesion, an accuracy of 76.1% and weighted average area under ROC curve (AUC) of 85% is achieved for classifying dermoscopy images into streaks Absent, Regular, or Irregular on 945 images compiled from atlases and the internet without any exclusion criteria. This challenging dataset is the largest validation dataset for streaks detection and classification published to date. The data set has also been applied to the two-class sub-problems of Absent/Present classification (accuracy of 78.3% with AUC of 83.2%) and to Regular/Irregular classification (accuracy 83.6% with AUC of 88.9%). When the method was tested on a cleaned subset of 300 images randomly selected from the 945 images, the AUC increased to 91.8%, 93.2% and 90.9% for the Absent/Regular/Irregular, Absent/Present, and Regular/Irregular problems, respectively.", "title": "" }, { "docid": "fb1f467ab11bb4c01a9e410bf84ac258", "text": "The modular arrangement of the neocortex is based on the cell minicolumn: a self-contained ecosystem of neurons and their afferent, efferent, and interneuronal connections. The authors' preliminary studies indicate that minicolumns in the brains of autistic patients are narrower, with an altered internal organization. More specifically, their minicolumns reveal less peripheral neuropil space and increased spacing among their constituent cells. 
The peripheral neuropil space of the minicolumn is the conduit, among other things, for inhibitory local circuit projections. A defect in these GABAergic fibers may correlate with the increased prevalence of seizures among autistic patients. This article expands on our initial findings by arguing for the specificity of GABAergic inhibition in the neocortex as being focused around its mini- and macrocolumnar organization. The authors conclude that GABAergic interneurons are vital to proper minicolumnar differentiation and signal processing (e.g., filtering capacity of the neocortex), thus providing a putative correlate to autistic symptomatology.", "title": "" }, { "docid": "6cab942e78a957f3217971dd4721e3b2", "text": "(Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics—which has traditionally focused on ethical issues surrounding humans’ use of machines—machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.", "title": "" }, { "docid": "b8f50ba62325ffddcefda7030515fd22", "text": "The following statement is intended to provide an understanding of the governance and legal structure of the University of Sheffield. The University is an independent corporation whose legal status derives from a Royal Charter granted in 1905. It is an educational charity, with exempt status, regulated by the Office for Students in its capacity as Principal Regulator. The University has charitable purposes and applies them for the public benefit. It must comply with the general law of charity. The University’s objectives, powers and governance framework are set out in its Charter and supporting Statutes and Regulations.", "title": "" }, { "docid": "4645d0d7b1dfae80657f75d3751ef72a", "text": "Machine learning approaches are increasingly successful in image-based diagnosis, disease prognosis, and risk assessment. This paper highlights new research directions and discusses three main challenges related to machine learning in medical imaging: coping with variation in imaging protocols, learning from weak labels, and interpretation and evaluation of results.", "title": "" }, { "docid": "c15f36dccebee50056381c41e6ddb2dc", "text": "Instance-level object segmentation is an important yet under-explored task. Most of state-of-the-art methods rely on region proposal methods to extract candidate segments and then utilize object classification to produce final results. Nonetheless, generating reliable region proposals itself is a quite challenging and unsolved task. In this work, we propose a Proposal-Free Network (PFN) to address the instance-level object segmentation problem, which outputs the numbers of instances of different categories and the pixel-level information on i) the coordinates of the instance bounding box each pixel belongs to, and ii) the confidences of different categories for each pixel, based on pixel-to-pixel deep convolutional neural network. 
All the outputs together, by using any off-the-shelf clustering method for simple post-processing, can naturally generate the ultimate instance-level object segmentation results. The whole PFN can be easily trained without the requirement of a proposal generation stage. Extensive evaluations on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate the effectiveness of the proposed PFN solution without relying on any proposal generation methods.", "title": "" }, { "docid": "6559d77de48d153153ce77b0e2969793", "text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.", "title": "" }, { "docid": "2c68945d68f8ccf90648bec7fd5b0547", "text": "The number of seniors and other people needing daily assistance continues to increase, but the current human resources available to achieve this in the coming years will certainly be insufficient. To remedy this situation, smart habitats have emerged as an innovative avenue for supporting needs of daily assistance. Smart homes aim to provide cognitive assistance in decision making by giving hints, suggestions, and reminders, with different kinds of effectors, to residents. To implement such technology, the first challenge to overcome is the recognition of ongoing activity. Some researchers have proposed solutions based on binary sensors or cameras, but these types of approaches infringed on residents' privacy. A new affordable activity-recognition system based on passive RFID technology can detect errors related to cognitive impairment. The entire system relies on an innovative model of elliptical trilateration with several filters, as well as on an ingenious representation of activities with spatial zones. The authors have deployed the system in a real smart-home prototype; this article renders the results of a complete set of experiments conducted on this new activity-recognition system with real scenarios.", "title": "" }, { "docid": "c51e1b845d631e6d1b9328510ef41ea0", "text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks. In this work, we perform extensive modeling and experimentation on two 20-node TelosB motes testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error about 20-25% (and 80 percentile error 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. 
The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. Somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in literature) and 'graded' (more realistic). We show via solving the one shot scheduling problem, that the graded version can improve `expected throughput' over the thresholded version by scheduling imperfect links.", "title": "" }, { "docid": "af1257e27c0a6010a902e78dc8301df4", "text": "A 20-MHz to 3-GHz wide-range multiphase delay-locked loop (DLL) has been realized in 90-nm CMOS technology. The proposed delay cell extends the operation frequency range. A scaling circuit is adopted to lower the large delay gain when the frequency of the input clock is low. The core area of this DLL is 0.005 mm2. The measured power consumption values are 0.4 and 3.6 mW for input clocks of 20 MHz and 3 GHz, respectively. The measured peak-to-peak and root-mean-square jitters are 2.3 and 16 ps at 3 GHz, respectively.", "title": "" }, { "docid": "2fb6392a161cf64b1fe009dd8db99857", "text": "Humans have an incredible capacity to learn properties of objects by pure tactile exploration with their two hands. With robots moving into human-centred environment, tactile exploration becomes more and more important as vision may be occluded easily by obstacles or fail because of different illumination conditions. In this paper, we present our first results on bimanual compliant tactile exploration, with the goal to identify objects and grasp them. An exploration strategy is proposed to guide the motion of the two arms and fingers along the object. From this tactile exploration, a point cloud is obtained for each object. As the point cloud is intrinsically noisy and un-uniformly distributed, a filter based on Gaussian Processes is proposed to smooth the data. This data is used at runtime for object identification. Experiments on an iCub humanoid robot have been conducted to validate our approach.", "title": "" }, { "docid": "ede8b89c37c10313a84ce0d0d21af8fc", "text": "The adaptive fuzzy and fuzzy neural models are being widely used for identification of dynamic systems. This paper describes different fuzzy logic and neural fuzzy models. The robustness of models has further been checked by Simulink implementation of the models with application to the problem of system identification. The approach is to identify the system by minimizing the cost function using parameters update.", "title": "" }, { "docid": "c3e8960170cb72f711263e7503a56684", "text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. 
The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.", "title": "" }, { "docid": "088308b06392780058dd8fa1686c5c35", "text": "Every company should be able to demonstrate own efficiency and effectiveness by used metrics or other processes and standards. Businesses may be missing a direct comparison with competitors in the industry, which is only possible using appropriately chosen instruments, whether financial or non-financial. The main purpose of this study is to describe and compare the approaches of the individual authors. to find metric from reviewed studies which organization use to measuring own marketing activities with following separating into financial metrics and non-financial metrics. The paper presents advance in useable metrics, especially financial and non-financial metrics. Selected studies, focusing on different branches and different metrics, were analyzed by the authors. The results of the study is describing relevant metrics to prove efficiency in varied types of organizations in connection with marketing effectiveness. The studies also outline the potential methods for further research focusing on the application of metrics in a diverse environment. The study contributes to a clearer idea of how to measure performance and effectiveness.", "title": "" }, { "docid": "f267030a7ff5a8b4b87b9b5418ec3c28", "text": "Vision systems employing region segmentation by color are crucial in real-time mobile robot applications, such as RoboCup[1], or other domains where interaction with humans or a dynamic world is required. Traditionally, systems employing real-time color-based segmentation are either implemented in hardware, or as very specific software systems that take advantage of domain knowledge to attain the necessary efficiency. 
However, we have found that with careful attention to algorithm efficiency, fast color image segmentation can be accomplished using commodity image capture and CPU hardware. Our paper describes a system capable of tracking several hundred regions of up to 32 colors at 30 Hertz on general purpose commodity hardware. The software system is composed of four main parts; a novel implementation of a threshold classifier, a merging system to form regions through connected components, a separation and sorting system that gathers various region features, and a top down merging heuristic to approximate perceptual grouping. A key to the efficiency of our approach is a new method for accomplishing color space thresholding that enables a pixel to be classified into one or more of up to 32 colors using only two logical AND operations. A naive approach could require up to 192 comparisons for the same classification. The algorithms and representations are described, as well as descriptions of three applications in which it has been used.", "title": "" }, { "docid": "7b5f0c88eaf8c23b8e2489e140d0022f", "text": "Deep learning has been integrated into several existing left ventricle (LV) endocardium segmentation methods to yield impressive accuracy improvements. However, challenges remain for segmentation of LV epicardium due to its fuzzier appearance and complications from the right ventricular insertion points. Segmenting the myocardium collectively (i.e., endocardium and epicardium together) confers the potential for better segmentation results. In this work, we develop a computational platform based on deep learning to segment the whole LV myocardium simultaneously from a cardiac magnetic resonance (CMR) image. The deep convolutional network is constructed using Caffe platform, which consists of 6 convolutional layers, 2 pooling layers, and 1 de-convolutional layer. A preliminary result with Dice metric of 0.75±0.04 is reported on York MR dataset. While in its current form, our proposed one-step deep learning method cannot compete with state-of-art myocardium segmentation methods, it delivers promising first pass segmentation results.", "title": "" } ]
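The real-time color segmentation passage above claims that a pixel can be tested against up to 32 color classes with only two logical AND operations, where a naive approach could need up to 192 comparisons. The Python/NumPy sketch below shows the usual bitmask formulation of that thresholding trick, assuming axis-aligned YUV threshold boxes per class; the class names and threshold values in the usage lines are made up for illustration.

import numpy as np

def build_channel_masks(class_ranges):
    # class_ranges: list of (y_lo, y_hi, u_lo, u_hi, v_lo, v_hi), one tuple per color class (up to 32)
    y_mask = np.zeros(256, dtype=np.uint32)
    u_mask = np.zeros(256, dtype=np.uint32)
    v_mask = np.zeros(256, dtype=np.uint32)
    for bit, (ylo, yhi, ulo, uhi, vlo, vhi) in enumerate(class_ranges):
        y_mask[ylo:yhi + 1] |= np.uint32(1 << bit)
        u_mask[ulo:uhi + 1] |= np.uint32(1 << bit)
        v_mask[vlo:vhi + 1] |= np.uint32(1 << bit)
    return y_mask, u_mask, v_mask

def classify(y, u, v, masks):
    # a pixel belongs to every class whose bit survives the two bitwise ANDs
    y_mask, u_mask, v_mask = masks
    return y_mask[y] & u_mask[u] & v_mask[v]

# usage: class 0 is "orange ball", class 1 is "green field" (threshold values are invented)
masks = build_channel_masks([(40, 200, 120, 160, 150, 240),
                             (30, 220, 60, 110, 60, 120)])
print(bin(classify(90, 140, 200, masks)))   # bit 0 set -> matches the "orange ball" class

Because the per-channel class memberships are precomputed into three 256-entry lookup tables, the per-pixel cost is two table lookups beyond the first plus two ANDs, which is what makes tracking hundreds of regions at frame rate feasible on commodity CPUs.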
scidocsrr
badf17a8cc833f40d3bbaceef88d21f9
Dynamic spectrum access in cognitive radio networks with RF energy harvesting
[ { "docid": "10187e22397b1c30b497943764d32c34", "text": "Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.", "title": "" } ]
[ { "docid": "be01fc6b7c89259c1aa06ccbfb6402c3", "text": "Nowadays, automakers have invested in new technologies in order to improve the efficiency of their products. Giant automakers have taken an important step toward achieving this objective by designing continuously variable transmission systems (CVT) to continuously adapt the power of the engine with the external load according to the optimum efficiency curve of engine and reducing fuel consumption; beside, making smooth start up and removing the shock caused by changing the gear ratio and making more pleasurable driving. Considering the specifications of one of Iranian automaker products (the Saipa Pride 131), a CVT with a metal pushing belt and variable pulleys have been designed to replace its current manual transmission system. The necessary parts and components for the CVT have been determined and considering the necessary constraints, its mechanism and components have been designed.", "title": "" }, { "docid": "1039532ef4dfbb7e0d04b25ad99682cb", "text": "Communication of affect across a distance is not well supported by current technology, despite its importance to interpersonal interaction in modern lifestyles. Touch is a powerful conduit for emotional connectedness, and thus mediating haptic (touch) displays have been proposed to address this deficiency; but suitable evaluative methodology has been elusive. In this paper, we offer a first, structured examination of a design space for haptic support of remote affective communication, by analyzing the space and then comparing haptic models designed to manipulate its key dimensions. In our study, dyads (intimate pairs or strangers) are asked to communicate specified emotions using a purely haptic link that consists of virtual models rendered on simple knobs. These models instantiate both interaction metaphors of varying intimacy, and representations of virtual interpersonal distance. Our integrated objective and subjective observations imply that emotion can indeed be communicated through this medium, and confirm that the factors examined influence emotion communication performance as well as preference, comfort and connectedness. The proposed design space and the study results have implications for future efforts to support affective communication using the haptic modality, and the study approach comprises a first model for systematic evaluation of haptically expressed affect. r 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "21daaa29b6ff00af028f3f794b0f04b7", "text": "During the last years, we are experiencing the mushrooming and increased use of web tools enabling Internet users to both create and distribute content (multimedia information). These tools referred to as Web 2.0 technologies-applications can be considered as the tools of mass collaboration, since they empower Internet users to actively participate and simultaneously collaborate with other Internet users for producing, consuming and diffusing the information and knowledge being distributed through the Internet. In other words, Web 2.0 tools do nothing more than realising and exploiting the full potential of the genuine concept and role of the Internet (i.e. the network of the networks that is created and exists for its users). The content and information generated by users of Web 2.0 technologies are having a tremendous impact not only on the profile, expectations and decision making behaviour of Internet users, but also on e-business model that businesses need to develop and/or adapt. 
The tourism industry is not an exception from such developments. On the contrary, as information is the lifeblood of the tourism industry the use and diffusion of Web 2.0 technologies have a substantial impact of both tourism demand and supply. Indeed, many new types of tourism cyber-intermediaries have been created that are nowadays challenging the e-business model of existing cyberintermediaries that only few years ago have been threatening the existence of intermediaries!. In this vein, the purpose of this article is to analyse the major applications of Web 2.0 technologies in the tourism and hospitality industry by presenting their impact on both demand and supply.", "title": "" }, { "docid": "c7f465088265f34fe798bca8994e98fe", "text": "Purpose – The purpose of this paper is to foster a common understanding of business process management (BPM) by proposing a set of ten principles that characterize BPM as a research domain and guide its successful use in organizational practice. Design/methodology/approach – The identification and discussion of the principles reflects our viewpoint, which was informed by extant literature and focus groups, including 20 BPM experts from academia and practice. Findings – We identify ten principles which represent a set of capabilities essential for mastering contemporary and future challenges in BPM. Their antonyms signify potential roadblocks and bad practices in BPM. We also identify a set of open research questions that can guide future BPM research. Research limitation/implication – Our findings suggest several areas of research regarding each of the identified principles of good BPM. Also, the principles themselves should be systematically and empirically examined in future studies. Practical implications – Our findings allow practitioners to comprehensively scope their BPM initiatives and provide a general guidance for BPM implementation. Moreover, the principles may also serve to tackle contemporary issues in other management areas. Originality/value – This is the first paper that distills principles of BPM in the sense of both good and bad practice recommendations. The value of the principles lies in providing normative advice to practitioners as well as in identifying open research areas for academia, thereby extending the reach and richness of BPM beyond its traditional frontiers.", "title": "" }, { "docid": "bbfa632dc8e262fd30addc3ac97f1501", "text": "Chemical Organization Theory (COT) is a recently developed formalism inspired by chemical reactions. Because of its simplicity, generality and power, COT seems able to tackle a wide variety of problems in the analysis of complex, self-organizing systems across multiple disciplines. The elements of the formalism are resources and reactions, where a reaction (which has the form a + b + ... → c + d +...) maps a combination of resources onto a new combination. The resources on the input side are “consumed” by the reaction, which “produces” the resources on the output side. Thus, a reaction represents an elementary process that transforms resources into new resources. Reaction networks tend to self-organize into invariant subnetworks, called “organizations”, which are attractors of their dynamics. These are characterized by closure (no new resources are added) and self-maintenance (no existing resources are lost). Thus, they provide a simple model of autopoiesis: the organization persistently recreates its own components. 
Organizations can be more or less resilient in the face of perturbations, depending on properties such as the size of their basin of attraction or the redundancy of their reaction pathways. Concrete applications of organizations can be found in autocatalytic cycles, metabolic or genetic regulatory networks, ecosystems, sustainable development, and social systems.", "title": "" }, { "docid": "0d27f38d701e3ed5e4efcdb2f9043e44", "text": "BACKGROUND\nThe mechanical, rheological, and pharmacological properties of hyaluronic acid (HA) gels differ by their proprietary crosslinking technologies.\n\n\nOBJECTIVE\nTo examine the different properties of a range of HA gels using simple and easily reproducible laboratory tests to better understand their suitability for particular indications.\n\n\nMETHODS AND MATERIALS\nHyaluronic acid gels produced by one of 7 different crosslinking technologies were subjected to tests for cohesivity, resistance to stretch, and microscopic examination. These 7 gels were: non-animal stabilized HA (NASHA® [Restylane®]), 3D Matrix (Surgiderm® 24 XP), cohesive polydensified matrix (CPM® [Belotero® Balance]), interpenetrating network-like (IPN-like [Stylage® M]), Vycross® (Juvéderm Volbella®), optimal balance technology (OBT® [Emervel Classic]), and resilient HA (RHA® [Teosyal Global Action]).\n\n\nRESULTS\nCohesivity varied for the 7 gels, with NASHA being the least cohesive and CPM the most cohesive. The remaining gels could be described as partially cohesive. The resistance to stretch test confirmed the cohesivity findings, with CPM having the greatest resistance. Light microscopy of the 7 gels revealed HA particles of varying size and distribution. CPM was the only gel to have no particles visible at a microscopic level.\n\n\nCONCLUSION\nHyaluronic acid gels are produced with a range of different crosslinking technologies. Simple laboratory tests show how these can influence a gel's behavior, and can help physicians select the optimal product for a specific treatment indication. Versions of this paper have been previously published in French and in Dutch in the Belgian journal Dermatologie Actualité. Micheels P, Sarazin D, Tran C, Salomon D. Un gel d'acide hyaluronique est-il semblable à son concurrent? Derm-Actu. 2015;14:38-43. J Drugs Dermatol. 2016;15(5):600-606..", "title": "" }, { "docid": "586d89b6d45fd49f489f7fb40c87eb3a", "text": "Little research has examined the impacts of enterprise resource planning (ERP) systems implementation on job satisfaction. Based on a 12-month study of 2,794 employees in a telecommunications firm, we found that ERP system implementation moderated the relationships between three job characteristics (skill variety, autonomy, and feedback) and job satisfaction. Our findings highlight the key role that ERP system implementation can have in altering wellestablished relationships in the context of technology-enabled organizational change situations. This work also extends research on technology diffusion by moving beyond a focus on technology-centric outcomes, such as system use, to understanding broader job outcomes. Carol Saunders was the accepting senior editor for this paper.", "title": "" }, { "docid": "e2f57214cd2ec7b109563d60d354a70f", "text": "Despite the recent successes in machine learning, there remain many open challenges. Arguably one of the most important and interesting open research problems is that of data efficiency. 
Supervised machine learning models, and especially deep neural networks, are notoriously data hungry, often requiring millions of labeled examples to achieve desired performance. However, labeled data is often expensive or difficult to obtain, hindering advances in interesting and important domains. What avenues might we pursue to increase the data efficiency of machine learning models? One approach is semi-supervised learning. In contrast to labeled data, unlabeled data is often easy and inexpensive to obtain. Semi-supervised learning is concerned with leveraging unlabeled data to improve performance in supervised tasks. Another approach is active learning: in the presence of a labeling mechanism (oracle), how can we choose examples to be labeled in a way that maximizes the gain in performance? In this thesis we are concerned with developing models that enable us to improve data efficiency of powerful models by jointly pursuing both of these approaches. Deep generative models parameterized by neural networks have emerged recently as powerful and flexible tools for unsupervised learning. They are especially useful for modeling high-dimensional and complex data. We propose a deep generative model with a discriminative component. By including the discriminative component in the model, after training is complete the model is used for classification rather than variational approximations. The model further includes stochastic inputs of arbitrary dimension for increased flexibility and expressiveness. We leverage the stochastic layer to learn representations of the data which naturally accommodate semi-supervised learning. We develop an efficient Gibbs sampling procedure to marginalize the stochastic inputs while inferring labels. We extend the model to include uncertainty in the weights, allowing us to explicitly capture model uncertainty, and demonstrate how this allows us to use the model for active learning as well as semi-supervised learning. I would like to dedicate this thesis to my loving wife, parents, and sister . . .", "title": "" }, { "docid": "6087e066b04b9c3ac874f3c58979f89a", "text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.", "title": "" }, { "docid": "8a9a4768f10e1d89280753db9bf298cc", "text": "characteristics of method execution that the agent would want to maximize. So a higher quality method is better whereas a lower cost method is usually preferred. 
If each outcome oi has a quality distribution ((qi,1, pi,1), (qi,2, pi,2), ..., (qi,m, pi,m)), the probability that it will execute with quality q is computed as [24]: Pi(q) = Σ{j : qi,j = q} pi,j, i.e., the sum of the probabilities pi,j over all j for which qi,j = q. The expected quality E(q) is computed as [24]: E(q) = Σj pi,j · qi,j.", "title": "" }, { "docid": "76070cda75614ae4b1e3fe53703e7a43", "text": "'Emotion in Motion' is an experiment designed to understand the emotional reaction of people to a variety of musical excerpts, via self-report questionnaires and the recording of electrodermal response (EDR) and pulse oximetry (HR) signals. The experiment ran for 3 months as part of a public exhibition, having nearly 4000 participants and over 12000 listening samples. This paper presents the methodology used by the authors to approach this research, as well as preliminary results derived from the self-report data and the physiology.", "title": "" }, { "docid": "6b8329ef59c6811705688e48bf6c0c08", "text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.", "title": "" }, { "docid": "e8d2fc861fd1b930e65d40f6ce763672", "text": "Although burnout presents a serious burden for modern society, there are no diagnostic criteria for it. An additional difficulty is the differential diagnosis with depression. Consequently, there is a need for a burnout biomarker. Epigenetic studies suggest that DNA methylation is a possible mediator linking individual response to stress and psychopathology and could be considered as a potential biomarker of stress-related mental disorders. Thus, the aim of this review is to provide an overview of DNA methylation mechanisms in stress, burnout and depression. In addition to a state-of-the-art overview, the goal of this review is to provide a scientific base for burnout biomarker research. We performed a systematic literature search and identified 25 pertinent articles. Among these, 15 focused on depression, 7 on chronic stress and only 3 on work stress/burnout. Three epigenome-wide studies were identified and the majority of studies used the candidate-gene approach, assessing 12 different genes. The glucocorticoid receptor gene (NR3C1) displayed different methylation patterns in chronic stress and depression. 
The serotonin transporter gene (SLC6A4) methylation was similarly affected in stress, depression and burnout. Work-related stress and depressive symptoms were associated with different methylation patterns of the brain derived neurotrophic factor gene (BDNF) in the same human sample. The tyrosine hydroxylase (TH) methylation was correlated with work stress in a single study. Additional, thoroughly designed longitudinal studies are necessary for revealing the cause-effect relationship of work stress, epigenetics and burnout, including its overlap with depression.", "title": "" }, { "docid": "85cc307d55f4d1727e0194890051d34a", "text": "Exploiting linguistic knowledge to infer properties of neologisms C. Paul Cook Doctor of Philosophy Graduate Department of Computer Science University of Toronto 2010 Neologisms, or newly-coined words, pose problems for natural language processing (NLP) systems. Due to the recency of their coinage, neologisms are typically not listed in computational lexicons—dictionary-like resources that many NLP applications depend on. Therefore when a neologism is encountered in a text being processed, the performance of an NLP system will likely suffer due to the missing word-level information. Identifying and documenting the usage of neologisms is also a challenge in lexicography, the making of dictionaries. The traditional approach to these tasks has been to manually read a lot of text. However, due to the vast quantities of text being produced nowadays, particularly in electronic media such as blogs, it is no longer possible to manually analyze it all in search of neologisms. Methods for automatically identifying and inferring syntactic and semantic properties of neologisms would therefore address problems encountered in both natural language processing and lexicography. Because neologisms are typically infrequent due to their recent addition to the language, approaches to automatically learning word-level information relying on statistical distributional information are in many cases inappropriate. Moreover, neologisms occur in many domains and genres, and therefore approaches relying on domain-specific resources are also inappropriate. The hypothesis of this thesis is that knowledge about etymology—including word formation processes and types of semantic change—can be exploited for the acquisition of aspects of the syntax and semantics of neologisms. Evidence supporting this hypothesis is found", "title": "" }, { "docid": "0d3dd8c380f7e9e0f9b7a1b1380ac36e", "text": "This paper describes the design and testing process of low power current sensors using PCB rogowski-coil for high current application. The design and testing process of PCB rogowski-coil transducer and electronic circuit are explained in deeply analyze based on the physical structure. It also reveals the linearity and error rates of PCB rogowski-coil current sensors. Tests carried out using Current Generator with 5000A maximum output rate and the output of PCB rogowski-coil current sensor is 333mV following IEC-60044-8 standard. 
Finally, some measures are proposed for the performance improvement of PCB rogowski-coil current sensor to meet the requirements of protective relaying system in terms of structural design and testing standards.", "title": "" }, { "docid": "3fe09244c12dc7ce92bdd0fd96380cec", "text": "A novel switching dc-to-dc converter is presented, which has the same general conversion property (increase or decrease of the input dc voltage) as does the conventional buck-boost converter, and which offers through its new optimum topology higher efficiency, lower output voltage ripple, reduced EMI, smaller size and weight, and excellent dynamics response. One of its most significant advantages is that both input and output current are not pulsating but are continuous (essentially dc with small superimposed switching current ripple), this resulting in a close approximation to the ideal physically nonrealizable dc-to-dc transformer. The converter retains the simplest possible structure with the minimum number of components which, when interconnected in its optimum topology, yield the maximum performance. The new converter is extensively experimentally verified, and both the steady state (dc) and the dynamic (ac) theoretical model are correlated well with theexperimental data. both theoretical and experimental comparisons with the conventional buck-boost converter, to which an input filter has been added, demonstrate the significant advantages of the new optimum topology switching dc-to-dc converter.", "title": "" }, { "docid": "16cbc21b3092a5ba0c978f0cf38710ab", "text": "A major challenge to the problem of community question answering is the lexical and semantic gap between the sentence representations. Some solutions to minimize this gap includes the introduction of extra parameters to deep models or augmenting the external handcrafted features. In this paper, we propose a novel attentive recurrent tensor network for solving the lexical and semantic gap in community question answering. We introduce token-level and phrase-level attention strategy that maps input sequences to the output using trainable parameters. Further, we use the tensor parameters to introduce a 3-way interaction between question, answer and external features in vector space. We introduce simplified tensor matrices with L2 regularization that results in smooth optimization during training. The proposed model achieves state-of-the-art performance on the task of answer sentence selection (TrecQA and WikiQA datasets) while outperforming the current state-of-the-art on the tasks of best answer selection (Yahoo! L4) and answer triggering task (WikiQA).", "title": "" }, { "docid": "13aeddf30926dc72c26453d7004f0a5c", "text": "We would like to give robots the ability to secure human safety in human-robot collisions capable of arising in our living and working environments. However, unfortunately, not much attention has been paid to the technologies of human robot symbiosis to date because almost all robots have been designed and constructed on the assumption that the robots are physically separated from humans. A robot with a new concept will be required to deal with human-robot contact. In this article, we propose a passively movable human-friendly robot that consists of an elastic material-covered manipulator, passive compliant trunk, and passively movable base. The compliant trunk is equipped with springs and dampers, and the passively movable base is constrained by friction developed between the contact surface of the base and the ground. 
During unexpected collisions, the trunk and base passively move in response to the produced collision force. We describe the validity of the movable base and compliant trunk for collision force suppression, and it is demonstrated in several collision situations. KEY WORDS—passive viscoelastic trunk, passive base, collision force suppression, compliance ellipsoid, redundancy", "title": "" }, { "docid": "23754e7c18cde633aeafface87c4a2c9", "text": "Text classification is an important task in many text mining applications. Text data generated from the reviews have been growing tremendously. People are participating largely in internet to give their opinion about various subjects and topics. A branch of text mining that deals with people’s views about a subject is opinion mining, in which the data in the form of reviews is mined in order to analyze their sentiment. This study of people’s opinion is sentiment analysis and is a popular research area in text mining. In this paper, movie reviews are classified for sentiment analysis in weka. There are 2000 movie reviews in a dataset obtained from Cornell university dataset repository. The dataset is preprocessed and various filters have been applied to reduce the feature set. Feature selection methods are widely used for gathering most valuable words for each category in text mining processes. They help to find most distinctive words for each category by calculating some variables on data. The mostly employed methods are Chi-Square, Information Gain, and Gain Ratio. In this study, information gain method was employed because of its simplicity, less computational costs and its efficiency. The effects of reduced feature set have been proved to improve the performance of the classifier. Two popular classifiers namely naïve bayes and svm have been experimented with the movie review dataset. The results show that naïve bayes performs better than svm for classification of movie reviews.", "title": "" } ]
scidocsrr
874e3ee8d4ff284ba979d73d2351c23a
Wireless Sensor Networks: a Survey on Environmental Monitoring
[ { "docid": "ec06587bff3d5c768ab9083bd480a875", "text": "Wireless sensor networks are an emerging technology for low-cost, unattended monitoring of a wide range of environments, and their importance has been enforced by the recent delivery of the IEEE 802.15.4 standard for the physical and MAC layers and the forthcoming Zigbee standard for the network and application layers. The fast progress of research on energy efficiency, networking, data management and security in wireless sensor networks, and the need to compare with the solutions adopted in the standards motivates the need for a survey on this field.", "title": "" } ]
[ { "docid": "b0eb2048209c7ceeb3c67c2b24693745", "text": "Modeling an ontology is a hard and time-consuming task. Although methodologies are useful for ontologists to create good ontologies, they do not help with the task of evaluating the quality of the ontology to be reused. For these reasons, it is imperative to evaluate the quality of the ontology after constructing it or before reusing it. Few studies usually present only a set of criteria and questions, but no guidelines to evaluate the ontology. The effort to evaluate an ontology is very high as there is a huge dependence on the evaluator’s expertise to understand the criteria and questions in depth. Moreover, the evaluation is still very subjective. This study presents a novel methodology for ontology evaluation, taking into account three fundamental principles: i) it is based on the Goal, Question, Metric approach for empirical evaluation; ii) the goals of the methodologies are based on the roles of knowledge representations combined with specific evaluation criteria; iii) each ontology is evaluated according to the type of ontology. The methodology was empirically evaluated using different ontologists and ontologies of the same domain. The main contributions of this study are: i) defining a step-by-step approach to evaluate the quality of an ontology; ii) proposing an evaluation based on the roles of knowledge representations; iii) the explicit difference of the evaluation according to the type of the ontology iii) a questionnaire to evaluate the ontologies; iv) a statistical model that automatically calculates the quality of the ontologies.", "title": "" }, { "docid": "9404d1fd58dbd1d83c2d503e54ffd040", "text": "This work examines the association between the Big Five personality dimensions, the most relevant demographic factors (sex, age and relationship status), and subjective well-being. A total of 236 nursing professionals completed the NEO Five Factor Inventory (NEO-FFI) and the Affect-Balance Scale (ABS). Regression analysis showed personality as one of the most important correlates of subjective well-being, especially through Extraversion and Neuroticism. There was a positive association between Openness to experience and the positive and negative components of affect. Likewise, the most basic demographic variables (sex, age and relationship status) are found to be differentially associated with the different elements of subjective well-being, and the explanation for these associations is highly likely to be found in the links between demographic variables and personality. In the same way as control of the effect of demographic variables is necessary for isolating the effect of personality on subjective well-being, control of personality should permit more accurate analysis of the role of demographic variables in relation to the subjective well-being construct. 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "683ca94061450b83292ffca3fffc66d7", "text": "A sensorless internal temperature monitoring method for induction motors is proposed in this paper. This method can be embedded in standard motor drives, and is based on the stator windings resistance variation with temperature. A small AC signal is injected to the motor, superimposed to the power supply current, in order to measure the stator resistance online. The proposed method has the advantage of requiring a very low-level monitoring signal, hence the motor torque perturbations and additional power losses are negligible. 
Furthermore, temperature estimations do not depend on the knowledge of any other motor parameter, since the method is not based on a model. This makes the proposed method more robust than model-based methods. Experimental results that validate the proposal are also presented.", "title": "" }, { "docid": "5495aeaa072a1f8f696298ebc7432045", "text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are a recently proposed, optimized variant of DNNs. BNNs constrain network weights and/or neuron values to either +1 or −1, which is representable in 1 bit. This leads to a dramatic improvement in algorithm efficiency, due to the reduction in memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first propose a BNN hardware accelerator design. Then, we implement the proposed accelerator on an Aria 10 FPGA as well as a 14-nm ASIC, and compare them against optimized software on a Xeon server CPU, an Nvidia Titan X server GPU, and an Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders-of-magnitude efficiency improvements over software, without having to lock into a fixed ASIC solution.", "title": "" }, { "docid": "460aa0df99a3e88a752d5f657f1565de", "text": "Recent case studies have suggested that emotion perception and emotional experience of music have independent cognitive processing. We report a patient who showed selective impairment of emotional experience only in listening to music, that is, musical anhedonia. A 71-year-old right-handed man developed an infarction in the right parietal lobe. He found himself unable to experience emotion in listening to music, even music to which he had listened with pleasure before the illness. In neuropsychological assessments, his intellectual, memory, and constructional abilities were normal. Speech audiometry and recognition of environmental sounds were within normal limits. Neuromusicological assessments revealed no abnormality in the perception of elementary components of music, or in the expression and emotion perception of music. Brain MRI identified the infarct lesion in the right inferior parietal lobule. These findings suggest that emotional experience of music could be selectively impaired without any disturbance of other musical or neuropsychological abilities. The right parietal lobe might participate in emotional experience in listening to music.", "title": "" }, { "docid": "4d136b60209ef625c09a15e3e5abb7f7", "text": "Alterations in the bidirectional interactions between the intestine and the nervous system have important roles in the pathogenesis of irritable bowel syndrome (IBS). A body of largely preclinical evidence suggests that the gut microbiota can modulate these interactions. A small and poorly defined role for dysbiosis in the development of IBS symptoms has been established through characterization of altered intestinal microbiota in IBS patients and reported improvement of subjective symptoms after its manipulation with prebiotics, probiotics, or antibiotics. 
It remains to be determined whether IBS symptoms are caused by alterations in brain signaling from the intestine to the microbiota or primary disruption of the microbiota, and whether they are involved in altered interactions between the brain and intestine during development. We review the potential mechanisms involved in the pathogenesis of IBS in different groups of patients. Studies are needed to better characterize alterations to the intestinal microbiome in large cohorts of well-phenotyped patients, and to correlate intestinal metabolites with specific abnormalities in gut-brain interactions.", "title": "" }, { "docid": "aec82326c1fea34da9935731e4c476f4", "text": "This paper presents a trajectory tracking control design which provides the essential spatial-temporal feedback control capability for fixed-wing unmanned aerial vehicles (UAVs) to execute a time critical mission reliably. In this design, a kinematic trajectory tracking control law and a control gain selection method are developed to allow the control law to be implemented on a fixed-wing UAV based on the platform's dynamic capability. The tracking control design assumes the command references of the heading and airspeed control systems are the accessible control inputs, and it does not impose restrictive model assumptions on the UAV's control systems. The control design is validated using a high-fidelity nonlinear six degrees of freedom (6DOF) model and the reported results suggest that the proposed tracking control design is able to track time-parameterized trajectories stably with robust control performance.", "title": "" }, { "docid": "4b432e49485b57ddb1921478f2917d4b", "text": "Dynamic perturbations of reaching movements are an important technique for studying motor learning and adaptation. Adaptation to non-contacting, velocity-dependent inertial Coriolis forces generated by arm movements during passive body rotation is very rapid, and when complete the Coriolis forces are no longer sensed. Adaptation to velocity-dependent forces delivered by a robotic manipulandum takes longer and the perturbations continue to be perceived even when adaptation is complete. These differences reflect adaptive self-calibration of motor control versus learning the behavior of an external object or 'tool'. Velocity-dependent inertial Coriolis forces also arise in everyday behavior during voluntary turn and reach movements but because of anticipatory feedforward motor compensations do not affect movement accuracy despite being larger than the velocity-dependent forces typically used in experimental studies. Progress has been made in understanding: the common features that determine adaptive responses to velocity-dependent perturbations of jaw and limb movements; the transfer of adaptation to mechanical perturbations across different contact sites on a limb; and the parcellation and separate representation of the static and dynamic components of multiforce perturbations.", "title": "" }, { "docid": "a48915859a7d772ee8515cb106c79ec1", "text": "Mathematical modelling is increasingly used to get insights into the functioning of complex biological networks. In this context, Petri nets (PNs) have recently emerged as a promising tool among the various methods employed for the modelling and analysis of molecular networks. PNs come with a series of extensions, which allow different abstraction levels, from purely qualitative to more complex quantitative models. 
Noteworthily, each of these models preserves the underlying graph, which depicts the interactions between the biological components. This article intends to present the basics of the approach and to foster the potential role PNs could play in the development of the computational systems biology.", "title": "" }, { "docid": "1925162dafab9fb0522f625782b7e7a3", "text": "Breast cancer is the most frequently diagnosed malignancy and the second leading cause of mortality in women . In the last decade, ultrasound along with digital mammography has come to be regarded as the gold standard for breast cancer diagnosis. Automatically detecting tumors and extracting lesion boundaries in ultrasound images is difficult due to their specular nature and the variance in shape and appearance of sonographic lesions. Past work on automated ultrasonic breast lesion segmentation has not addressed important issues such as shadowing artifacts or dealing with similar tumor like structures in the sonogram. Algorithms that claim to automatically classify ultrasonic breast lesions, rely on manual delineation of the tumor boundaries. In this paper, we present a novel technique to automatically find lesion margins in ultrasound images, by combining intensity and texture with empirical domain specific knowledge along with directional gradient and a deformable shape-based model. The images are first filtered to remove speckle noise and then contrast enhanced to emphasize the tumor regions. For the first time, a mathematical formulation of the empirical rules used by radiologists in detecting ultrasonic breast lesions, popularly known as the \"Stavros Criteria\" is presented in this paper. We have applied this formulation to automatically determine a seed point within the image. Probabilistic classification of image pixels based on intensity and texture is followed by region growing using the automatically determined seed point to obtain an initial segmentation of the lesion. Boundary points are found on the directional gradient of the image. Outliers are removed by a process of recursive refinement. These boundary points are then supplied as an initial estimate to a deformable model. Incorporating empirical domain specific knowledge along with low and high-level knowledge makes it possible to avoid shadowing artifacts and lowers the chance of confusing similar tumor like structures for the lesion. The system was validated on a database of breast sonograms for 42 patients. The average mean boundary error between manual and automated segmentation was 6.6 pixels and the normalized true positive area overlap was 75.1%. The algorithm was found to be robust to 1) variations in system parameters, 2) number of training samples used, and 3) the position of the seed point within the tumor. Running time for segmenting a single sonogram was 18 s on a 1.8-GHz Pentium machine.", "title": "" }, { "docid": "09623c821f05ffb7840702a5869be284", "text": "Area-restricted search (ARS) is a foraging strategy used by many animals to locate resources. The behavior is characterized by a time-dependent reduction in turning frequency after the last resource encounter. This maximizes the time spent in areas in which resources are abundant and extends the search to a larger area when resources become scarce. We demonstrate that dopaminergic and glutamatergic signaling contribute to the neural circuit controlling ARS in the nematode Caenorhabditis elegans. 
Ablation of dopaminergic neurons eliminated ARS behavior, as did application of the dopamine receptor antagonist raclopride. Furthermore, ARS was affected by mutations in the glutamate receptor subunits GLR-1 and GLR-2 and the EAT-4 glutamate vesicular transporter. Interestingly, preincubation on dopamine restored the behavior in worms with defective dopaminergic signaling, but not in glr-1, glr-2, or eat-4 mutants. This suggests that dopaminergic and glutamatergic signaling function in the same pathway to regulate turn frequency. Both GLR-1 and GLR-2 are expressed in the locomotory control circuit that modulates the direction of locomotion in response to sensory stimuli and the duration of forward movement during foraging. We propose a mechanism for ARS in C. elegans in which dopamine, released in response to food, modulates glutamatergic signaling in the locomotory control circuit, thus resulting in an increased turn frequency.", "title": "" }, { "docid": "cd1c983fcf0b6225ede1504db701962a", "text": "The method introduced in this paper aims at helping deep learning practitioners faced with an overfit problem. The idea is to replace, in a multi-branch network, the standard summation of parallel branches with a stochastic affine combination. Applied to 3-branch residual networks, shake-shake regularization improves on the best single shot published results on CIFAR-10 and CIFAR100 by reaching test errors of 2.86% and 15.85%. Experiments on architectures without skip connections or Batch Normalization show encouraging results and open the door to a large set of applications. Code is available at https://github.com/xgastaldi/shake-shake.", "title": "" }, { "docid": "2b00f2b02fa07cdd270f9f7a308c52c5", "text": "A noninvasive and easy-operation measurement of the heart rate has great potential in home healthcare. We present a simple and high running efficiency method for measuring heart rate from a video. By only tracking one feature point which is selected from a small ROI (Region of Interest) in the head area, we extract trajectories of this point in both X-axis and Y-axis. After a series of processes including signal filtering, interpolation, the Independent Component Analysis (ICA) is used to obtain a periodic signal, and then the heart rate can be calculated. We evaluated on 10 subjects and compared to a commercial heart rate measuring instrument (YUYUE YE680B) and achieved high degree of agreement. A running time comparison experiment to the previous proposed motion-based method is carried out and the result shows that the time cost is greatly reduced in our method.", "title": "" }, { "docid": "c68668c82d2512cdea187ad7f94d2939", "text": "Traditional personalized video recommendation methods focus on utilizing user profile or user history behaviors to model user interests, which follows a static strategy and fails to capture the swift shift of the short-term interests of users. According to our cross-platform data analysis, the information emergence and propagation is faster in social textual stream-based platforms than that in multimedia sharing platforms at micro user level. Inspired by this, we propose a dynamic user modeling strategy to tackle personalized video recommendation issues in the multimedia sharing platform YouTube, by transferring knowledge from the social textual stream-based platform Twitter. In particular, the cross-platform video recommendation strategy is divided into two steps. 
(1) Real-time hot topic detection: the hot topics that users are currently following are extracted from users' tweets, which are utilized to obtain the related videos in YouTube. (2) Time-aware video recommendation: for the target user in YouTube, the obtained videos are ranked by considering the user profile in YouTube, time factor, and quality factor to generate the final recommendation list. In this way, the short-term (hot topics) and long-term (user profile) interests of users are jointly considered. Carefully designed experiments have demonstrated the advantages of the proposed method.", "title": "" }, { "docid": "98e557f291de3b305a91e47f59a9ed34", "text": "We propose SfM-Net, a geometry-aware neural network for motion estimation in videos that decomposes frameto-frame pixel motion in terms of scene and object depth, camera motion and 3D object rotations and translations. Given a sequence of frames, SfM-Net predicts depth, segmentation, camera and rigid object motions, converts those into a dense frame-to-frame motion field (optical flow), differentiably warps frames in time to match pixels and back-propagates. The model can be trained with various degrees of supervision: 1) self-supervised by the reprojection photometric error (completely unsupervised), 2) supervised by ego-motion (camera motion), or 3) supervised by depth (e.g., as provided by RGBD sensors). SfMNet extracts meaningful depth estimates and successfully estimates frame-to-frame camera rotations and translations. It often successfully segments the moving objects in the scene, even though such supervision is never provided.", "title": "" }, { "docid": "9d55947637b358c4dc30d7ba49885472", "text": "Deep neural networks have been successfully applied to many text matching tasks, such as paraphrase identification, question answering, and machine translation. Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it. In this paper, we study a state-of-the-art deep matching model, namely MatchPyramid, on the ad-hoc retrieval task. The MatchPyramid model employs a convolutional neural network over the interactions between query and document to produce the matching score. We conducted extensive experiments to study the impact of different pooling sizes, interaction functions and kernel sizes on the retrieval performance. Finally, we show that the MatchPyramid models can significantly outperform several recently introduced deep matching models on the retrieval task, but still cannot compete with the traditional retrieval models, such as BM25 and language models. CCS Concepts •Information systems→ Retrieval models and ranking;", "title": "" }, { "docid": "fa0674b3e79c1573af621276caef9709", "text": "BACKGROUND\nDuring treatment of upper auricular malformations, the author found that patients with cryptotia and patients with solitary helical and/or antihelical adhesion malformations showed the same anatomical finding of cartilage adhesion. The author defined them together as upper auricular adhesion malformations.\n\n\nMETHODS\nBetween March of 1992 and March of 2006, 194 upper auricular adhesion malformations were corrected in 137 patients. All of these cases were retrospectively studied and classified. 
Of these, 92 malformations in 68 recent patients were corrected with new surgical methods (these were followed up for more than 6 months).\n\n\nRESULTS\nThe group of solitary helical and/or antihelical cartilage malformation patients was classified as group I and the cryptotia group as group II. These two groups were subdivided according to features of cartilage adhesion and classified into seven subgroups. Thirty-two malformations were classified as belonging to group I and 162 malformations to group II. There were 61 patients with bilateral upper auricular adhesion malformations. Nineteen patients (31 percent of the patients with bilateral malformations) showed malformations belonging to both groups I and II on both ears. On postoperative observation in patients corrected with new methods, it was noticed that the following unfavorable results had occurred in 18 upper auricular adhesion malformation cases (20 percent): venous congestion or partial skin necrosis of used flaps, \"pinched antitragus,\" low-set upper auricle, hypertrophic scars, and baldness.\n\n\nCONCLUSIONS\nThe new consideration for, and the singling out of, upper auricular adhesion malformation can lead to better understanding of the groups of upper auricular malformations to which it belongs, the decision for treatment, and, possibly, clarification of the pathophysiology in the future.", "title": "" }, { "docid": "6be148b33b338193ffbde2683ddc8991", "text": "Predicting stock exchange rates is receiving increasing attention and is a vital financial problem as it contributes to the development of effective strategies for stock exchange transactions. The forecasting of stock price movement in general is considered to be a thought-provoking and essential task for financial time series' exploration. In this paper, a Least Absolute Shrinkage and Selection Operator (LASSO) method based on a linear regression model is proposed as a novel method to predict financial market behavior. LASSO method is able to produce sparse solutions and performs very well when the numbers of features are less as compared to the number of observations. Experiments were performed with Goldman Sachs Group Inc. stock to determine the efficiency of the model. The results indicate that the proposed model outperforms the ridge linear regression model.", "title": "" }, { "docid": "87e8b5b75b5e83ebc52579e8bbae04f0", "text": "A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described. A CMOS circuit using 10,880 NMOS differential pairs has been developed using this approach.", "title": "" }, { "docid": "c3b652b561e38a51f1fa40483532e22d", "text": "Vertical integration refers to one of the options that firms make decisions in the supply of oligopoly market. It was impacted by competition game between upstream firms and downstream firms. Based on the game theory and other previous studies,this paper built a dynamic game model of two-stage competition between the oligopoly suppliers of upstream and the vertical integration firms of downstream manufacturers. In the first stage, it analyzed the influences on integration degree by prices of intermediate goods when an oligopoly firm engages in a Bertrand-game if outputs are not limited. Moreover, it analyzed the influences on integration degree by price-diverge of intermediate goods if outputs were not restricted within a Bertrand Duopoly game equilibrium. 
In the second stage, there is a Cournot duopoly game between downstream specialization firms and downstream integration firms. Their marginal costs are affected by the degree of integration, and their outputs are affected as well under otherwise undifferentiated manufacturing conditions. Finally, the prices of intermediate goods are determined by competition among the upstream firms, and these prices in turn affect the degree of integration between upstream and downstream firms. The conclusions can serve as a reference for integration decision-making under market competition.", "title": "" } ]
scidocsrr
97e28708c02f967a8bdd2747a288984b
Story Generators: Models and Approaches for the Generation of Literary Artefacts
[ { "docid": "4dbbcaf264cc9beda8644fa926932d2e", "text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.", "title": "" } ]
[ { "docid": "24c49ac0ed56f27982cfdad18054e466", "text": "This paper examines two alternative approaches to supporting code scheduling for multiple-instruction-issue processors. One is to provide a set of non-trapping instructions so that the compiler can perform aggressive static code scheduling. The application of this approach to existing commercial architectures typically requires extending the instruction set. The other approach is to support out-of-order execution in the microarchitecture so that the hardware can perform aggressive dynamic code scheduling. This approach usually does not require modifying the instruction set but requires complex hardware support. In this paper, we analyze the performance of the two alternative approaches using a set of important nonnumerical C benchmark programs. A distinguishing feature of the experiment is that the code for the dynamic approach has been optimized and scheduled as much as allowed by the architecture. The hardware is only responsible for the additional reordering that cannot be performed by the compiler. The overall result is that the clynamic and static approaches are comparable in performance. When applied to a four-instruction-issue processor, both methods achieve more than two times speedup over a high performance single-instruction-issue processor. However, the performance of each scheme varies among the benchmark programs. To explain this variation, we have identified the conditions in these programs that make one approach perform better than the other.", "title": "" }, { "docid": "f330cfad6e7815b1b0670217cd09b12e", "text": "In this paper we study the effect of false data injection attacks on state estimation carried over a sensor network monitoring a discrete-time linear time-invariant Gaussian system. The steady state Kalman filter is used to perform state estimation while a failure detector is employed to detect anomalies in the system. An attacker wishes to compromise the integrity of the state estimator by hijacking a subset of sensors and sending altered readings. In order to inject fake sensor measurements without being detected the attacker will need to carefully design his actions to fool the estimator as abnormal sensor measurements would result in an alarm. It is important for a designer to determine the set of all the estimation biases that an attacker can inject into the system without being detected, providing a quantitative measure of the resilience of the system to such attacks. To this end, we will provide an ellipsoidal algorithm to compute its inner and outer approximations of such set. A numerical example is presented to further illustrate the effect of false data injection attack on state estimation.", "title": "" }, { "docid": "f2b552e97cd929d5780fae80223ae179", "text": "Blockchains are distributed data structures that are used to achieve consensus in systems for cryptocurrencies (like Bitcoin) or smart contracts (like Ethereum). Although blockchains gained a lot of popularity recently, there are only few logic-based models for blockchains available. We introduce BCL, a dynamic logic to reason about blockchain updates, and show that BCL is sound and complete with respect to a simple blockchain model.", "title": "" }, { "docid": "8a9603a10e5e02f6edfbd965ee11bbb9", "text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. 
This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.", "title": "" }, { "docid": "5f4a9eb440f896eea3a4205b5be95593", "text": "In this Series paper, we review evidence for interventions to reduce the prevalence and incidence of violence against women and girls. Our reviewed studies cover a broad range of intervention models, and many forms of violence--ie, intimate partner violence, non-partner sexual assault, female genital mutilation, and child marriage. Evidence is highly skewed towards that from studies from high-income countries, with these evaluations mainly focusing on responses to violence. This evidence suggests that women-centred, advocacy, and home-visitation programmes can reduce a woman's risk of further victimisation, with less conclusive evidence for the preventive effect of programmes for perpetrators. In low-income and middle-income countries, there is a greater research focus on violence prevention, with promising evidence on the effect of group training for women and men, community mobilisation interventions, and combined livelihood and training interventions for women. Despite shortcomings in the evidence base, several studies show large effects in programmatic timeframes. Across different forms of violence, effective programmes are commonly participatory, engage multiple stakeholders, support critical discussion about gender relationships and the acceptability of violence, and support greater communication and shared decision making among family members, as well as non-violent behaviour. Further investment in intervention design and assessment is needed to address evidence gaps.", "title": "" }, { "docid": "82e3ea7c86952d3fce88cdcea39a9bdf", "text": "Many efforts have been paid to enhance the security of Android. However, less attention has been given to how to practically adopt the enhancements on off-the-shelf devices. In particular, securing Android devices often requires modifying their write-protected underlying system component files (especially the system libraries) by flashing or rooting devices, which is unacceptable in many realistic cases. In this paper, a novel technique, called reference hijacking, is presented to address the problem. By introducing a specially designed reset procedure, a new execution environment is constructed for the target application, in which the reference to the underlying system libraries will be redirected to the security-enhanced alternatives. The technique can be applicable to both the Dalvik and Android Runtime (ART) environments and to almost all mainstream Android versions (2.x to 5.x). 
To demonstrate the capability of reference hijacking, we develop three prototype systems, PatchMan, ControlMan, and TaintMan, to enforce specific security enhancements, involving patching vulnerabilities, protecting inter-component communications, and performing dynamic taint analysis for the target application. These three prototypes have been successfully deployed on a number of popular Android devices from different manufacturers, without modifying the underlying system. The evaluation results show that they are effective and do not introduce noticeable overhead. They strongly support that reference hijacking can substantially improve the practicability of many security enhancement efforts for Android.", "title": "" }, { "docid": "fe768628129dd1e7256c57f81c638cdc", "text": "With the wide deployment of face recognition systems in applications from de-duplication to mobile device unlocking, security against face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays and 3D masks of a face. We address the problem of facial spoof detection against print (photo) and replay (photo or video) attacks based on the analysis of image aliasing (e.g., surface reflection, moiré pattern, color distortion, and shape deformation) in spoof face images (or video frames). The application domain of interest is mobile phone unlock, given that a growing number of phones have face unlock and mobile payment capabilities. We build a mobile spoof face database (MSU MSF) containing more than 1,000 subjects, which is, to our knowledge, the largest spoof face database in terms of the number of subjects. Both print and replay attacks are captured using the front and rear cameras of a Nexus 5 phone. We analyze the aliasing of print and replay attacks using (i) different intensity channels (R, G, B and grayscale), (ii) different image regions (entire image, detected face, and facial component between the nose and chin), and (iii) different feature descriptors. We develop an efficient face spoof detection system on an Android smartphone. Experimental results on three public-domain face spoof databases (Idiap Print-Attack and Replay-Attack, and CASIA), and the MSU MSF show that the proposed approach is effective in face spoof detection for both cross-database and intra-database testing scenarios. User studies of our Android face spoof detection system involving 20 participants show that the proposed approach works very well in real application scenarios.", "title": "" }, { "docid": "36e5888b8da8ab2fe6a66202230a07b0", "text": "In this paper, we propose some new tools to allow machine learning classifiers to cope with time series data. We first argue that many time-series classification problems can be solved by detecting and combining local properties or patterns in time series. Then, a technique is proposed to find patterns which are useful for classification. These patterns are combined to build interpretable classification rules. Experiments, carried out on several artificial and real problems, highlight the interest of the approach both in terms of interpretability and accuracy of the induced classifiers.", "title": "" }, { "docid": "d86eb92d0d9b35b68f42b03c6587cfe3", "text": "Introduction The badminton smash is an essential component of a player’s repertoire and a significant stroke in gaining success as it is the most common winning shot, accounting for 53.9% of winning shots (Tsai and Chang, 1998; Tong and Hong, 2000; Rambely et al., 2005). 
The speed of the shuttlecock exceeds that of any other racket sport projectile with a maximum shuttle speed of 493 km/h (306 mph) reported in 2013 by Tan Boon Heong. If a player is able to cause the shuttle to travel at a higher velocity and give the opponent less reaction time to the shot, it would be expected that the smash would be a more effective weapon (Kollath, 1996; Sakurai and Ohtsuki, 2000).", "title": "" }, { "docid": "451110458791809898c854991a073119", "text": "This paper considers the problem of face detection in first attempt using haar cascade classifier from images containing simple and complex backgrounds. It is one of the best detector in terms of reliability and speed. Experiments were carried out on standard database i.e. Indian face database (IFD) and Caltech database. All images are frontal face images because side face views are harder to detect with this technique. Opencv 2.4.2 is used to implement the haar cascade classifier. We achieved 100% face detection rate on Indian database containing simple background and 93.24% detection rate on Caltech database containing complex background. Haar cascade classifier provides high accuracy even the images are highly affected by the illumination. The haar cascade classifier has shown superior performance with simple background images.", "title": "" }, { "docid": "3f6b32bdad3a7ef0302db37f1c44569a", "text": "In this paper we propose and analyze a novel method for automatic stock trading which combines technical analysis and the nearest neighbor classification. Our first and foremost objective is to study the feasibility of the practical use of an intelligent prediction system exclusively based on the history of daily stock closing prices and volumes. To this end we propose a technique that consists of a combination of a nearest neighbor classifier and some well known tools of technical analysis, namely, stop loss, stop gain and RSI filter. For assessing the potential use of the proposed method in practice we compared the results obtained to the results that would be obtained by adopting a buy-and-hold strategy. The key performance measure in this comparison was profitability. The proposed method was shown to generate considerable higher profits than buy-and-hold for most of the companies, with few buy operations generated and, consequently, minimizing the risk of market exposure. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e1978b2ba0457f4073f20ce8c064aa82", "text": "BACKGROUND\nHuman seminal fluid contains small exosome-like vesicles called prostasomes. Prostasomes have been reported previously to play an important role in the process of fertilization by boosting survivability and motility of spermatozoa, in addition to modulating acrosomal reactivity. Prostasomes have also been reported to present with sizes varying from 50 to 500 nm and to have multilayered lipid membranes; however, the fine morphology of prostasomes has never been studied in detail.\n\n\nMETHODS\nSucrose gradient-purified prostasomes were visualized by cryo-electron microscopy (EM). Protein composition was studied by trypsin in-gel digestion and liquid chromatography/mass spectrometry.\n\n\nRESULTS\nHere we report for the first time the detailed structure of seminal prostasomes by cryo-EM. There are at least three distinct dominant structural types of vesicles present. In parallel with the structural analysis, we have carried out a detailed proteomic analysis of prostasomes, which led to the identification of 440 proteins. 
This is nearly triple the number of proteins identified to date for these unique particles and a number of the proteins identified previously were cross-validated in our study.\n\n\nCONCLUSION\nFrom the data reported herein, we hypothesize that the structural heterogeneity of the exosome-like particles in human semen reflects their functional diversity. Our detailed proteomic analysis provided a list of candidate proteins for future structural and functional studies.", "title": "" }, { "docid": "33c06f0ee7d3beb0273a47790f2a84cd", "text": "This study presents the clinical results of a surgical technique that expands a narrow ridge when its orofacial width precludes the placement of dental implants. In 170 people, 329 implants were placed in sites needing ridge enlargement using the edentulous ridge expansion procedure. This technique involves a partial-thickness flap, crestal and vertical intraosseous incisions into the ridge, and buccal displacement of the buccal cortical plate, including a portion of the underlying spongiosa. Implants were placed in the expanded ridge and allowed to heal for 4 to 5 months. When indicated, the implants were exposed during a second-stage surgery to allow visualization of the implant site. Occlusal loading was applied during the following 3 to 5 months by provisional prostheses. The final phase was the placement of the permanent prostheses. The results yielded a success rate of 98.8%.", "title": "" }, { "docid": "e94c6f4f6336fd244f99071b97388b99", "text": "While CubeSats have thus far been used exclusively in Low Earth Orbit (LEO), NASA is now investigating the possibility to deploy CubeSats beyond LEO to carry out scientific experiments in Deep Space. Such CubeSats require a high-gain antenna that fits in a constrained and limited volume. This paper introduces a 42.8 dBi gain deployable Ka-band antenna folding in a 1.5U stowage volume suitable for 3U and 6U class CubeSats.", "title": "" }, { "docid": "e0b1056544c3dc5c3b6f5bc072a72831", "text": "In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available.", "title": "" }, { "docid": "a25839666b7e208810979dc93d20f950", "text": "Energy consumption management has become an essential concept in cloud computing. In this paper, we propose a new power aware load balancing, named Bee-MMT (artificial bee colony algorithm-Minimal migration time), to decline power consumption in cloud computing; as a result of this decline, CO2 production and operational cost will be decreased. 
According to this purpose, an algorithm based on artificial bee colony algorithm (ABC) has been proposed to detect over utilized hosts and then migrate one or more VMs from them to reduce their utilization; following that we detect underutilized hosts and, if it is possible, migrate all VMs which have been allocated to these hosts and then switch them to the sleep mode. However, there is a trade-off between energy consumption and providing high quality of service to the customers. Consequently, we consider SLA Violation as a metric to qualify the QOS that require to satisfy the customers. The results show that the proposed method can achieve greater power consumption saving than other methods like LR-MMT (local regression-Minimal migration time), DVFS (Dynamic Voltage Frequency Scaling), IQR-MMT (Interquartile Range-MMT), MAD-MMT (Median Absolute Deviation) and non-power aware.", "title": "" }, { "docid": "7af9eaf2c3bcac72049a9d4d1e6b3498", "text": "This paper proposes a fast algorithm for integrating connected-component labeling and Euler number computation. Based on graph theory, the Euler number of a binary image in the proposed algorithm is calculated by counting the occurrences of four patterns of the mask for processing foreground pixels in the first scan of a connected-component labeling process, where these four patterns can be found directly without any additional calculation; thus, connected-component labeling and Euler number computation can be integrated more efficiently. Moreover, when computing the Euler number, unlike other conventional algorithms, the proposed algorithm does not need to process background pixels. Experimental results demonstrate that the proposed algorithm is much more efficient than conventional algorithms either for calculating the Euler number alone or simultaneously calculating the Euler number and labeling connected components.", "title": "" }, { "docid": "036526b572707282a50bc218b72e5862", "text": "Linear classification is a useful tool in machine learning and data mining. For some data in a rich dimensional space, the performance (i.e., testing accuracy) of linear classifiers has shown to be close to that of nonlinear classifiers such as kernel methods, but training and testing speed is much faster. Recently, many research works have developed efficient optimization methods to construct linear classifiers and applied them to some large-scale applications. In this paper, we give a comprehensive survey on the recent development of this active research area.", "title": "" }, { "docid": "f1b6ec7abb626b8bd367977348c4421c", "text": "........................................................................................................................ ii Acknowledgement........................................................................................................iii List of acronyms...........................................................................................................xi", "title": "" } ]
scidocsrr
4934c3477bac065a03bb4d5c1be29e0f
Beyond Training and Awareness: From Security Culture to Security Risk Management
[ { "docid": "e59379bc46c4fcf85027a1624425949b", "text": "Information Security Culture includes all socio-cultural measures that support technical security methods, so that information security becomes a natural aspect in the daily activity of every employee. To apply these socio-cultural measures in an effective and efficient way, certain management models and tools are needed. In our research we developed a framework analyzing the security culture of an organization which we then applied in a pre-evaluation survey. This paper is based on the results of this survey. We will develop a management model for creating, changing and maintaining Information Security Culture. This model will then be used to define explicit sociocultural measures, based on the concept of internal marketing.", "title": "" }, { "docid": "60de343325a305b08dfa46336f2617b5", "text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.", "title": "" } ]
[ { "docid": "7b737b18ecf21b9da10475ee407a428b", "text": "This paper proposes a flexible and wearable hand exoskeleton which can be used as a computer mouse. The hand exoskeleton is developed based on a new concept of wearable mouse. The wearable mouse, which consists of flexible bend sensor, accelerometer and bluetooth, is designed for comfortable and supple usage. To demonstrate the effectiveness of the proposed wearable mouse, experiments are carried out for mouse operation consisting of click, cursor movement and wireless communication. The experimental results show that our wearable mouse is more accurate than a standard mouse.", "title": "" }, { "docid": "2923d1776422a1f44395f169f0d61995", "text": "Rolling upgrade consists of upgrading progressively the servers of a distributed system to reduce service downtime.Upgrading a subset of servers requires a well-engineered cluster membership protocol to maintain, in the meantime, the availability of the system state. Existing cluster membership reconfigurations, like CoreOS etcd, rely on a primary not only for reconfiguration but also for storing information. At any moment, there can be at most one primary, whose replacement induces disruption. We propose Rollup, a non-disruptive rolling upgrade protocol with a fast consensus-based reconfiguration. Rollup relies on a candidate leader only for the reconfiguration and scalable biquorums for service requests. While Rollup implements a non-disruptive cluster membership protocol, it does not offer a full-fledged coordination service. We analyzed Rollup theoretically and experimentally on an isolated network of 26 physical machines and an Amazon EC2 cluster of 59 virtual machines. Our results show an 8-fold speedup compared to a rolling upgrade based on a primary for reconfiguration.", "title": "" }, { "docid": "dfae6cf3df890c8cfba756384c4e88e6", "text": "In this paper, we propose a second order optimization method to learn models where both the dimensionality of the parameter space and the number of training samples is high. In our method, we construct on each iteratio n a Krylov subspace formed by the gradient and an approximation to the Hess ian matrix, and then use a subset of the training data samples to optimize ove r this subspace. As with the Hessian Free (HF) method of [6], the Hessian matrix i s never explicitly constructed, and is computed using a subset of data. In p ractice, as in HF, we typically use a positive definite substitute for the Hessi an matrix such as the Gauss-Newton matrix. We investigate the effectiveness of o ur proposed method on learning the parameters of deep neural networks, and comp are its performance to widely used methods such as stochastic gradient descent, conjugate gradient descent and L-BFGS, and also to HF. Our method leads to faster convergence than either L-BFGS or HF, and generally performs better than either of them in cross-validation accuracy. It is also simpler and more gene ral than HF, as it does not require a positive semi-definite approximation of the He ssian matrix to work well nor the setting of a damping parameter. The chief drawba ck versus HF is the need for memory to store a basis for the Krylov subspace.", "title": "" }, { "docid": "a0184870ca9830bbce30df1615e8bd0d", "text": "Debate on the validity and reliability of scientific methods often arises in the courtroom. When the government (i.e., the prosecution) is the proponent of evidence, the defense is obliged to challenge its admissibility. 
Regardless, those who seek to use DNA typing methodologies to analyze forensic biological evidence have a responsibility to understand the technology and its applications so a proper foundation(s) for its use can be laid. Mitochondrial DNA (mtDNA), an extranuclear genome, has certain features that make it desirable for forensics, namely, high copy number, lack of recombination, and matrilineal inheritance. mtDNA typing has become routine in forensic biology and is used to analyze old bones, teeth, hair shafts, and other biological samples where nuclear DNA content is low. To evaluate results obtained by sequencing the two hypervariable regions of the control region of the human mtDNA genome, one must consider the genetically related issues of nomenclature, reference population databases, heteroplasmy, paternal leakage, recombination, and, of course, interpretation of results. We describe the approaches, the impact some issues may have on interpretation of mtDNA analyses, and some issues raised in the courtroom.", "title": "" }, { "docid": "a4922f728f50fa06a63b826ed84c9f24", "text": "Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this “reality gap”. By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.", "title": "" }, { "docid": "9a5be4452928d80d6be8e8e0267dafa5", "text": "degeneration of the basal layer in the epidermis. In the dermis, perivascular or lichenoid infiltrate and the presence of melanin incontinence were the predominant changes noted. A recently developed lesion tends to show more predominant band-like lymphocytic infiltration and epidermal vacuolization rather than epidermal atrophy. Linear lesions can frequently occur at sites of scratching or trauma in patients with LP as a result of Koebner’s phenomenon, or, as in our case, they may appear spontaneously within the lines of Blaschko on the face. In acquired Blaschko linear inflammatory dermatosis, cutaneous antigenic mosaicism could be responsible for the susceptibility to induce mosaic T-cell responses. Because drugs had not been changed in type or dosage over several years of treatment, and underlying medical diseases had been well controlled, the possibility of drug-related reaction was thought to be low. 
Considering the clinical features in our patient, and the fact that exposed sites were frequently the first to be involved, it can be suggested that exposure to sunlight (even in a casual dose) may be a kind of stimuli to induce the lesion of LPP in a genetically susceptible patient. Usually the course is chronic and treatments are less effective for follicular LP or LPP than for classical LP. Topical tacrolimus, a member of the immunosuppressive macrolide family that suppresses T-cell activation, has been shown to be effective in the treatment of some mucosal and follicular LP. There is only one article about the successful treatment of LPP with topical tacrolimus. Although they showed over 50% improvement in seven of 13 patients after 4 months of treatment, the authors did not mention any case of complete clearance in their article. Moreover, the other six of the 13 patients did not show improvement in pigmentation. Therefore, in the present case, 1064-nm QSNY with low fluence treatment was chosen for treating pigmentation. The 1064-nm QSNY in nanosecond (ns) domain is strongly absorbed by the finely distributed melanin in dermal pigmented lesions. Moreover, 1064-nm QSNY with low fluence, which in a ‘‘top-hat’’ beam mode can evenly distribute energy density throughout the whole spot, is now widely used when treating darker skin types, because it greatly reduces the risk of epidermal injury and post-therapy dyschromia. In our patient, because of poor response to topical steroid, we started tacrolimus ointment for mainly targeting T cells, and for the treatment of pigmentation, we added QSNY treatment. It suggests that the combination treatment of 1064-nm low fluenced QSNY with topical tacrolimus may be a good therapeutic option for patients with recalcitrant facial LPP in dark-skinned individuals.", "title": "" }, { "docid": "ac8df493a25afe5801a4e29b4a71c28b", "text": "We present a principled approach to uncover the structure of visual data by solving a novel deep learning task coined visual permutation learning. The goal of this task is to find the permutation that recovers the structure of data from shuffled versions of it. In the case of natural images, this task boils down to recovering the original image from patches shuffled by an unknown permutation matrix. Unfortunately, permutation matrices are discrete, thereby posing difficulties for gradient-based methods. To this end, we resort to a continuous approximation of these matrices using doubly-stochastic matrices which we generate from standard CNN predictions using Sinkhorn iterations. Unrolling these iterations in a Sinkhorn network layer, we propose DeepPermNet, an end-to-end CNN model for this task. The utility of DeepPermNet is demonstrated on two challenging computer vision problems, namely, (i) relative attributes learning and (ii) self-supervised representation learning. Our results show state-of-the-art performance on the Public Figures and OSR benchmarks for (i) and on the classification and segmentation tasks on the PASCAL VOC dataset for (ii).", "title": "" }, { "docid": "01dbc861c46c26b22cf2322678eb9ab2", "text": "To facilitate computer analysis of visual art, in the form of paintings, we introduce Pandora (Paintings Dataset for Recognizing the Art movement) database, a collection of digitized paintings labelled with respect to the artistic movement. 
Noting that the set of databases available as benchmarks for evaluation is highly reduced and most existing ones are limited in variability and number of images, we propose a novel large scale dataset of digital paintings. The database consists of more than 7700 images from 12 art movements. Each genre is illustrated by a number of images varying from 250 to nearly 1000. We investigate how local and global features and classification systems are able to recognize the art movement. Our experimental results suggest that accurate recognition is achievable by a combination of various categories.", "title": "" }, { "docid": "5ffcc588301f0f577dfe9621b7420903", "text": "Video summarization and video captioning are considered two separate tasks in existing studies. For longer videos, automatically identifying the important parts of video content and annotating them with captions will enable a richer and more concise condensation of the video. We propose a general neural network configuration that jointly considers two supervisory signals (i.e., an image-based video summary and text-based video captions) in the training phase and generates both a video summary and corresponding captions for a given video in the test phase. Our main idea is that the summary signals can help a video captioning model learn to focus on important frames. On the other hand, caption signals can help a video summarization model to learn better semantic representations. Jointly modeling both the video summarization and the video captioning tasks offers a novel end-to-end solution that generates a captioned video summary enabling users to index and navigate through the highlights in a video. Moreover, our experiments show the joint model can achieve better performance than state-of-the-art approaches in both individual tasks.", "title": "" }, { "docid": "8d180d1b78fd64168c1808468bc8e032", "text": "Even great efforts have been made for decades, the recognition of human activities is still an unmature technology that attracted plenty of people in computer vision. In this paper, a system framework is presented to recognize multiple kinds of activities from videos by an SVM multi-class classifier with a binary tree architecture. The framework is composed of three functionally cascaded modules: (a) detecting and locating people by non-parameter background subtraction approach, (b) extracting various of features such as local ones from the minimum bounding boxes of human blobs in each frames and a newly defined global one, contour coding of the motion energy image (CCMEI), and (c) recognizing activities of people by SVM multi-class classifier whose structure is determined by a clustering process. The thought of hierarchical classification is introduced and multiple SVMs are aggregated to accomplish the recognition of actions. Each SVM in the multi-class classifier is trained separately to achieve its best classification performance by choosing proper features before they are aggregated. Experimental results both on a homebrewed activity data set and the public Schüldt’s data set show the perfect identification performance and high robustness of the system. 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0837c9af9b69367a5a6e32b2f72cef0a", "text": "Machine learning techniques are increasingly being used in making relevant predictions and inferences on individual subjects neuroimaging scan data. 
Previous studies have mostly focused on categorical discrimination of patients and matched healthy controls and more recently, on prediction of individual continuous variables such as clinical scores or age. However, these studies are greatly hampered by the large number of predictor variables (voxels) and low observations (subjects) also known as the curse-of-dimensionality or small-n-large-p problem. As a result, feature reduction techniques such as feature subset selection and dimensionality reduction are used to remove redundant predictor variables and experimental noise, a process which mitigates the curse-of-dimensionality and small-n-large-p effects. Feature reduction is an essential step before training a machine learning model to avoid overfitting and therefore improving model prediction accuracy and generalization ability. In this review, we discuss feature reduction techniques used with machine learning in neuroimaging studies.", "title": "" }, { "docid": "10b851c1d0113549764b80434c4bac5e", "text": "In this paper, a simplified thermal model for variable speed self cooled induction motors is proposed and experimentally verified. The thermal model is based on simple equations that are compared with more complex equations well known in literature. The proposed thermal model allows to predict the over temperature in the main parts of the motor, starting from the measured or the estimated losses in the machine. In the paper the description of the thermal model set up is reported in detail. Finally, the model is used to define the correct power derating for a variable speed PWM induction motor drive.", "title": "" }, { "docid": "7a2e4588826541a1b6d3a493d7601e0c", "text": "Sports analytics in general, and football (soccer in USA) analytics in particular, have evolved in recent years in an amazing way, thanks to automated or semi-automated sensing technologies that provide high-fidelity data streams extracted from every game. In this paper we propose a data-driven approach and show that there is a large potential to boost the understanding of football team performance. From observational data of football games we extract a set of pass-based performance indicators and summarize them in the H indicator. We observe a strong correlation among the proposed indicator and the success of a team, and therefore perform a simulation on the four major European championships (78 teams, almost 1500 games). The outcome of each game in the championship was replaced by a synthetic outcome (win, loss or draw) based on the performance indicators computed for each team. We found that the final rankings in the simulated championships are very close to the actual rankings in the real championships, and show that teams with high ranking error show extreme values of a defense/attack efficiency measure, the Pezzali score. Our results are surprising given the simplicity of the proposed indicators, suggesting that a complex systems' view on football data has the potential of revealing hidden patterns and behavior of superior quality.", "title": "" }, { "docid": "fe1e97ecf8d86f8610635834506942af", "text": "The ad hoc network is a system of network elements that combine to form a network requiring little or no planning. This may not be feasible as nodes can enter and leave the network. In such networks, each node can receive the packet (host) and the packet sender (router) to act. The goal of routing is finding paths that meet the needs of the network and effectively use network resources. 
This paper presents a method for QoS routing in ad hoc networks based on ant colony optimization (ACO) algorithm and fuzzy logic. The advantages of this method flexibility and routing are based on several criteria. The results show that the proposed method in comparison with the algorithm IACA has better performance, higher efficiency and greater throughput. Therefore, the combination of ant algorithm with Fuzzy Logic due to its simplicity fuzzy computing is appropriate for QoS routing.", "title": "" }, { "docid": "82f18b2c38969f556ff4464ecb99f837", "text": "Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models— plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs)—can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models’ ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.", "title": "" }, { "docid": "db7ed2c615bb93c6cec19b65f7b4366d", "text": "Virtual anthropology consists of the introduction of modern slice imaging to biological and forensic anthropology. Thanks to this non-invasive scientific revolution, some classifications and staging systems, first based on dry bone analysis, can be applied to cadavers with no need for specific preparation, as well as to living persons. Estimation of bone and dental age is one of the possibilities offered by radiology. Biological age can be estimated in clinical forensic medicine as well as in living persons. Virtual anthropology may also help the forensic pathologist to estimate a deceased person’s age at death, which together with sex, geographical origin and stature, is one of the important features determining a biological profile used in reconstructive identification. For this forensic purpose, the radiological tools used are multislice computed tomography and, more recently, X-ray free imaging techniques such as magnetic resonance imaging and ultrasound investigations. We present and discuss the value of these investigations for age estimation in anthropology.", "title": "" }, { "docid": "3caa44b574e8db885ad68169fe2446d8", "text": "Dengue is a mosquito-borne fever in the southernmost part of India. It is caused by female mosquitoes, grown in stagnant water .The major symptoms for dengue are fever, bleeding, pain behind eyes, abdominal pain, fatigue, loss of appetite etc., Early diagnosis is the most important, in order to save the human from this deadly disease. Classification techniques helps to predict the disease at an early stage. In this research, Bayes belief network is classification technique is used to predict the probability for various disease occurrence using the probability distribution. 
Keywords— Prediction, Diagnosis, Bayes belief network, Probability distribution, Classification", "title": "" }, { "docid": "a494d6d9c8919ade3590ed7f6cf44451", "text": "Most algorithms commonly exploited for radar imaging are based on linear models that describe only direct scattering events from the targets in the investigated scene. This assumption is rarely verified in practical scenarios where the objects to be imaged interact with each other and with surrounding environment producing undesired multipath signals. These signals manifest in radar images as “ghosts\" that usually impair the reliable identification of the targets. The recent literature in the field is attempting to provide suitable techniques for multipath suppression from one side and from the other side is focusing on the exploitation of the additional information conveyed by multipath to improve target detection and localization. This work addresses the first problem with a specific focus on multipath ghosts caused by target-to-target interactions. In particular, the study is performed with regard to metallic scatterers by means of the linearized inverse scattering approach based on the physical optics (PO) approximation. A simple model is proposed in the case of point-like targets to gain insight into the ghosts problem so as to devise possible measurement and processing strategies for their mitigation. Finally, the effectiveness of these methods is assessed by reconstruction results obtained from full-wave synthetic data.", "title": "" }, { "docid": "13055a3a35f058eb3622fb60afc436fc", "text": "AIM\nTo investigate the probability of and factors influencing periapical status of teeth following primary (1°RCTx) or secondary (2°RCTx) root canal treatment.\n\n\nMETHODOLOGY\nThis prospective study involved annual clinical and radiographic follow-up of 1°RCTx (1170 roots, 702 teeth and 534 patients) or 2°RCTx (1314 roots, 750 teeth and 559 patients) carried out by Endodontic postgraduate students for 2-4 (50%) years. Pre-, intra- and postoperative data were collected prospectively on customized forms. The proportion of roots with complete periapical healing was estimated, and prognostic factors were investigated using multiple logistic regression models. Clustering effects within patients were adjusted in all models using robust standard error.\n\n\nRESULTS\nproportion of roots with complete periapical healing after 1°RCTx (83%; 95% CI: 81%, 85%) or 2°RCTx (80%; 95% CI: 78%, 82%) were similar. Eleven prognostic factors were identified. 
The conditions that were found to improve periapical healing significantly were: the preoperative absence of a periapical lesion (P = 0.003); in presence of a periapical lesion, the smaller its size (P ≤ 0.001), the better the treatment prognosis; the absence of a preoperative sinus tract (P = 0.001); achievement of patency at the canal terminus (P = 0.001); extension of canal cleaning as close as possible to its apical terminus (P = 0.001); the use of ethylene-diamine-tetra-acetic acid (EDTA) solution as a penultimate wash followed by final rinse with NaOCl solution in 2°RCTx cases (P = 0.002); abstaining from using 2% chlorexidine as an adjunct irrigant to NaOCl solution (P = 0.01); absence of tooth/root perforation (P = 0.06); absence of interappointment flare-up (pain or swelling) (P =0.002); absence of root-filling extrusion (P ≤ 0.001); and presence of a satisfactory coronal restoration (P ≤ 0.001).\n\n\nCONCLUSIONS\nSuccess based on periapical health associated with roots following 1°RCTx (83%) or 2°RCTx (80%) was similar, with 10 factors having a common effect on both, whilst the 11th factor 'EDTA as an additional irrigant' had different effects on the two treatments.", "title": "" }, { "docid": "47ad04e8c93d39a500ab79a6d25d32f0", "text": "OpenGV is a new C++ library for calibrated realtime 3D geometric vision. It unifies both central and non-central absolute and relative camera pose computation algorithms within a single library. Each problem type comes with minimal and non-minimal closed-form solvers, as well as non-linear iterative optimization and robust sample consensus methods. OpenGV therefore contains an unprecedented level of completeness with regard to calibrated geometric vision algorithms, and it is the first library with a dedicated focus on a unified real-time usage of non-central multi-camera systems, which are increasingly popular in robotics and in the automotive industry. This paper introduces OpenGV's flexible interface and abstraction for multi-camera systems, and outlines the performance of all contained algorithms. It is our hope that the introduction of this open-source platform will motivate people to use it and potentially also include more algorithms, which would further contribute to the general accessibility of geometric vision algorithms, and build a common playground for the fair comparison of different solutions.", "title": "" } ]
scidocsrr
7d4486def24011ceff09fdaa7607c00c
Design and Implementation of Digital dining in Restaurants using Android
[ { "docid": "897efb599e554bf453a7b787c5874d48", "text": "The Rampant growth of wireless technology and Mobile devices in this era is creating a great impact on our lives. Some early efforts have been made to combine and utilize both of these technologies in advancement of hospitality industry. This research work aims to automate the food ordering process in restaurant and also improve the dining experience of customers. In this paper we discuss about the design & implementation of automated food ordering system with real time customer feedback (AOS-RTF) for restaurants. This system, implements wireless data access to servers. The android application on user’s mobile will have all the menu details. The order details from customer’s mobile are wirelessly updated in central database and subsequently sent to kitchen and cashier respectively. The restaurant owner can manage the menu modifications easily. The wireless application on mobile devices provide a means of convenience, improving efficiency and accuracy for restaurants by saving time, reducing human errors and real-time customer feedback. This system successfully over comes the drawbacks in earlier PDA based food ordering system and is less expensive and more effective than the multi-touchable restaurant management systems.", "title": "" } ]
[ { "docid": "89e9d32e14da1acd74e23f8cecea5d8e", "text": "BACKGROUND\nDespite considerable progress in the treatment of post-traumatic stress disorder (PTSD), a large percentage of individuals remain symptomatic following gold-standard therapies. One route to improving care is examining affective disturbances that involve other emotions beyond fear and threat. A growing body of research has implicated shame in PTSD's development and course, although to date no review of this specific literature exists. This scoping review investigated the link between shame and PTSD and sought to identify research gaps.\n\n\nMETHODS\nA systematic database search of PubMed, PsycInfo, Embase, Cochrane, and CINAHL was conducted to find original quantitative research related to shame and PTSD.\n\n\nRESULTS\nForty-seven studies met inclusion criteria. Review found substantial support for an association between shame and PTSD as well as preliminary evidence suggesting its utility as a treatment target. Several design limitations and under-investigated areas were recognized, including the need for a multimodal assessment of shame and more longitudinal and treatment-focused research.\n\n\nCONCLUSION\nThis review provides crucial synthesis of research to date, highlighting the prominence of shame in PTSD, and its likely relevance in successful treatment outcomes. The present review serves as a guide to future work into this critical area of study.", "title": "" }, { "docid": "6eebe30d2e4f7ae4bc1ffb26287f8054", "text": "Attention mechanisms in sequence to sequence models have shown great ability and wonderful performance in various natural language processing (NLP) tasks, such as sentence embedding, text generation, machine translation, machine reading comprehension, etc. Unfortunately, existing attention mechanisms only learn either high-level or low-level features. In this paper, we think that the lack of hierarchical mechanisms is a bottleneck in improving the performance of the attention mechanisms, and propose a novel Hierarchical Attention Mechanism (Ham) based on the weighted sum of different layers of a multi-level attention. Ham achieves a state-of-the-art BLEU score of 0.26 on Chinese poem generation task and a nearly 6.5% averaged improvement compared with the existing machine reading comprehension models such as BIDAF and Match-LSTM. Furthermore, our experiments and theorems reveal that Ham has greater generalization and representation ability than existing attention mechanisms.", "title": "" }, { "docid": "4cb41f9de259f18cd8fe52d2f04756a6", "text": "The Effects of Lottery Prizes on Winners and their Neighbors: Evidence from the Dutch Postcode Lottery Each week, the Dutch Postcode Lottery (PCL) randomly selects a postal code, and distributes cash and a new BMW to lottery participants in that code. We study the effects of these shocks on lottery winners and their neighbors. Consistent with the life-cycle hypothesis, the effects on winners’ consumption are largely confined to cars and other durables. Consistent with the theory of in-kind transfers, the vast majority of BMW winners liquidate their BMWs. We do, however, detect substantial social effects of lottery winnings: PCL nonparticipants who live next door to winners have significantly higher levels of car consumption than other nonparticipants. 
JEL Classification: D12, C21", "title": "" }, { "docid": "351ef0cd284fd0f1af8b92dbd51a6e1a", "text": "The continuously increasing efficiency and power density requirement of the AC-DC front-end converter posed a big challenge for today's power factor correction (PFC) circuit design. The multi-channel interleaved PFC is a promising candidate to achieve the goals. In this paper, the multi-channel interleaving impact on the EMI filter design and the output capacitor life time is investigated. By properly choosing the interleaving channel number and the switching frequency, the EMI filter size and cost can be effectively reduced. Further more; multi-channel PFC with asymmetrical interleaving strategy is introduced, and the additional benefit on the EMI filter is identified. At the output side, different interleaving schemes impact on the output capacitor ripple cancellation effect is also investigated and compared.", "title": "" }, { "docid": "026408a6ad888ea0bcf298a23ef77177", "text": "The microwave power transmission is an approach for wireless power transmission. As an important component of a microwave wireless power transmission systems, microwave rectennas are widely studied. A rectenna based on a microstrip dipole antenna and a microwave rectifier with high conversion efficiency were designed at 2.45 GHz. The dipole antenna achieved a gain of 5.2 dBi, a return loss greater than 10 dB, and a bandwidth of 20%. The microwave to DC (MW-DC) conversion efficiency of the rectifier was measured as 83% with 20 dBm input power and 600 Ω load. There are 72 rectennas to form an array with an area of 50 cm by 50 cm. The measured results show that the arrangement of the rectenna connection is an effective way to improve the total conversion efficiency, when the microwave power distribution is not uniform on rectenna array. The experimental results show that the highest microwave power transmission efficiency reaches 67.6%.", "title": "" }, { "docid": "4259a2252b1065a011655d9f25498b10", "text": "In this paper we shall prove two results. The first one is of interest in number theory and automorphic forms, while the second is a result in harmonic analysis on p-adic reductive groups. The two results, even though seemingly different, are fairly related by a conjecture of Langlands [13]. To explain the first result let F be a number field and denote by AF its ring of adeles. Given a place v of F, we let Fv denote its completion at v. Let 03C0 be a cusp form on GL2(AF). Write n = Q9v1tv. · For an unramified v, let diag(cxv, 03B2v) denote the diagonal element in GL2(C), the L-group of GL2, attached to 1tv. For a fixed positive integer m, let rm denote the m-th symmetric power representation of the standard representation r, of GL2(C) which is an irreducible (m + 1 )-dimensional representation. Then, for a complex number s, the local Langlands L-function [14] attached to 1tv and rm is", "title": "" }, { "docid": "3e2df9d6ed3cad12fcfda19d62a0b42e", "text": "We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. 
Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.", "title": "" }, { "docid": "d46916f82e8f6ac8f4f3cb3df1c6875f", "text": "Mobile devices are becoming the prevalent computing platform for most people. TouchDevelop is a new mobile development environment that enables anyone with a Windows Phone to create new apps directly on the smartphone, without a PC or a traditional keyboard. At the core is a new mobile programming language and editor that was designed with the touchscreen as the only input device in mind. Programs written in TouchDevelop can leverage all phone sensors such as GPS, cameras, accelerometer, gyroscope, and stored personal data such as contacts, songs, pictures. Thousands of programs have already been written and published with TouchDevelop.", "title": "" }, { "docid": "c0ef15616ba357cb522b828e03a5298c", "text": "This paper introduces the compact genetic algorithm (cGA) which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA’s parameters and operators. The paper clearly illustrates the mapping of the simple GA’s parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications show us a direction that can lead to the design of more efficient GA’s.", "title": "" }, { "docid": "c99389ad72e35abb651f9002f6053ab3", "text": "Person re-identification aims to match the images of pedestrians across different camera views from different locations. This is a challenging intelligent video surveillance problem that remains an active area of research due to the need for performance improvement. Person re-identification involves two main steps: feature representation and metric learning. Although the keep it simple and straightforward (KISS) metric learning method for discriminative distance metric learning has been shown to be effective for the person re-identification, the estimation of the inverse of a covariance matrix is unstable and indeed may not exist when the training set is small, resulting in poor performance. Here, we present dual-regularized KISS (DR-KISS) metric learning. By regularizing the two covariance matrices, DR-KISS improves on KISS by reducing overestimation of large eigenvalues of the two estimated covariance matrices and, in doing so, guarantees that the covariance matrix is irreversible. Furthermore, we provide theoretical analyses for supporting the motivations. Specifically, we first prove why the regularization is necessary. Then, we prove that the proposed method is robust for generalization. 
We conduct extensive experiments on three challenging person re-identification datasets, VIPeR, GRID, and CUHK 01, and show that DR-KISS achieves new state-of-the-art performance.", "title": "" }, { "docid": "67808f54305bc2bb2b3dd666f8b4ef42", "text": "Sensing devices are becoming the source of a large portion of the Web data. To facilitate the integration of sensed data with data from other sources, both sensor stream sources and data are being enriched with semantic descriptions, creating Linked Stream Data. Despite its enormous potential, little has been done to explore Linked Stream Data. One of the main characteristics of such data is its “live” nature, which prohibits existing Linked Data technologies to be applied directly. Moreover, there is currently a lack of tools to facilitate publishing Linked Stream Data and making it available to other applications. To address these issues we have developed the Linked Stream Middleware (LSM), a platform that brings together the live real world sensed data and the Semantic Web. A LSM deployment is available at http://lsm.deri.ie/. It provides many functionalities such as: i) wrappers for real time data collection and publishing; ii) a web interface for data annotation and visualisation; and iii) a SPARQL endpoint for querying unified Linked Stream Data and Linked Data. In this paper we describe the system architecture behind LSM, provide details how Linked Stream Data is generated, and demonstrate the benefits of the platform by showcasing its interface.", "title": "" }, { "docid": "455b2a46ef0a6a032686eaaedf9cacf3", "text": "Recently, taxonomy has attracted much attention. Both automatic construction solutions and human-based computation approaches have been proposed. The automatic methods suffer from the problem of either low precision or low recall and human computation, on the other hand, is not suitable for large scale tasks. Motivated by the shortcomings of both approaches, we present a hybrid framework, which combines the power of machine-based approaches and human computation (the crowd) to construct a more complete and accurate taxonomy. Specifically, our framework consists of two steps: we first construct a complete but noisy taxonomy automatically, then crowd is introduced to adjust the entity positions in the constructed taxonomy. However, the adjustment is challenging as the budget (money) for asking the crowd is often limited. In our work, we formulate the problem of finding the optimal adjustment as an entity selection optimization (ESO) problem, which is proved to be NP-hard. We then propose an exact algorithm and a more efficient approximation algorithm with an approximation ratio of 1/2(1-1/e). We conduct extensive experiments on real datasets, the results show that our hybrid approach largely improves the recall of the taxonomy with little impairment for precision.", "title": "" }, { "docid": "cdb295a5a98da527a244d9b9f490407e", "text": "The Toggle-based <italic>X</italic>-masking method requires a single toggle at a given cycle, there is a chance that non-<italic>X</italic> values are also masked. Hence, the non-<italic>X</italic> value over-masking problem may cause a fault coverage degradation. In this paper, a scan chain partitioning scheme is described to alleviate non-<italic>X </italic> bit over-masking problem arising from Toggle-based <italic>X</italic>-Masking method. The scan chain partitioning method finds a scan chain combination that gives the least toggling conflicts. 
The experimental results show that the amount of over-masked bits is significantly reduced, and it is further reduced when the proposed method is incorporated with <italic>X</italic>-canceling method. However, as the number of scan chain partitions increases, the control data for decoder increases. To reduce a control data overhead, this paper exploits a Huffman coding based data compression. Assuming two partitions, the size of control bits is even smaller than the conventional <italic>X </italic>-toggling method that uses only one decoder. In addition, selection rules of <italic>X</italic>-bits delivered to <italic>X</italic>-Canceling MISR are also proposed. With the selection rules, a significant test time increase can be prevented.", "title": "" }, { "docid": "9dd3157c4c94c62e2577ace7f6c41629", "text": "BACKGROUND\nThere is a growing concern over the addictiveness of Social Media use. Additional representative indicators of impaired control are needed in order to distinguish presumed social media addiction from normal use.\n\n\nAIMS\n(1) To examine the existence of time distortion during non-social media use tasks that involve social media cues among those who may be considered at-risk for social media addiction. (2) To examine the usefulness of this distortion for at-risk vs. low/no-risk classification.\n\n\nMETHOD\nWe used a task that prevented Facebook use and invoked Facebook reflections (survey on self-control strategies) and subsequently measured estimated vs. actual task completion time. We captured the level of addiction using the Bergen Facebook Addiction Scale in the survey, and we used a common cutoff criterion to classify people as at-risk vs. low/no-risk of Facebook addiction.\n\n\nRESULTS\nThe at-risk group presented significant upward time estimate bias and the low/no-risk group presented significant downward time estimate bias. The bias was positively correlated with Facebook addiction scores. It was efficacious, especially when combined with self-reported estimates of extent of Facebook use, in classifying people to the two categories.\n\n\nCONCLUSIONS\nOur study points to a novel, easy to obtain, and useful marker of at-risk for social media addiction, which may be considered for inclusion in diagnosis tools and procedures.", "title": "" }, { "docid": "187fe997bb78bf60c5aaf935719df867", "text": "Access to clean, affordable and reliable energy has been a cornerstone of the world's increasing prosperity and economic growth since the beginning of the industrial revolution. Our use of energy in the twenty–first century must also be sustainable. Solar and water–based energy generation, and engineering of microbes to produce biofuels are a few examples of the alternatives. This Perspective puts these opportunities into a larger context by relating them to a number of aspects in the transportation and electricity generation sectors. It also provides a snapshot of the current energy landscape and discusses several research and development opportunities and pathways that could lead to a prosperous, sustainable and secure energy future for the world.", "title": "" }, { "docid": "28370dc894584f053a5bb029142ad587", "text": "Pharmaceutical parallel trade in the European Union is a large and growing phenomenon, and hope has been expressed that it has the potential to reduce prices paid by health insurance and consumers and substantially to raise overall welfare. In this paper we examine the phenomenon empirically, using data on prices and volumes of individual imported products. 
We have found that the gains from parallel trade accrue mostly to the distribution chain rather than to health insurance and consumers. This is because in destination countries parallel traded drugs are priced just below originally sourced drugs. We also test to see whether parallel trade has a competition impact on prices in destination countries and find that it does not. Such competition effects as there are in pharmaceuticals come mainly from the presence of generics. Accordingly, instead of a convergence to the bottom in EU pharmaceutical prices, the evidence points at ‘convergence to the top’. This is explained by the fact that drug prices are subjected to regulation in individual countries, and by the limited incentives of purchasers to respond to price differentials.", "title": "" }, { "docid": "c01bb81c729f900ee468dae62738ab09", "text": "The success of convolutional networks in learning problems involving planar signals such as images is due to their ability to exploit the translation symmetry of the data distribution through weight sharing. Many areas of science and egineering deal with signals with other symmetries, such as rotation invariant data on the sphere. Examples include climate and weather science, astrophysics, and chemistry. In this paper we present spherical convolutional networks. These networks use convolutions on the sphere and rotation group, which results in rotational weight sharing and rotation equivariance. Using a synthetic spherical MNIST dataset, we show that spherical convolutional networks are very effective at dealing with rotationally invariant classification problems.", "title": "" }, { "docid": "ccb5a426e9636186d2819f34b5f0d5e8", "text": "MOTIVATION\nThe discovery of regulatory pathways, signal cascades, metabolic processes or disease models requires knowledge on individual relations like e.g. physical or regulatory interactions between genes and proteins. Most interactions mentioned in the free text of biomedical publications are not yet contained in structured databases.\n\n\nRESULTS\nWe developed RelEx, an approach for relation extraction from free text. It is based on natural language preprocessing producing dependency parse trees and applying a small number of simple rules to these trees. We applied RelEx on a comprehensive set of one million MEDLINE abstracts dealing with gene and protein relations and extracted approximately 150,000 relations with an estimated performance of both 80% precision and 80% recall.\n\n\nAVAILABILITY\nThe used natural language preprocessing tools are free for use for academic research. Test sets and relation term lists are available from our website (http://www.bio.ifi.lmu.de/publications/RelEx/).", "title": "" }, { "docid": "b19fb7f7471d3565e79dbaab3572bb4d", "text": "Self-enucleation or oedipism is a specific manifestation of psychiatric illness distinct from the milder forms of self-inflicted ocular injury. In this article, we discuss the previously unreported medical complication of subarachnoid hemorrhage accompanying self-enucleation. The diagnosis was suspected from the patient's history and was confirmed by computed tomographic scan of the head. This complication may be easily missed in the overtly psychotic patient. 
Specific steps in the medical management of self-enucleation are discussed, and medical complications of self-enucleation are reviewed.", "title": "" }, { "docid": "4446ec55b23ae88192764cffd519afd3", "text": "We present Inferential Power Analysis (IPA), a new class of attacks based on power analysis. An IPA attack has two stages: a profiling stage and a key extraction stage. In the profiling stage, intratrace differencing, averaging, and other statistical operations are performed on a large number of power traces to learn details of the implementation, leading to the location and identification of key bits. In the key extraction stage, the key is obtained from a very few power traces; we have successfully extracted keys from a single trace. Compared to differential power analysis, IPA has the advantages that the attacker does not need either plaintext or ciphertext, and that, in the key extraction stage, a key can be obtained from a small number of traces.", "title": "" } ]
scidocsrr
5e9773e3a54f5eee1e077500ab6c01d5
Value-centric design of the internet-of-things solution for food supply chain: Value creation, sensor portfolio and information fusion
[ { "docid": "7f19a1aa06bb21443992cb5283636d9f", "text": "Traceability is important in the food supply chain to ensure the consumerspsila food safety, especially for the fresh products. In recent years, many solutions which applied various emerging technology have been proposed to improve the traceability of fresh product. However, the traceability system needs to be customized to satisfy different requirements. The system depends on the different product properties and supply chain models. This paper proposed a RFID-enabled traceability system for live fish supply chain. The system architecture is designed according to the specific requirement gathered in the life fish processing. Likewise, it is adaptive for the small and medium enterprises. The RFID tag is put on each live fish and is regarded as the mediator which links the live fish logistic center, retail restaurants and consumers for identification. The sensors controlled by the PLC are used to collect the information in farming as well as the automatic transporting processes. The traceability information is designed to be exchanged and used on a Web-based system for farmers and consumers. The system was implemented and deployed in the live fish logistic center for trial, and the results are valuable for practical reference.", "title": "" } ]
[ { "docid": "db0bb2489a29f23fb49cec395ee7dfa8", "text": "Recently, a number of grasp detection methods have been proposed that can be used to localize robotic grasp configurations directly from sensor data without estimating object pose. The underlying idea is to treat grasp perception analogously to object detection in computer vision. These methods take as input a noisy and partially occluded RGBD image or point cloud and produce as output pose estimates of viable grasps, without assuming a known CAD model of the object. Although these methods generalize grasp knowledge to new objects well, they have not yet been demonstrated to be reliable enough for wide use. Many grasp detection methods achieve grasp success rates (grasp successes as a fraction of the total number of grasp attempts) between 75% and 95% for novel objects presented in isolation or in light clutter. Not only are these success rates too low for practical grasping applications, but the light clutter scenarios that are evaluated often do not reflect the realities of real world grasping. This paper proposes a number of innovations that together result in a significant improvement in grasp detection performance. The specific improvement in performance due to each of our contributions is quantitatively measured either in simulation or on robotic hardware. Ultimately, we report a series of robotic experiments that average a 93% end-to-end grasp success rate for novel objects presented in dense clutter.", "title": "" }, { "docid": "c6068a61ba1497d52ada0906a3d36854", "text": "EPIC (Executive Process-Interactive Control) is a cognitive architecture especially suited for modeling human multimodal and multiple-task performance. The EPIC architecture includes peripheral sensory-motor processors surrounding a production-rule cognitive processor, and is being used to construct precise computational models for a variety of HCI situations. Some of these models are briefly illustrated here to demonstrate how EPIC clarifies basic properties of human performance and provides usefully precise accounts of performance speed and accuracy.", "title": "" }, { "docid": "5bce8440413c71257b0951de1da61a7a", "text": "Brain Computer Interface (BCI) is the method of communicating the human brain with an external device. People who are incapable to communicate conventionally due to spinal cord injury are in need of Brain Computer Interface. Brain Computer Interface uses the brain signals to take actions, control, actuate and communicate with the world directly using brain integration with peripheral devices and systems. Brain waves are in necessitating to eradicate noises and to extract the valuable features. Artificial Neural Network (ANN) is a functional pattern classification technique which is trained all the way through the error Back-Propagation algorithm. In this paper in order to classify the mental tasks, the brain signals are trained using neural network and also using Principal Component Analysis with Artificial Neural Network. Principal Component Analysis (PCA) is a dominant tool for analyzing data and finding patterns in it. In Principal Component Analysis, data compression is possible and it projects higher dimensional data to lower dimensional data. By using Principal Component Analysis with Neural Network, the redundant data in the dataset is eliminated first and the obtained data is trained using Neural Network. EEG data for five cognitive tasks from five subjects are taken from the Colorado University database. 
Pattern classification is applied for the data of all tasks of one subject using Neural Network and also using Principal Component Analysis with Neural Network. Finally it is observed that the correctly classified percentage of data is better in Principal Component Analysis with Neural Network compared to Neural Network alone.", "title": "" }, { "docid": "3fbb2bb37f44cb8f300fd28cdbd8bc06", "text": "The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (Some figures may appear in colour only in the online journal)", "title": "" }, { "docid": "8a55bf5b614d750a7de6ac34dc321b10", "text": "Unsupervised image-to-image translation aims at learning the relationship between samples from two image domains without supervised pair information. The relationship between two domain images can be one-to-one, one-to-many or many-to-many. In this paper, we study the one-to-many unsupervised image translation problem in which an input sample from one domain can correspond to multiple samples in the other domain. To learn the complex relationship between the two domains, we introduce an additional variable to control the variations in our one-to-many mapping. A generative model with an XO-structure, called the XOGAN, is proposed to learn the cross domain relationship among the two domains and the additional variables. Not only can we learn to translate between the two image domains, we can also handle the translated images with additional variations. Experiments are performed on unpaired image generation tasks, including edges-to-objects translation and facial image translation. We show that the proposed XOGAN model can generate plausible images and control variations, such as color and texture, of the generated images. Moreover, while state-of-the-art unpaired image generation algorithms tend to generate images with monotonous colors, XOGAN can generate more diverse results.", "title": "" }, { "docid": "58984ddb8d4c28dc63caa29bc245e259", "text": "OpenCL is an open standard to write parallel applications for heterogeneous computing systems. Since its usage is restricted to a single operating system instance, programmers need to use a mix of OpenCL and MPI to program a heterogeneous cluster. In this paper, we introduce an MPI-OpenCL implementation of the LINPACK benchmark for a cluster with multi-GPU nodes. The LINPACK benchmark is one of the most widely used benchmark applications for evaluating high performance computing systems. Our implementation is based on High Performance LINPACK (HPL) and uses the blocked LU decomposition algorithm. 
We show that optimizations aimed at reducing the overhead of CPUs are necessary to overcome the performance gap between the CPUs and the multiple GPUs. Our LINPACK implementation achieves 93.69 Tflops (46 percent of the theoretical peak) on the target cluster with 49 nodes, each node containing two eight-core CPUs and four GPUs.", "title": "" }, { "docid": "3fb6cec95fcaa0f8b6c6e4f649591b35", "text": "This paper presents the performance of DSP, image and 3D applications on recent general-purpose microprocessors using streaming SIMD ISA extensions (integer and floating point). The 9 benchmarks we use for this evaluation have been optimized for DLP and cache use with SIMD extensions and data prefetch. The result of these cumulated optimizations is a speedup that ranges from 1.9 to 7.1. All the benchmarks were originally computation bound and 7 become memory bandwidth bound with the addition of SIMD and data prefetch. Quadrupling the memory bandwidth has no effect on original kernels but improves the performance of SIMD kernels by 15-55%.", "title": "" }, { "docid": "38d1075285bd11b79f593b4f81427e7f", "text": "One of the few positive effects of the recent financial crisis has been the revival of interest in the short-run macroeconomic effects of government spending and tax changes. Before 2008, the topic of stimulus effects of fiscal policy was a backwater compared to research on monetary policy. One reason for the lack of interest was the belief that the lags in implementing fiscal policy were typically too long to be useful for combating recessions. Perhaps another reason was that central banks sponsored many more conferences than government treasury departments. When the economy fell off the cliff in 2008 and the Fed reached the dreaded “zero lower bound” on interest rates, however, it became abundantly clear that more research was needed. Given the upsurge in research on this topic, we now have many more resources to draw upon when asked “what is the government spending multiplier?” In this essay, I will begin by briefly reviewing what theory has to say about the potential effects. As I will discuss, “the multiplier” is a nebulous concept that depends very much on the type of government spending, its persistence, and how it is financed. I will then go on to review the aggregate empirical evidence for the United States, as well as the cross-locality evidence on multipliers. I will conclude that the U.S. aggregate multiplier for a temporary, deficit-financed increase in government purchases (that enter separately in the utility function and have no direct effect on private sector production functions) is probably between 0.8 and 1.5. Reasonable people can argue, however, that the data do not reject 0.5 or 2.0. Can Government Purchases Stimulate the Economy?", "title": "" }, { "docid": "5654b1a8a127d7aa08487e533cb26f7a", "text": "This paper examines the use of interdisciplinary project co-design as a mechanism for increasing the capacity of a school, and promoting the growth of teachers' professional practice in an urban high school setting. Changing teaching practices and the professional culture within a school can be extremely difficult. Simply providing resources about novel strategies can be ineffective. In fact, in some school cultures, suggestions for classroom practice change can be received with hostility, being viewed by some teachers as acts questioning their professional competence.
This study describes how a strategically chosen task, interdisciplinary project co-design, was used by external consultants as a productive, non-threatening mechanism for instructional improvement, by simultaneously enhancing classroom practices and cultivating the growth of professional school community and organizational practices.", "title": "" }, { "docid": "9c20a64fad54b5416b4716090a2e7c51", "text": "Location-Based Social Networks (LBSNs) enable their users to share with their friends the places they go to and whom they go with. Additionally, they provide users with recommendations for Points of Interest (POI) they have not visited before. This functionality is of great importance for users of LBSNs, as it allows them to discover interesting places in populous cities that are not easy to explore. For this reason, previous research has focused on providing recommendations to LBSN users. Nevertheless, while most existing work focuses on recommendations for individual users, techniques to provide recommendations to groups of users are scarce.\n In this paper, we consider the problem of recommending a list of POIs to a group of users in the areas that the group frequents. Our data consist of activity on Swarm, a social networking app by Foursquare, and our results demonstrate that our proposed Geo-Group-Recommender (GGR), a class of hybrid recommender systems that combine the group geographical preferences using Kernel Density Estimation, category and location features and group check-ins outperform a large number of other recommender systems. Moreover, we find evidence that user preferences differ both in venue category and in location between individual and group activities. We also show that combining individual recommendations using group aggregation strategies is not as good as building a profile for a group. Our experiments show that (GGR) outperforms the baselines in terms of precision and recall at different cutoffs.", "title": "" }, { "docid": "8452091115566adaad8a67154128dff8", "text": "© The Ecological Society of America www.frontiersinecology.org T Millennium Ecosystem Assessment (MA) advanced a powerful vision for the future (MA 2005), and now it is time to deliver. The vision of the MA – and of the prescient ecologists and economists whose work formed its foundation – is a world in which people and institutions appreciate natural systems as vital assets, recognize the central roles these assets play in supporting human well-being, and routinely incorporate their material and intangible values into decision making. This vision is now beginning to take hold, fueled by innovations from around the world – from pioneering local leaders to government bureaucracies, and from traditional cultures to major corporations (eg a new experimental wing of Goldman Sachs; Daily and Ellison 2002; Bhagwat and Rutte 2006; Kareiva and Marvier 2007; Ostrom et al. 2007; Goldman et al. 2008). China, for instance, is investing over 700 billion yuan (about US$102.6 billion) in ecosystem service payments, in the current decade (Liu et al. 2008). The goal of the Natural Capital Project – a partnership between Stanford University, The Nature Conservancy, and World Wildlife Fund (www.naturalcapitalproject.org) – is to help integrate ecosystem services into everyday decision making around the world. This requires turning the valuation of ecosystem services into effective policy and finance mechanisms – a problem that, as yet, no one has solved on a large scale. 
A key challenge remains: relative to other forms of capital, assets embodied in ecosystems are often poorly understood, rarely monitored, and are undergoing rapid degradation (Heal 2000a; MA 2005; Mäler et al. 2008). The importance of ecosystem services is often recognized only after they have been lost, as was the case following Hurricane Katrina (Chambers et al. 2007). Natural capital, and the ecosystem services that flow from it, are usually undervalued – by governments, businesses, and the public – if indeed they are considered at all (Daily et al. 2000; Balmford et al. 2002; NRC 2005). Two fundamental changes need to occur in order to replicate, scale up, and sustain the pioneering efforts that are currently underway, to give ecosystem services weight in decision making. First, the science of ecosystem services needs to advance rapidly. In promising a return (of services) on investments in nature, the scientific community needs to deliver the knowledge and tools necessary to forecast and quantify this return. To help address this challenge, the Natural Capital Project has developed InVEST (a system for Integrated Valuation of Ecosystem Services).", "title": "" }, { "docid": "7e57c7abcd4bcb79d5f0fe8b6cd9a836", "text": "Among the many viruses that are known to infect the human liver, hepatitis B virus (HBV) and hepatitis C virus (HCV) are unique because of their prodigious capacity to cause persistent infection, cirrhosis, and liver cancer. HBV and HCV are noncytopathic viruses and, thus, immunologically mediated events play an important role in the pathogenesis and outcome of these infections. The adaptive immune response mediates virtually all of the liver disease associated with viral hepatitis. However, it is becoming increasingly clear that antigen-nonspecific inflammatory cells exacerbate cytotoxic T lymphocyte (CTL)-induced immunopathology and that platelets enhance the accumulation of CTLs in the liver. Chronic hepatitis is characterized by an inefficient T cell response unable to completely clear HBV or HCV from the liver, which consequently sustains continuous cycles of low-level cell destruction. Over long periods of time, recurrent immune-mediated liver damage contributes to the development of cirrhosis and hepatocellular carcinoma.", "title": "" }, { "docid": "91757d954a8972df713339a970872251", "text": "Computational histopathology involves CAD for microscopic analysis of stained histopathological slides to study presence, localization or grading of disease. An important stage in a CAD system, stain color normalization, has been broadly studied. The existing approaches are mainly defined in the context of stain deconvolution and template matching. In this paper, we propose a novel approach to this problem by introducing a parametric, fully unsupervised generative model. Our model is based on end-to-end machine learning in the framework of generative adversarial networks. It can learn a nonlinear transformation of a set of latent variables, which are forced to have a prior Dirichlet distribution and control the color of staining hematoxylin and eosin (H&E) images. By replacing the latent variables of a source image with those extracted from a template image in the trained model, it can generate a new color copy of the source image while preserving the important tissue structures resembling the chromatic information of the template image.
Our proposed method can instantly be applied to new unseen images, which is different from previous methods that need to compute some statistical properties on input test data. This is potentially problematic when the test sample sizes are limited. Experiments on H&E images from different laboratories show that the proposed model outperforms most state-of-the-art methods.", "title": "" }, { "docid": "da72f2990b3e21c45a92f7b54be1d202", "text": "A low-profile, high-gain, and wideband metasurface (MS)-based filtering antenna with high selectivity is investigated in this communication. The planar MS consists of nonuniform metallic patch cells, and it is fed by two separated microstrip-coupled slots from the bottom. The separation between the two slots together with a shorting via is used to provide good filtering performance in the lower stopband, whereas the MS is elaborately designed to provide a sharp roll-off rate at upper band edge for the filtering function. The MS also simultaneously works as a high-efficient radiator, enhancing the impedance bandwidth and antenna gain of the feeding slots. To verify the design, a prototype operating at 5 GHz has been fabricated and measured. The reflection coefficient, radiation pattern, antenna gain, and efficiency are studied, and reasonable agreement between the measured and simulated results is observed. The prototype with dimensions of 1.3 λ0 × 1.3 λ0 × 0.06 λ0 has a 10-dB impedance bandwidth of 28.4%, an average gain of 8.2 dBi within passband, and an out-of-band suppression level of more than 20 dB within a very wide stop-band.", "title": "" }, { "docid": "2960d6ab540cac17bb37fd4a4645afd0", "text": "This paper proposes a new walking pattern generation method for humanoid robots. The proposed method consists of feedforward control and feedback control for walking pattern generation. The pole placement method as a feedback controller changes the poles of system in order to generate more stable and smoother walking pattern. The advanced pole-zero cancelation by series approximation(PZCSA) as a feedforward controller plays a role of reducing the inherent property of linear inverted pendulum model (LIPM), that is, non-minimum phase property due to an unstable zero of LIPM and tracking efficiently the desired zero moment point (ZMP). The efficiency of the proposed method is verified by three simulations such as arbitrary walking step length, arbitrary walking phase time and sudden change of walking path.", "title": "" }, { "docid": "fd42a330222290652741553f95d361f4", "text": "Neuroanatomy places critical constraints on the functional connectivity of the cerebral cortex. To analyze these constraints we have examined the relationship between structural features of networks (expressed as graphs) and the patterns of functional connectivity to which they give rise when implemented as dynamical systems. We selected among structurally varying graphs using as selective criteria a number of global information-theoretical measures that characterize functional connectivity. We selected graphs separately for increases in measures of entropy (capturing statistical independence of graph elements), integration (capturing their statistical dependence) and complexity (capturing the interplay between their functional segregation and integration). We found that dynamics with high complexity were supported by graphs whose units were organized into densely linked groups that were sparsely and reciprocally interconnected. 
Connection matrices based on actual neuroanatomical data describing areas and pathways of the macaque visual cortex and the cat cortex showed structural characteristics that coincided best with those of such complex graphs, revealing the presence of distinct but interconnected anatomical groupings of areas. Moreover, when implemented as dynamical systems, these cortical connection matrices generated functional connectivity with high complexity, characterized by the presence of highly coherent functional clusters. We also found that selection of graphs as they responded to input or produced output led to increases in the complexity of their dynamics. We hypothesize that adaptation to rich sensory environments and motor demands requires complex dynamics and that these dynamics are supported by neuroanatomical motifs that are characteristic of the cerebral cortex.", "title": "" }, { "docid": "9cea4c6fc74feb8249ce3d6a76d867a7", "text": "Template matching has been widely used in image processing for visual inspection of complicated, patterned surfaces. The currently existing methods such as golden template matching and normalized cross correlation are very sensitive to the displacement even the object under test is carefully aligned with respect to the template. This paper proposes a dissimilarity measure based on the optical-flow technique for surface defect detection, and aims at light-emitting diode (LED) wafer die inspection. The dissimilarity measure of each pixel derived from the optical flow field does not represent the true translation distance, but is reliable enough to indicate the degree of difference between an image pair. It is well tolerated to misalignment and random product variation. The integral image technique is applied to replace the sum operations in optical flow computation, and speeds up the intensive computation. We also point out the pitfall of the Lucas-Kanade optical flow when it is applied for defect detection, and propose a swapping process to tackle the problem. The experiment on LED wafer dies has shown that the proposed method can achieve a 100% recognition rate based on a test set of 357 die images.", "title": "" }, { "docid": "256b56bf5eb3a99de4b889d8e1eb735b", "text": "This paper presents the design of a single layer, compact, tapered balun with a >20:1 bandwidth and less than λ/17 in length at the lowest frequency of operation. The balun operates from 0.7GHz to over 15GHz. It can provide both impedance transformation as well as a balanced feed for tightly coupled arrays. Its performance is compared with that of a full-length balun operating over the same frequency band. There is a high degree of agreement between the two baluns.", "title": "" }, { "docid": "b06deb6b5b8a1729d1b386bed06789c4", "text": "Identifying regions of interest in an image has long been of great importance in a wide range of tasks, including place recognition. In this letter, we propose a novel attention mechanism with flexible context, which can be incorporated into existing feedforward network architecture to learn image representations for long-term place recognition. In particular, in order to focus on regions that contribute positively to place recognition, we introduce a multiscale context-flexible network to estimate the importance of each spatial region in the feature map. Our model is trained end-to-end for place recognition and can detect regions of interest of arbitrary shape. 
Extensive experiments have been conducted to verify the effectiveness of our approach and the results demonstrate that our model can achieve consistently better performance than the state of the art on standard benchmark datasets. Finally, we visualize the learned attention maps to generate insights into what attention the network has learned.", "title": "" }, { "docid": "4d7b93ee9c6036c5915dd1166c9ae2f8", "text": "In this paper, we present a developed NS-3 based emulation platform for evaluating and optimizing the performance of the LTE networks. The developed emulation platform is designed to provide real-time measurements. Thus it eliminates the need for the high cost spent on real equipment. The developed platform consists of three main parts, which are video server, video client(s), and NS-3 based simulation environment for LTE network. Using the developed platform, the server streams video clips to the existing clients going through the LTE simulated network. We utilize this setup to evaluate multiple cases such as mobility and handover. Moreover, we use it for evaluating multiple streaming protocols such as UDP, RTP, and Dynamic Adaptive Streaming over HTTP (DASH). Keywords-DASH, Emulation, LTE, NS-3, Real-time, RTP, UDP.", "title": "" } ]
scidocsrr
044965d98a98b3f69de5218a3629a2de
Can Natural Language Processing Become Natural Language Coaching?
[ { "docid": "8788f14a2615f3065f4f0656a4a66592", "text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.", "title": "" }, { "docid": "273153d0cf32162acb48ed989fa6d713", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" } ]
[ { "docid": "5ce4d44c4796a8fa506acf02074496f8", "text": "Focus and scope The focus of the workshop was applications of logic programming, i.e., application problems, in whole or in part, that are solved by using logic programming languages and systems. A particular theme of interest was to explore the ease of development and maintenance, clarity, performance, and tradeoffs among these features, brought about by programming using a logic paradigm. The goal was to help provide directions for future research advances and application development. Real-world problems increasingly involve complex data and logic, making the use of logic programming more and more beneficial for such complex applications. Despite the diverse areas of application, their common underlying requirements are centered around ease of development and maintenance, clarity, performance, integration with other tools, and tradeoffs among these properties. Better understanding of these important principles will help advance logic programming research and lead to benefits for logic programming applications. The workshop was organized around four main areas of application: Enterprise Software, Control Systems, Intelligent Agents, and Deep Analysis. These general areas included topics such as business intelligence, ontology management, text processing, program analysis, model checking, access control, network programming, resource allocation, system optimization, decision making, and policy administration. The issues proposed for discussion included language features, implementation efficiency, tool support and integration, evaluation methods, as well as teaching and training.", "title": "" }, { "docid": "0af670278702a8680401ceeb421a05f2", "text": "We investigate semisupervised learning (SL) and pool-based active learning (AL) of a classifier for domains with label-scarce (LS) and unknown categories, i.e., defined categories for which there are initially no labeled examples. This scenario manifests, e.g., when a category is rare, or expensive to label. There are several learning issues when there are unknown categories: 1) it is a priori unknown which subset of (possibly many) measured features are needed to discriminate unknown from common classes and 2) label scarcity suggests that overtraining is a concern. Our classifier exploits the inductive bias that an unknown class consists of the subset of the unlabeled pool’s samples that are atypical (relative to the common classes) with respect to certain key (albeit a priori unknown) features and feature interactions. Accordingly, we treat negative log- $p$ -values on raw features as nonnegatively weighted derived feature inputs to our class posterior, with zero weights identifying irrelevant features. Through a hierarchical class posterior, our model accommodates multiple common classes, multiple LS classes, and unknown classes. For learning, we propose a novel semisupervised objective customized for the LS/unknown category scenarios. While several works minimize class decision uncertainty on unlabeled samples, we instead preserve this uncertainty [maximum entropy (maxEnt)] to avoid overtraining. 
Our experiments on a variety of UCI Machine learning (ML) domains show: 1) the use of $p$ -value features coupled with weight constraints leads to sparse solutions and gives significant improvement over the use of raw features and 2) for LS SL and AL, unlabeled samples are helpful, and should be used to preserve decision uncertainty (maxEnt), rather than to minimize it, especially during the early stages of AL. Our AL system, leveraging a novel sample-selection scheme, discovers unknown classes and discriminates LS classes from common ones, with sparing use of oracle labeling.", "title": "" }, { "docid": "c3dd3dd59afe491fcc6b4cd1e32c88a3", "text": "The Semantic Web drives towards the use of the Web for interacting with logically interconnected data. Through knowledge models such as Resource Description Framework (RDF), the Semantic Web provides a unifying representation of richly structured data. Adding logic to the Web implies the use of rules to make inferences, choose courses of action, and answer questions. This logic must be powerful enough to describe complex properties of objects but not so powerful that agents can be tricked by being asked to consider a paradox. The Web has several characteristics that can lead to problems when existing logics are used, in particular, the inconsistencies that inevitably arise due to the openness of the Web, where anyone can assert anything. N3Logic is a logic that allows rules to be expressed in a Web environment. It extends RDF with syntax for nested graphs and quantified variables and with predicates for implication and accessing resources on the Web, and functions including cryptographic, string, math. The main goal of N3Logic is to be a minimal extension to the RDF data model such that the same language can be used for logic and data. In this paper, we describe N3Logic and illustrate through examples why it is an appropriate logic for the Web.", "title": "" }, { "docid": "8d0ccf63b21af19cb750eb571fc59ae6", "text": "This paper presents a motor imagery based Brain Computer Interface (BCI) that uses single channel EEG signal from the C3 or C4 electrode placed in the motor area of the head. Time frequency analysis using Short Time Fourier Transform (STFT) is used to compute spectrogram from the EEG data. The STFT is scaled to have gray level values on which Grey Co-occurrence Matrix (GLCM) is computed. Texture descriptors such as correlation, energy, contrast, homogeneity and dissimilarity are calculated from the GLCM matrices. The texture descriptors are used to train a logistic regression classifier which is then used to classify the left and right motor imagery signals. The single-channel motor imagery classification system is tested offline with different subjects. The average offline accuracy is 87.6%. An online BCI system is implemented in openViBE with the single channel classification scheme. The stimuli presentations and feedback are implemented in Python and integrated with the openViBe BCI system.", "title": "" }, { "docid": "57ccd593f1be27463f9e609d700452dd", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. 
Sustainable supply chain network design: An optimization-oriented review Majid Eskandarpour, Pierre Dejax, Joe Miemczyk, Olivier Péton", "title": "" }, { "docid": "d2f36cc750703f5bbec2ea3ef4542902", "text": "Mixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. 1 We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR characteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. Three universities in Japan—University of Tokyo (Michitaka Hirose), University of Tsukuba (Yuichi Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …", "title": "" }, { "docid": "de6f4705f2d0f829c90e69c0f03a6b6f", "text": "This paper investigates the opportunities and challenges in the use of dynamic radio transmit power control for prolonging the lifetime of body-wearable sensor devices used in continuous health monitoring. We first present extensive empirical evidence that the wireless link quality can change rapidly in body area networks, and a fixed transmit power results in either wasted energy (when the link is good) or low reliability (when the link is bad).
We quantify the potential gains of dynamic power control in body-worn devices by benchmarking off-line the energy savings achievable for a given level of reliability.We then propose a class of schemes feasible for practical implementation that adapt transmit power in real-time based on feedback information from the receiver. We profile their performance against the offline benchmark, and provide guidelines on how the parameters can be tuned to achieve the desired trade-off between energy savings and reliability within the chosen operating environment. Finally, we implement and profile our scheme on a MicaZ mote based platform, and also report preliminary results from the ultra-low-power integrated healthcare monitoring platform we are developing at Toumaz Technology.", "title": "" }, { "docid": "1350f4e274947881f4562ab6596da6fd", "text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.", "title": "" }, { "docid": "a753be5a5f81ae77bfcb997a2748d723", "text": "The design of electromagnetic (EM) interference filters for converter systems is usually based on measurements with a prototype during the final stages of the design process. Predicting the conducted EM noise spectrum of a converter by simulation in an early stage has the potential to save time/cost and to investigate different noise reduction methods, which could, for example, influence the layout or the design of the control integrated circuit. Therefore, the main sources of conducted differential-mode (DM) and common-mode (CM) noise of electronic ballasts for fluorescent lamps are identified in this paper. For each source, the noise spectrum is calculated and a noise propagation model is presented. The influence of the line impedance stabilizing network (LISN) and the test receiver is also included. Based on the presented models, noise spectrums are calculated and validated by measurements.", "title": "" }, { "docid": "30941e0bc8575047d1adc8c20983823b", "text": "The world has changed dramatically for wind farm operators and service providers in the last decade. Organizations whose turbine portfolios was counted in 10-100s ten years ago are now managing large scale operation and service programs for fleet sizes well above one thousand turbines. A big challenge such organizations now face is the question of how the massive amount of operational data that are generated by large fleets are effectively managed and how value is gained from the data. A particular hard challenge is the handling of data streams collected from advanced condition monitoring systems. 
These data are highly complex and typically require expert knowledge to interpret correctly, resulting in poor scalability when moving to large Operation and Maintenance (O&M) platforms.", "title": "" }, { "docid": "5dda89fbe7f5757588b5dff0e6c2565d", "text": "Introductory psychology students (120 females and 120 males) rated attractiveness and fecundity of one of six computer-altered female figures representing three body-weight categories (underweight, normal weight and overweight) and two levels of waist-to-hip ratio (WHR), one in the ideal range (0.72) and one in the non-ideal range (0.86). Both females and males judged underweight figures to be more attractive than normal or overweight figures, regardless of WHR. The female figure with the high WHR (0.86) was judged to be more attractive than the figure with the low WHR (0.72) across all body-weight conditions. Analyses of fecundity ratings revealed an interaction between weight and WHR such that the models did not differ in the normal weight category, but did differ in the underweight (model with WHR of 0.72 was less fecund) and overweight (model with WHR of 0.86 was more fecund) categories. These findings lend stronger support to sociocultural rather than evolutionary hypotheses.", "title": "" }, { "docid": "ba7f157187fec26847c10fa772d71665", "text": "We describe an implementation of the Hopcroft and Tarjan planarity test and embedding algorithm. The program tests the planarity of the input graph and either constructs a combinatorial embedding if the graph is planar or exhibits a Kuratowski subgraph if the graph is non-planar.", "title": "" }, { "docid": "d8c4e6632f90c3dd864be93db881a382", "text": "Document understanding techniques such as document clustering and multidocument summarization have been receiving much attention recently. Current document clustering methods usually represent the given collection of documents as a document-term matrix and then conduct the clustering process. Although many of these clustering methods can group the documents effectively, it is still hard for people to capture the meaning of the documents since there is no satisfactory interpretation for each document cluster. A straightforward solution is to first cluster the documents and then summarize each document cluster using summarization methods. However, most of the current summarization methods are solely based on the sentence-term matrix and ignore the context dependence of the sentences. As a result, the generated summaries lack guidance from the document clusters. In this article, we propose a new language model to simultaneously cluster and summarize documents by making use of both the document-term and sentence-term matrices. By utilizing the mutual influence of document clustering and summarization, our method makes: (1) a better document clustering method with more meaningful interpretation; and (2) an effective document summarization method with guidance from document clustering. Experimental results on various document datasets show the effectiveness of our proposed method and the high interpretability of the generated summaries.", "title": "" }, { "docid": "a0b862a758c659b62da2114143bf7687", "text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation.
It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.", "title": "" }, { "docid": "ce74305a30bd322a78b3827921ae7224", "text": "While computerised tomography (CT) may have been the first imaging tool to study human brain, it has not yet been implemented into clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, with the nature of being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact on the application of the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, three categories of CT images (N = 285) are clustered into three groups, which are AD, lesion (e.g. tumour) and normal ageing. In addition, considering the characteristics of this collection with larger thickness along the direction of depth (z) (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two CNN networks is subsequently coordinated based on the average of Softmax scores obtained from both networks consolidating 2D images along spatial axial directions and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for classes of AD, lesion and normal respectively with an average of 87.6%. Additionally, this improved CNN network appears to outperform the others when in comparison with 2D version only of CNN network as well as a number of state of the art hand-crafted approaches. As a result, these approaches deliver accuracy rates in percentage of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60, 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper constitute a new 3-D approach while applying deep learning technique to extract signature information rooted in both 2D slices and 3D blocks of CT images and an elaborated hand-crated approach of 3D KAZE.", "title": "" }, { "docid": "22572c36ce1b816ee30ef422cb290dea", "text": "Visual context is important in object recognition and it is still an open problem in computer vision. Along with the advent of deep convolutional neural networks (CNN), using contextual information with such systems starts to receive attention in the literature. At the same time, aerial imagery is gaining momentum. While advances in deep learning make good progress in aerial image analysis, this problem still poses many great challenges. Aerial images are often taken under poor lighting conditions and contain low resolution objects, many times occluded by trees or taller buildings. 
In this domain, in particular, visual context could be of great help, but there are still very few papers that consider context in aerial image understanding. Here we introduce context as a complementary way of recognizing objects. We propose a dual-stream deep neural network model that processes information along two independent pathways, one for local and another for global visual reasoning. The two are later combined in the final layers of processing. Our model learns to combine local object appearance as well as information from the larger scene at the same time and in a complementary way, such that together they form a powerful classifier. We test our dual-stream network on the task of segmentation of buildings and roads in aerial images and obtain state-of-the-art results on the Massachusetts Buildings Dataset. We also introduce two new datasets, for buildings and road segmentation, respectively, and study the relative importance of local appearance vs. the larger scene, as well as their performance in combination. While our local-global model could also be useful in general recognition tasks, we clearly demonstrate the effectiveness of visual context in conjunction with deep nets for aerial image", "title": "" }, { "docid": "9b3db8c2632ad79dc8e20435a81ef2a1", "text": "Social networks have changed the way information is delivered to the customers, shifting from traditional one-to-many to one-to-one communication. Opinion mining and sentiment analysis offer the possibility to understand the user-generated comments and explain how a certain product or a brand is perceived. Classification of different types of content is the first step towards understanding the conversation on the social media platforms. Our study analyses the content shared on Facebook in terms of topics, categories and shared sentiment for the domain of a sponsored Facebook brand page. Our results indicate that Product, Sales and Brand are the three most discussed topics, while Requests and Suggestions, Expressing Affect and Sharing are the most common intentions for participation. We discuss the implications of our findings for social media marketing and opinion mining.", "title": "" }, { "docid": "5ccb3ab32054741928b8b93eea7a9ce2", "text": "A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, identification of time constraints and more. The goal of this paper is to address an important issue in workflows modelling and specification, which is data flow, its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflows specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment may prevent the process from correct execution, execute process on inconsistent data or even lead to process suspension. 
A discussion on essential requirements of the workflow data model in order to support data validation is also given.", "title": "" }, { "docid": "2fa356bb47bf482f8585c882ad5d9409", "text": "As an important arithmetic module, the adder plays a key role in determining the speed and power consumption of a digital signal processing (DSP) system. The demands of high speed and power efficiency as well as the fault-tolerant nature of some applications have promoted the development of approximate adders. This paper reviews current approximate adder designs and provides a comparative evaluation in terms of both error and circuit characteristics. Simulation results show that the equal segmentation adder (ESA) is the most hardware-efficient design, but it has the lowest accuracy in terms of error rate (ER) and mean relative error distance (MRED). The error-tolerant adder type II (ETAII), the speculative carry select adder (SCSA) and the accuracy-configurable approximate adder (ACAA) are equally accurate (provided that the same parameters are used); however, ETAII incurs the lowest power-delay-product (PDP) among them. The almost correct adder (ACA) is the most power-consuming scheme, with moderate accuracy. The lower-part-OR adder (LOA) is the slowest, but it is highly efficient in power dissipation.", "title": "" }, { "docid": "0eca851ca495916502788c9931d1c1f3", "text": "Information in various applications is often expressed as character sequences over a finite alphabet (e.g., DNA or protein sequences). In the Big Data era, the lengths and sizes of these sequences are growing explosively, leading to grand challenges for the classical NP-hard problem, namely searching for the Multiple Longest Common Subsequences (MLCS) from multiple sequences. In this paper, we first unveil the fact that the state-of-the-art MLCS algorithms are unable to be applied to long and large-scale sequence alignments. To overcome their defects and tackle longer and large-scale or even big sequence alignments, based on the proposed novel problem-solving model and various strategies, e.g., parallel topological sorting, optimal calculating, reuse of intermediate results, subsection calculation and serialization, etc., we present a novel parallel MLCS algorithm. Exhaustive experiments on the datasets of both synthetic and real-world biological sequences demonstrate that both the time and space of the proposed algorithm are only linear in the number of dominants from aligned sequences, and the proposed algorithm significantly outperforms the state-of-the-art MLCS algorithms, being applicable to longer and large-scale sequence alignments.", "title": "" } ]
scidocsrr
00085f74479e0291c7171f31c1dfec36
Cyclic Prefix-Based Universal Filtered Multicarrier System and Performance Analysis
[ { "docid": "3d85e6ee7867fa453fb0fd33cffcaad8", "text": "Cognitive radio has been an active research area in wireless communications over the past 10 years. TV Digital Switch Over resulted in new regulatory regimes, which offer the first large-scale opportunity for cognitive radio and networks. This article considers the most recent regulatory rules for TV White Space opportunistic usage, and proposes technologies to operate in these bands. It addresses techniques to assess channel vacancy by the cognitive radio, focusing on the two incumbent systems of the TV bands, namely TV stations and wireless microphones. Spectrum-sensing performance is discussed under TV White Space regulation parameters. Then, modulation schemes for the opportunistic radio are discussed, showing the limitations of classical multi-carrier techniques and the advantages of filter bank modulations. In particular, the low adjacent band leakage of filter bank is addressed, and its benefit for spectrum pooling is stressed as a means to offer broadband access through channel aggregation.", "title": "" } ]
[ { "docid": "34401a7e137cffe44f67e6267f29aa57", "text": "Future Point-of-Care (PoC) molecular-level diagnosis requires advanced biosensing systems that can achieve high sensitivity and portability at low power consumption levels, all within a low price-tag for a variety of applications such as in-field medical diagnostics, epidemic disease control, biohazard detection, and forensic analysis. Magnetically labeled biosensors are proposed as a promising candidate to potentially eliminate or augment the optical instruments used by conventional fluorescence-based sensors. However, magnetic biosensors developed thus far require externally generated magnetic biasing fields [1–4] and/or exotic post-fabrication processes [1,2]. This limits the ultimate form-factor of the system, total power consumption, and cost. To address these impediments, we present a low-power scalable frequency-shift magnetic particle biosensor array in bulk CMOS, which provides single-bead detection sensitivity without any (electrical or permanent) external magnets.", "title": "" }, { "docid": "d40a1b72029bdc8e00737ef84fdf5681", "text": "— Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer back-propagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.", "title": "" }, { "docid": "e808606994c3fd8eea1b78e8a3e55b8c", "text": "We describe a Japanese-English patent parallel corpus created from the Japanese and US patent data provided for the NTCIR-6 patent retrieval task. The corpus contains about 2 million sentence pairs that were aligned automatically. This is the largest Japanese-English parallel corpus, which will be available to the public after the 7th NTCIR workshop meeting. 
We estimated that about 97% of the sentence pairs were correct alignments and about 90% of the alignments were adequate translations whose English sentences reflected almost perfectly the contents of the corresponding Japanese sentences.", "title": "" }, { "docid": "d05e4998114dd485a3027f2809277512", "text": "Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. We evaluate our proposed NTN models in two tasks. First, the proposed models are evaluated in a knowledge graph completion task. Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task. The experimental results show that our proposed models learn better and faster than the original (R)NTNs.", "title": "" }, { "docid": "d614eb429aa62e7d568acbba8ac7fe68", "text": "Four women, who previously had undergone multiple unsuccessful in vitro fertilisation (IVF) cycles because of failure of implantation of good quality embryos, were identified as having coexisting uterine adenomyosis. Endometrial biopsies showed that adenomyosis was associated with a prominent aggregation of macrophages within the superficial endometrial glands, potentially interfering with embryo implantation. The inactivation of adenomyosis by an ultra-long pituitary downregulation regime promptly resulted in successful pregnancy for all women in this case series.", "title": "" }, { "docid": "76f2d6cd240d2070bfa7f67b03344075", "text": "Objective and automatic sensor systems to monitor ingestive behavior of individuals arise as a potential solution to replace inaccurate method of self-report. This paper presents a simple sensor system and related signal processing and pattern recognition methodologies to detect periods of food intake based on non-invasive monitoring of chewing. A piezoelectric strain gauge sensor was used to capture movement of the lower jaw from 20 volunteers during periods of quiet sitting, talking and food consumption. These signals were segmented into non-overlapping epochs of fixed length and processed to extract a set of 250 time and frequency domain features for each epoch. A forward feature selection procedure was implemented to choose the most relevant features, identifying from 4 to 11 features most critical for food intake detection. Support vector machine classifiers were trained to create food intake detection models. Twenty-fold cross-validation demonstrated per-epoch classification accuracy of 80.98% and a fine time resolution of 30 s. The simplicity of the chewing strain sensor may result in a less intrusive and simpler way to detect food intake. The proposed methodology could lead to the development of a wearable sensor system to assess eating behaviors of individuals.", "title": "" }, { "docid": "f1b48ea0f93578de8bbe083057211753", "text": "Anecdotes from creative eminences suggest that executive control plays an important role in creativity, but scientific evidence is sparse. Invoking the Dual Pathway to Creativity Model, the authors hypothesize that working memory capacity (WMC) relates to creative performance because it enables persistent, focused, and systematic combining of elements and possibilities (persistence). 
Study 1 indeed showed that under cognitive load, participants performed worse on a creative insight task. Study 2 revealed positive associations between time-on-task and creativity among individuals high but not low in WMC, even after controlling for general intelligence. Study 3 revealed that across trials, semiprofessional cellists performed increasingly more creative improvisations when they had high rather than low WMC. Study 4 showed that WMC predicts original ideation because it allows persistent (rather than flexible) processing. The authors conclude that WMC benefits creativity because it enables the individual to maintain attention focused on the task and prevents undesirable mind wandering.", "title": "" }, { "docid": "44abac09424c717f3a691e4ba2640c1a", "text": "In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short term frame are predicted from the previous frames by means of Long-Short Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as activation signal to detect novel events. There is no evidence of studies focused on comparing previous efforts to automatically recognize novel events from audio signals and giving a broad and in depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this white spot in the literature and provide insight by extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches up to an absolute improvement of 16.4% average F-measure over the three databases.", "title": "" }, { "docid": "ada7b43edc18b321c57a978d7a3859ae", "text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.", "title": "" }, { "docid": "5c92db9bd23e5081a6a15419aa78abca", "text": "The original k-means algorithm is designed to work primarily on numeric data sets. This prohibits the algorithm from being applied to categorical data clustering, which is an integral part of data mining and has attracted much attention recently. The k-modes algorithm extended the k-means paradigm to cluster categorical data by using a frequency-based method to update the cluster modes versus the k-means fashion of minimizing a numerically valued cost. 
However, the dissimilarity measure used in k-modes doesn't consider the relative frequencies of attribute values in each cluster mode; this will result in a weaker intra-cluster similarity by allocating less similar objects to the cluster. In this paper, we present an experimental study on applying a new dissimilarity measure to the k-modes clustering to improve its clustering accuracy. The measure is based on the idea that the similarity between a data object and a cluster mode is directly proportional to the sum of relative frequencies of the common values in the mode. Experimental results on real life datasets show that the modified algorithm is superior to the original k-modes algorithm with respect to clustering accuracy.", "title": "" }, { "docid": "9734cfaecfbd54f968291e9154e2ab3d", "text": "The Modbus protocol and its variants are widely used in industrial control applications, especially for pipeline operations in the oil and gas sector. This paper describes the principal attacks on the Modbus Serial and Modbus TCP protocols and presents the corresponding attack taxonomies. The attacks are summarized according to their threat categories, targets and impact on control system assets. The attack taxonomies facilitate formal risk analysis efforts by clarifying the nature and scope of the security threats on Modbus control systems and networks. Also, they provide insights into potential mitigation strategies and the relative costs and benefits of implementing these strategies. © 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ab0541d9ec1ea0cf7ad85d685267c142", "text": "Umbilical catheters have been used in NICUs for drawing blood samples, measuring blood pressure, and administering fluid and medications for more than 25 years. Complications associated with umbilical catheters include thrombosis; embolism; vasospasm; vessel perforation; hemorrhage; infection; gastrointestinal, renal, and limb tissue damage; hepatic necrosis; hydrothorax; cardiac arrhythmias; pericardial effusion and tamponade; and erosion of the atrium and ventricle. A review of the literature provides conflicting accounts of the superiority of high versus low placement of umbilical arterial catheters. This article reviews the current literature regarding use of umbilical catheters in neonates. It also highlights the policy developed for the authors' NICU, a 34-bed tertiary care unit of a children's hospital, and analyzes complications associated with umbilical catheter use for 1 year in that unit.", "title": "" }, { "docid": "91d3008dcd6c351d6cc0187c59cad8df", "text": "Peer-to-peer markets such as eBay, Uber, and Airbnb allow small suppliers to compete with traditional providers of goods or services. We view the primary function of these markets as making it easy for buyers to find sellers and engage in convenient, trustworthy transactions. We discuss elements of market design that make this possible, including search and matching algorithms, pricing, and reputation systems. We then develop a simple model of how these markets enable entry by small or flexible suppliers, and the resulting impact on existing firms. Finally, we consider the regulation of peer-to-peer markets, and the economic arguments for different approaches to licensing and certification, data and employment regulation. We appreciate support from the National Science Foundation, the Stanford Institute for Economic Policy Research, the Toulouse Network on Information Technology, and the Alfred P. Sloan Foundation.
Einav and Levin: Department of Economics, Stanford University and NBER. Farronato: Harvard Business School. Email: [email protected], [email protected], [email protected].", "title": "" }, { "docid": "7788cf06b7c9f09013bd15607e11cd79", "text": "Separate Cox analyses of all cause-specific hazards are the standard technique of choice to study the effect of a covariate in competing risks, but a synopsis of these results in terms of cumulative event probabilities is challenging. This difficulty has led to the development of the proportional subdistribution hazards model. If the covariate is known at baseline, the model allows for a summarizing assessment in terms of the cumulative incidence function. Mathematically, the model also allows for including random time-dependent covariates, but practical implementation has remained unclear due to a certain risk set peculiarity. We use the intimate relationship of discrete covariates and multistate models to naturally treat time-dependent covariates within the subdistribution hazards framework. The methodology then straightforwardly translates to real-valued time-dependent covariates. As with classical survival analysis, including time-dependent covariates does not result in a model for probability functions anymore. Nevertheless, the proposed methodology provides a useful synthesis of separate cause-specific hazards analyses. We illustrate this with hospital infection data, where time-dependent covariates and competing risks are essential to the subject research question.", "title": "" }, { "docid": "b0950aaea13e1eaf13a17d64feddf9b0", "text": "In this paper, we describe the development of CiteSpace as an integrated environment for identifying and tracking thematic trends in scientific literature. The goal is to simplify the process of finding not only highly cited clusters of scientific articles, but also pivotal points and trails that are likely to characterize fundamental transitions of a knowledge domain as a whole. The trails of an advancing research field are captured through a sequence of snapshots of its intellectual structure over time in the form of Pathfinder networks. These networks are subsequently merged with a localized pruning algorithm. Pivotal points in the merged network are algorithmically identified and visualized using the betweenness centrality metric. An example of finding clinical evidence associated with reducing risks of heart diseases is included to illustrate how CiteSpace could be used. The contribution of the work is its integration of various change detection algorithms and interactive visualization capabilities to simplify users' tasks.", "title": "" }, { "docid": "f87fea9cd76d1545c34f8e813347146e", "text": "In fault detection and isolation, diagnostic test results are commonly used to compute a set of diagnoses, where each diagnosis points at a set of components which might behave abnormally. In distributed systems consisting of multiple control units, the test results in each unit can be used to compute local diagnoses while all test results in the complete system give the global diagnoses. It is an advantage for both repair and fault-tolerant control to have access to the global diagnoses in each unit since these diagnoses represent all test results in all units. However, when the diagnoses, for example, are to be used to repair a unit, only the components that are used by the unit are of interest. The reason for this is that it is only these components that could have caused the abnormal behavior.
However, the global diagnoses might include components from the complete system and therefore often include components that are superfluous for the unit. Motivated by this observation, a new type of diagnosis is proposed, namely, the condensed diagnosis. Each unit has a unique set of condensed diagnoses which represents the global diagnoses. The benefit of the condensed diagnoses is that they only include components used by the unit while still representing the global diagnoses. The proposed method is applied to an automotive vehicle, and the results from the application study show the benefit of using condensed diagnoses compared to global diagnoses.", "title": "" }, { "docid": "16156f3f821fe6d65c8a753995f50b18", "text": "Memory overcommitment enables cloud providers to host more virtual machines on a single physical server, exploiting spare CPU and I/O capacity when physical memory becomes the bottleneck for virtual machine deployment. However, overcommitting memory can also cause noticeable application performance degradation. We present Ginkgo, a policy framework for overcommitting memory in an informed and automated fashion. By directly correlating application-level performance to memory, Ginkgo automates the redistribution of scarce memory across all virtual machines, satisfying performance and capacity constraints. Ginkgo also achieves memory gains for traditionally fixed-size Java applications by coordinating the redistribution of available memory with the activities of the Java Virtual Machine heap. When compared to a non-overcommitted system, Ginkgo runs the Day Trader 2.0 and SPEC Web 2009 benchmarks with the same number of virtual machines while saving up to 73% (50% omitting free space) of a physical server's memory while keeping application performance degradation within 7%.", "title": "" }, { "docid": "10baebc8e9a0071cbe73d66ccaec3a50", "text": "In this paper, the switched-capacitor concept is extended to the voltage-doubler discontinuous conduction mode SEPIC rectifier. As a result, a set of single-phase hybrid SEPIC power factor correction rectifiers able to provide lower voltage stress on the semiconductors and/or higher static gain, which can be easily increased with additional switched-capacitor cells, is proposed. Hence, these rectifiers could be employed in applications that require higher output voltage. In addition, the converters provide a high power factor and a reduced total harmonic distortion in the input current. The topology employs a three-state switch, and three different implementations are described, two being bridgeless versions, which can provide gains in relation to efficiency. The structures and the topological states, a theoretical analysis in steady state, a dynamic model for control, and a design example are reported herein. Furthermore, a prototype with specifications of 1000-W output power, 220-V input voltage, 800-V output voltage, and 50-kHz switching frequency was designed in order to verify the theoretical analysis.", "title": "" }, { "docid": "c75095680818ccc7094e4d53815ef475", "text": "We propose a new learning method, \"Generalized Learning Vector Quantization (GLVQ),\" in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability.
Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.", "title": "" }, { "docid": "bd94b129fdb45adf5d31f2b59cf66867", "text": "Systems based on Brain Computer Interface (BCI) have been developed over the past three decades for assisting locked-in state patients. In 1924 Dr. Hans Berger recorded the first EEG signal. A number of experimental measurements of brain activity have been made using human control commands. The main function of BCI is to convert and transmit human intentions into appropriate motion commands for wheelchairs, robots, devices, and so forth. BCI allows improving the quality of life of disabled patients and letting them interact with their environment. Since the BCI signals are non-stationary, the main challenges in the non-invasive BCI system are to accurately detect and classify the signals. This paper reviews the state of the art of BCI and techniques used for feature extraction and classification using electroencephalogram (EEG) signals and highlights the need for an adaptation concept.", "title": "" } ]
scidocsrr
a21cffa47d0cef6ee67b9ea859eb8b3b
ARF-Predictor: Effective Prediction of Aging-Related Failure Using Entropy
[ { "docid": "cbb03868af15c8b6b661b5550fa3829c", "text": "Since the notion of software aging was introduced thirteen years ago, the interest in this phenomenon has been increasing from both academia and industry. The majority of the research efforts in studying software aging have focused on understanding its effects theoretically and empirically. However, conceptual aspects related to the foundation of this phenomenon have not been covered in the literature. This paper discusses foundational aspects of the software aging phenomenon, introducing new concepts and interconnecting them with the current body of knowledge, in order to compose a base taxonomy for the software aging research. Three real case studies are presented with the purpose of exemplifying many of the concepts discussed.", "title": "" }, { "docid": "1aeeed59a3f10790e2a6d8d8e26ad964", "text": "Concurrency bugs are widespread in multithreaded programs. Fixing them is time-consuming and error-prone. We present CFix, a system that automates the repair of concurrency bugs. CFix works with a wide variety of concurrency-bug detectors. For each failure-inducing interleaving reported by a bug detector, CFix first determines a combination of mutual-exclusion and order relationships that, once enforced, can prevent the buggy interleaving. CFix then uses static analysis and testing to determine where to insert what synchronization operations to force the desired mutual-exclusion and order relationships, with a best effort to avoid deadlocks and excessive performance losses. CFix also simplifies its own patches by merging fixes for related bugs. Evaluation using four different types of bug detectors and thirteen real-world concurrency-bug cases shows that CFix can successfully patch these cases without causing deadlocks or excessive performance degradation. Patches automatically generated by CFix are of similar quality to those manually written by developers.", "title": "" } ]
[ { "docid": "5f684d374cc52a485d2799c8db07d35b", "text": "Online banking is the newest and least understood delivery channel for retail banking services. Yet, few, if any, studies were reported quantifying the issues relevant to this cutting-edge technology. This paper reports the results of a quantitative study of the perceptions of banks’ executive and IT managers and potential customers with regard to the drivers, development challenges, and expectations of online banking. The findings will be useful for both researchers and practitioners who seek to understand the issues relevant to online banking. # 2001 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "c09e5f5592caab9a076d92b4f40df760", "text": "Producing a comprehensive overview of the chemical content of biologically-derived material is a major challenge. Apart from ensuring adequate metabolome coverage and issues of instrument dynamic range, mass resolution and sensitivity, there are major technical difficulties associated with data pre-processing and signal identification when attempting large scale, high-throughput experimentation. To address these factors direct infusion or flow infusion electrospray mass spectrometry has been finding utility as a high throughput metabolite fingerprinting tool. With little sample pre-treatment, no chromatography and instrument cycle times of less than 5 min it is feasible to analyse more than 1,000 samples per week. Data pre-processing is limited to aligning extracted mass spectra and mass-intensity matrices are generally ready in a working day for a month’s worth of data mining and hypothesis generation. ESI-MS fingerprinting has remained rather qualitative by nature and as such ion suppression does not generally compromise data information content as originally suggested when the methodology was first introduced. This review will describe how the quality of data has improved through use of nano-flow infusion and mass-windowing approaches, particularly when using high resolution instruments. The increasingly wider availability of robust high accurate mass instruments actually promotes ESI-MS from a merely fingerprinting tool to the ranks of metabolite profiling and combined with MS/MS capabilities of hybrid instruments improved structural information is available concurrently. We summarise current applications in a wide range of fields where ESI-MS fingerprinting has proved to be an excellent tool for “first pass” metabolome analysis of complex biological samples. The final part of the review describes a typical workflow with reference to recently published data to emphasise key aspects of overall experimental design.", "title": "" }, { "docid": "83fba4d122d9c13c4492dfce9c8d8e89", "text": "We propose two metrics to demonstrate the impact integrating human-computer interaction (HCI) activities in software engineering (SE) processes. User experience metric (UXM) is a product metric that measures the subjective and ephemeral notion of the user’s experience with a product. Index of integration (IoI) is a process metric that measures how integrated the HCI activities were with the SE process. Both metrics have an organizational perspective and can be applied to a wide range of products and projects. Attempt was made to keep the metrics light-weight. While the main motivation behind proposing the two metrics was to establish a correlation between them and thereby demonstrate the effectiveness of the process, several other applications are emerging. 
The two metrics were evaluated with three industry projects and reviewed by four faculty members from a university and modified based on the feedback.", "title": "" }, { "docid": "073f129a34957b19c6d9af96c869b9ab", "text": "The stability of dc microgrids (MGs) depends on the control strategy adopted for each mode of operation. In an islanded operation mode, droop control is the basic method for bus voltage stabilization when there is no communication among the sources. In this paper, the consequences of droop implementation on the voltage stability of dc power systems whose loads are active and nonlinear (e.g., constant power loads) are shown. The set of parallel sources and their corresponding transmission lines are modeled by an ideal voltage source in series with an equivalent resistance and inductance. This approximate model allows performing a nonlinear stability analysis to predict the system's qualitative behavior due to the reduced number of differential equations. Additionally, nonlinear analysis provides analytical stability conditions as a function of the model parameters and it leads to a design guideline to build reliable MGs based on safe operating regions.", "title": "" }, { "docid": "8c0e5e48c8827a943f4586b8e75f4f9d", "text": "Predicting the results of football matches poses an interesting challenge due to the fact that the sport is so popular and widespread. However, predicting the outcomes is also a difficult problem because of the number of factors which must be taken into account that cannot be quantitatively valued or modeled. As part of this work, a software solution has been developed in order to try and solve this problem. During the development of the system, a number of tests have been carried out in order to determine the optimal combination of features and classifiers. The results of the presented system show a satisfactory capability of prediction which is superior to the one of the reference method (most likely a priori outcome).", "title": "" }, { "docid": "315e6c863c13dd6fa68620d2ffb66e17", "text": "In this paper, an algorithm for approximating the path of a moving autonomous mobile sensor with an unknown position using Received Signal Strength (RSS) measurements is proposed. Using a Least Squares (LS) estimation method as an input, a Maximum-Likelihood (ML) approach is used to determine the location of the unknown mobile sensor. For the mobile sensor case, as the sensor changes position, the characteristics of the RSS measurements also change; therefore the proposed method adapts the RSS measurement model by dynamically changing the path loss value alpha to aid in position estimation. Secondly, a Recursive Least-Squares (RLS) algorithm is used to estimate the path of a moving mobile sensor using the Maximum-Likelihood position estimation as an input. The performance of the proposed algorithm is evaluated via simulation and it is shown that this method can accurately determine the position of the mobile sensor, and can efficiently track the position of the mobile sensor during motion.", "title": "" }, { "docid": "1e21662f93476663e01f721642c16336", "text": "Inspired by the biological concept of central pattern generators (CPGs), this paper deals with adaptive walking control of biped robots. Using CPGs, a trajectory generator is designed consisting of a center-of-gravity (CoG) trajectory generator and a workspace trajectory modulation process.
Entraining with feedback information, the CoG generator can generate adaptive CoG trajectories online and workspace trajectories can be modulated in real time based on the generated adaptive CoG trajectories. A motion engine maps trajectories from workspace to joint space. The proposed control strategy is able to generate adaptive joint control signals online to realize biped adaptive walking. The experimental results using a biped platform NAO confirm the effectiveness of the proposed control strategy.", "title": "" }, { "docid": "9a5f5df096ad76798791e7bebd6f8c93", "text": "Organisational Communication, in today’s organizations has not only become far more complex and varied but has become an important factor for overall organizational functioning and success. The way the organization communicates with its employees is reflected in morale, motivation and performance of the employees. The objective of the present paper is to explore the interrelationship between communication and motivation and its overall impact on employee performance. The paper focuses on the fact that communication in the workplace can take many forms and has a lasting effect on employee motivation. If employees feel that communication from management is effective, it can lead to feelings of job satisfaction, commitment to the organisation and increased trust in the workplace. This study was conducted through a comprehensive review and critical analysis of the research and literature focused upon the objectives of the paper. It also enumerates the results of a study of organizational communication and motivational practices followed at a large manufacturing company, Vanaz Engineers Ltd., based at Pune, to support the hypothesis propounded in the paper.", "title": "" }, { "docid": "237a88ea092d56c6511bb84604e6a7c7", "text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.", "title": "" }, { "docid": "e6a92df6b717a55f86425b0164e9aa3a", "text": "The COmpound Semiconductor Materials On Silicon (COSMOS) program of the U.S. Defense Advanced Research Projects Agency (DARPA) focuses on developing transistor-scale heterogeneous integration processes to intimately combine advanced compound semiconductor (CS) devices with high-density silicon circuits. The technical approaches being explored in this program include high-density micro assembly, monolithic epitaxial growth, and epitaxial layer printing processes. In Phase I of the program, performers successfully demonstrated world-record differential amplifiers through heterogeneous integration of InP HBTs with commercially fabricated CMOS circuits. In the current Phase II, complex wideband, large dynamic range, high-speed digital-to-analog convertors (DACs) are under development based on the above heterogeneous integration approaches. 
These DAC designs will utilize InP HBTs in the critical high-speed, high-voltage swing circuit blocks and will employ sophisticated in situ digital correction techniques enabled by CMOS transistors. This paper will also discuss the Phase III program plan as well as future directions for heterogeneous integration technology that will benefit mixed signal circuit applications.", "title": "" }, { "docid": "6da5d72c237948b03cc6a818884ff937", "text": "This paper develops a model of conversion behavior (i.e., converting store visits into purchases) that predicts each customer’s probability of purchasing based on an observed history of visits and purchases. We offer an individual-level probability model that allows for consumer heterogeneity in a very flexible manner. We allow visits to play very different roles in the purchasing process. For example, some visits are motivated by a planned purchase while others are simply browsing visits. The Conversion Model in this paper has the flexibility to accommodate a number of visit-to-purchase relationships. Finally, consumers’ shopping behavior may evolve over time as a function of past experiences. Thus, the Conversion Model also allows for non-stationarity in behavior. Specifically, our Conversion Model decomposes an individual’s purchasing conversion behavior into a visit effect and a purchasing threshold effect. Each component is allowed to vary across households as well as over time. We then apply this model to the problem of “managing” visitor traffic. By predicting purchasing probabilities for a given visit, the Conversion Model can identify those visits that are likely to result in a purchase. These visits should be re-directed to a server that will provide a better shopping experience while those visitors that are less likely to result in a purchase may be identified as targets for a promotion.", "title": "" }, { "docid": "455b2a46ef0a6a032686eaaedf9cacf3", "text": "Recently, taxonomy has attracted much attention. Both automatic construction solutions and human-based computation approaches have been proposed. The automatic methods suffer from the problem of either low precision or low recall and human computation, on the other hand, is not suitable for large scale tasks. Motivated by the shortcomings of both approaches, we present a hybrid framework, which combines the power of machine-based approaches and human computation (the crowd) to construct a more complete and accurate taxonomy. Specifically, our framework consists of two steps: we first construct a complete but noisy taxonomy automatically, then crowd is introduced to adjust the entity positions in the constructed taxonomy. However, the adjustment is challenging as the budget (money) for asking the crowd is often limited. In our work, we formulate the problem of finding the optimal adjustment as an entity selection optimization (ESO) problem, which is proved to be NP-hard. We then propose an exact algorithm and a more efficient approximation algorithm with an approximation ratio of 1/2(1-1/e). 
We conduct extensive experiments on real datasets, the results show that our hybrid approach largely improves the recall of the taxonomy with little impairment for precision.", "title": "" }, { "docid": "827396df94e0bca08cee7e4d673044ef", "text": "Localization in Wireless Sensor Networks (WSNs) is regarded as an emerging technology for numerous cyberphysical system applications, which equips wireless sensors with the capability to report data that is geographically meaningful for location based services and applications. However, due to the increasingly pervasive existence of smart sensors in WSN, a single localization technique that affects the overall performance is not sufficient for all applications. Thus, there have been many significant advances on localization techniques in WSNs in the past few years. The main goal in this paper is to present the state-of-the-art research results and approaches proposed for localization in WSNs. Specifically, we present the recent advances on localization techniques in WSNs by considering a wide variety of factors and categorizing them in terms of data processing (centralized vs. distributed), transmission range (range free vs. range based), mobility (static vs. mobile), operating environments (indoor vs. outdoor), node density (sparse vs dense), routing, algorithms, etc. The recent localization techniques in WSNs are also summarized in the form of tables. With this paper, readers can have a more thorough understanding of localization in sensor networks, as well as research trends and future research directions in this area.", "title": "" }, { "docid": "9756d72cfbb35d9a532f922e3eaccc8c", "text": "Conceived in the early 1990s, Experience Replay (ER) has been shown to be a successful mechanism to allow online learning algorithms to reuse past experiences. Traditionally, ER can be applied to all machine learning paradigms (i.e., unsupervised, supervised, and reinforcement learning). Recently, ER has contributed to improving the performance of deep reinforcement learning. Yet, its application to many practical settings is still limited by the memory requirements of ER, necessary to explicitly store previous observations. To remedy this issue, we explore a novel approach, Online Contrastive Divergence with Generative Replay (OCDGR), which uses the generative capability of Restricted Boltzmann Machines (RBMs) instead of recorded past experiences. The RBM is trained online, and does not require the system to store any of the observed data points. We compare OCDGR to ER on 9 real-world datasets, considering a worst-case scenario (data points arriving in sorted order) as well as a more realistic one (sequential random-order data points). Our results show that in 64.28% of the cases OCDGR outperforms ER and in the remaining 35.72% it has an almost equal performance, while having a considerably reduced space complexity (i.e., memory usage) at a comparable time complexity.", "title": "" }, { "docid": "acc700d965586f5ea65bdcb67af38fca", "text": "OBJECTIVE\nAttention deficit hyperactivity disorder (ADHD) symptoms are associated with the deficit in executive functions. Playing Go involves many aspect of cognitive function and we hypothesized that it would be effective for children with ADHD.\n\n\nMETHODS\nSeventeen drug naïve children with ADHD and seventeen age and sex matched comparison subjects were participated. Participants played Go under the instructor's education for 2 hours/day, 5 days/week. 
Before and at the end of Go period, clinical symptoms, cognitive functions, and brain EEG were assessed with Dupaul's ADHD scale (ARS), Child depression inventory (CDI), digit span, the Children's Color Trails Test (CCTT), and 8-channel QEEG system (LXE3208, Laxtha Inc., Daejeon, Korea).\n\n\nRESULTS\nThere were significant improvements of ARS total score (z=2.93, p<0.01) and inattentive score (z=2.94, p<0.01) in children with ADHD. However, there was no significant change in hyperactivity score (z=1.33, p=0.18). There were improvement of digit total score (z=2.60, p<0.01; z=2.06, p=0.03), digit forward score (z=2.21, p=0.02; z=2.02, p=0.04) in both ADHD and healthy comparisons. In addition, ADHD children showed decreased time of CCTT-2 (z=2.21, p=0.03). The change of theta/beta right of prefrontal cortex during 16 weeks was greater in children with ADHD than in healthy comparisons (F=4.45, p=0.04). The change of right theta/beta in prefrontal cortex has a positive correlation with ARS-inattention score in children with ADHD (r=0.44, p=0.03).\n\n\nCONCLUSION\nWe suggest that playing Go would be effective for children with ADHD by activating hypoarousal prefrontal function and enhancing executive function.", "title": "" }, { "docid": "91eaef6e482601533656ca4786b7a023", "text": "Budget optimization is one of the primary decision-making issues faced by advertisers in search auctions. A quality budget optimization strategy can significantly improve the effectiveness of search advertising campaigns, thus helping advertisers to succeed in the fierce competition of online marketing. This paper investigates budget optimization problems in search advertisements and proposes a novel hierarchical budget optimization framework (BOF), with consideration of the entire life cycle of advertising campaigns. Then, we formulated our BOF framework, made some mathematical analysis on some desirable properties, and presented an effective solution algorithm. Moreover, we established a simple but illustrative instantiation of our BOF framework which can help advertisers to allocate and adjust the budget of search advertising campaigns. Our BOF framework provides an open testbed environment for various strategies of budget allocation and adjustment across search advertising markets. With field reports and logs from real-world search advertising campaigns, we designed some experiments to evaluate the effectiveness of our BOF framework and instantiated strategies. Experimental results are quite promising, where our BOF framework and instantiated strategies perform better than two baseline budget strategies commonly used in practical advertising campaigns.", "title": "" }, { "docid": "e144d8c0f046ad6cd2e5c71844b2b532", "text": "Photogrammetry is the traditional method of surface reconstruction such as the generation of DTMs. Recently, LIDAR emerged as a new technology for rapidly capturing data on physical surfaces. The high accuracy and automation potential results in a quick delivery of DEMs/DTMs derived from the raw laser data. The two methods deliver complementary surface information. Thus it makes sense to combine data from the two sensors to arrive at a more robust and complete surface reconstruction. This paper describes two aspects of merging aerial imagery and LIDAR data. The establishment of a common reference frame is an absolute prerequisite. We solve this alignment problem by utilizing sensor-invariant features. 
Such features correspond to the same object space phenomena, for example to breaklines and surface patches. Matched sensor invariant features lend themselves to establishing a common reference frame. Feature-level fusion is performed with sensor specific features that are related to surface characteristics. We show the synergism between these features resulting in a richer and more abstract surface description.", "title": "" }, { "docid": "58bc5fb67cfb5e4b623b724cb4283a17", "text": "In recent years, power systems have been very difficult to manage as load demands increase and environmental constraints restrict the distribution network. Another mode used for distribution of electrical power is the use of underground cables (generally in urban areas only) instead of an overhead distribution network. The use of underground cables raises the problem of identifying the fault location, as the cable is not open to view as in the case of an overhead network. To improve the reliability of a distribution system, accurate identification of a faulted segment is required in order to reduce the interruption time during a fault. Speedy and precise fault location plays an important role in accelerating system restoration, reducing outage time, reducing great financial loss and significantly improving system reliability. The objective of this paper is to study the methods of determining the distance of an underground cable fault from the base station in kilometers. Underground cable systems are a common practice in major urban areas. When a fault occurs, repairing that particular cable is difficult because the exact location of the fault in the cable is unknown. In this paper, a technique for detecting faults in an underground distribution system is presented. The proposed system is used to find the exact location of the fault and to send an SMS with details to a remote mobile phone using a GSM module.", "title": "" }, { "docid": "cd6e01015c90b61cff1e5492a666a0e2", "text": "The ubiquitin proteasome pathway (UPP) is essential for removing abnormal proteins and preventing accumulation of potentially toxic proteins within the neuron. UPP dysfunction occurs with normal aging and is associated with abnormal accumulation of protein aggregates within neurons in neurodegenerative diseases. Ischemia disrupts UPP function and thus may contribute to UPP dysfunction seen in the aging brain and in neurodegenerative diseases. Ubiquitin carboxy-terminal hydrolase L1 (UCHL1), an important component of the UPP in the neuron, is covalently modified and its activity inhibited by reactive lipids produced after ischemia. As a result, degradation of toxic proteins is impaired which may exacerbate neuronal function and cell death in stroke and neurodegenerative diseases. Preserving or restoring UCHL1 activity may be an effective therapeutic strategy in stroke and neurodegenerative diseases.", "title": "" }, { "docid": "fb8fbcb1d2121f64e80e0e0236d7c29d", "text": "This paper explores an incremental training strategy for the skip-gram model with negative sampling (SGNS) from both empirical and theoretical perspectives. Existing methods of neural word embeddings, including SGNS, are multi-pass algorithms and thus cannot perform incremental model update. To address this problem, we present a simple incremental extension of SGNS and provide a thorough theoretical analysis to demonstrate its validity.
Empirical experiments demonstrated the correctness of the theoretical analysis as well as the practical usefulness of the incremental algorithm.", "title": "" } ]
scidocsrr
9da8b3061320759d95fe2419f31e617a
A survey on named data networking
[ { "docid": "a5abd5f11b83afdccbdfc190b8351b07", "text": "Named Data Networking (NDN) is a recently proposed general- purpose network architecture that leverages the strengths of Internet architecture while aiming to address its weaknesses. NDN names packets rather than end-hosts, and most of NDN's characteristics are a consequence of this fact. In this paper, we focus on the packet forwarding model of NDN. Each packet has a unique name which is used to make forwarding decisions in the network. NDN forwarding differs substantially from that in IP; namely, NDN forwards based on variable-length names and has a read-write data plane. Designing and evaluating a scalable NDN forwarding node architecture is a major effort within the overall NDN research agenda. In this paper, we present the concepts, issues and principles of scalable NDN forwarding plane design. The essential function of NDN forwarding plane is fast name lookup. By studying the performance of the NDN reference implementation, known as CCNx, and simplifying its forwarding structure, we identify three key issues in the design of a scalable NDN forwarding plane: 1) exact string matching with fast updates, 2) longest prefix matching for variable-length and unbounded names and 3) large- scale flow maintenance. We also present five forwarding plane design principles for achieving 1 Gbps throughput in software implementation and 10 Gbps with hardware acceleration.", "title": "" } ]
[ { "docid": "7eb9e3aac9d25e3ae0628ffe0beea533", "text": "Many believe that an essential component for the discovery of the tremendous diversity in natural organisms was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspring. One hypothesized mechanism for evolvability is developmental canalization, wherein certain dimensions of variation become more likely to be traversed and others are prevented from being explored (e.g., offspring tend to have similar-size legs, and mutations affect the length of both legs, not each leg individually). While ubiquitous in nature, canalization is rarely reported in computational simulations of evolution, which deprives us of in silico examples of canalization to study and raises the question of which conditions give rise to this form of evolvability. Answering this question would shed light on why such evolvability emerged naturally, and it could accelerate engineering efforts to harness evolution to solve important engineering challenges. In this article, we reveal a unique system in which canalization did emerge in computational evolution. We document that genomes entrench certain dimensions of variation that were frequently explored during their evolutionary history. The genetic representation of these organisms also evolved to be more modular and hierarchical than expected by chance, and we show that these organizational properties correlate with increased fitness. Interestingly, the type of computational evolutionary experiment that produced this evolvability was very different from traditional digital evolution in that there was no objective, suggesting that open-ended, divergent evolutionary processes may be necessary for the evolution of evolvability.", "title": "" }, { "docid": "ea94a3c561476e88d5ac2640656a3f92", "text": "Point cloud is a basic description of discrete shape information. Parameterization of unorganized points is important for shape analysis and shape reconstruction of natural objects. In this paper we present a new algorithm for global parameterization of an unorganized point cloud and its application to the meshing of the cloud. Our method is guided by principal directions so as to preserve the intrinsic geometric properties. After initial estimation of principal directions, we develop a kNN(k-nearest neighbor) graph-based method to get a smooth direction field. Then the point cloud is cut to be topologically equivalent to a disk. The global parameterization is computed and its gradients align well with the guided direction field. A mixed integer solver is used to guarantee a seamless parameterization across the cut lines. The resultant parameterization can be used to triangulate and quadrangulate the point cloud simultaneously in a fully automatic manner, where the shape of the data is of any genus. & 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "05a5e98ad70d9206f2ef1444500050fe", "text": "The integration of business processes across organizations is typically beneficial for all involved parties. However, the lack of trust is often a roadblock. Blockchain is an emerging technology for decentralized and transactional data sharing across a network of untrusted participants. It can be used to find agreement about the shared state of collaborating parties without trusting a central authority or any particular participant. 
Some blockchain networks also provide a computational infrastructure to run autonomous programs called smart contracts. In this paper, we address the fundamental problem of trust in collaborative process execution using blockchain. We develop a technique to integrate blockchain into the choreography of processes in such a way that no central authority is needed, but trust maintained. Our solution comprises the combination of an intricate set of components, which allow monitoring or coordination of business processes. We implemented our solution and demonstrate its feasibility by applying it to three use case processes. Our evaluation includes the creation of more than 500 smart contracts and the execution over 8,000 blockchain transactions.", "title": "" }, { "docid": "d90467d05b4df62adc94b7c150013968", "text": "Bacterial flagella and type III secretion system (T3SS) are evolutionarily related molecular transport machineries. Flagella mediate bacterial motility; the T3SS delivers virulence effectors to block host defenses. The inflammasome is a cytosolic multi-protein complex that activates caspase-1. Active caspase-1 triggers interleukin-1β (IL-1β)/IL-18 maturation and macrophage pyroptotic death to mount an inflammatory response. Central to the inflammasome is a pattern recognition receptor that activates caspase-1 either directly or through an adapter protein. Studies in the past 10 years have established a NAIP-NLRC4 inflammasome, in which NAIPs are cytosolic receptors for bacterial flagellin and T3SS rod/needle proteins, while NLRC4 acts as an adapter for caspase-1 activation. Given the wide presence of flagella and the T3SS in bacteria, the NAIP-NLRC4 inflammasome plays a critical role in anti-bacteria defenses. Here, we review the discovery of the NAIP-NLRC4 inflammasome and further discuss recent advances related to its biochemical mechanism and biological function as well as its connection to human autoinflammatory disease.", "title": "" }, { "docid": "0de95645a74d401ad0d0d608faaa0d1d", "text": "This contribution describes the research activity on the development of different smart pixel topologies aimed at three-dimensional (3D) vision applications exploiting the multiple-pulse indirect time-of-flight (TOF) and standard direct TOF techniques. The proposed approaches allow for the realization of scannerless laser ranging systems capable of fast collection of 3D data sets, as required in a growing number of applications like, automotive, security, surveillance and robotic guidance. Single channel approach, as well as matrix-organized sensors, will be described, facing the demanding constraints of specific applications, like the high dynamic range capability and the background immunity. Real time range (3D) and intensity (2D) imaging of non-cooperative targets, also in presence of strong background illumination, has been successfully performed in the 2m-9m range with a precision better than 5% and an accuracy of about 1%.", "title": "" }, { "docid": "597d42e66f8bb9731cd6203b82213222", "text": "Text classification is the process of classifying documents into predefined categories based on their content. Text classification is the primary requirement of text retrieval systems, which retrieve texts in response to a user query, and text understanding systems, which transform text in some way such as producing summaries, answering questions or extracting data. We have proposed a Text Classification system for classifying abstract of different research papers. 
In this System we have extracted keywords using Porter Stemmer and Tokenizer. The word set is formed from the derived keywords using Association Rule and Apriori algorithm. The Probability of the word set is calculated using naive bayes classifier and then the new abstract inserted by the user is classified as belonging to one of the various classes. The accuracy of the system is found satisfactory. It requires less training data as compared to other classification system.", "title": "" }, { "docid": "f06ec75f4835b6eabe50826f075e1fa1", "text": "In this paper, we propose a robust methodology to assess the value of microblogging data to forecast stock market variables: returns, volatility and trading volume of diverse indices and portfolios. The methodology uses sentiment and attention indicators extracted from microblogs (a large Twitter dataset is adopted) and survey indices (AAII and II, USMC and Sentix), diverse forms to daily aggregate these indicators, usage of a Kalman Filter to merge microblog and survey sources, a realistic rolling windows evaluation, several Machine Learning methods and the Diebold-Mariano test to validate if the sentiment and attention based predictions are valuable when compared with an autoregressive baseline. We found that Twitter sentiment and posting volume were relevant for the forecasting of returns of S&P 500 index, portfolios of lower market capitalization and some industries. Additionally, KF sentiment was informative for the forecasting of returns. Moreover, Twitter and KF sentiment indicators were useful for the prediction of some survey sentiment indicators. These results confirm the usefulness of microblogging data for financial expert systems, allowing to predict stock market behavior and providing a valuable alternative for existing survey measures with advantages (e.g., fast and cheap creation, daily frequency). © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "63f20dd528d54066ed0f189e4c435fe7", "text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].", "title": "" }, { "docid": "58b121012d9772285af95520fab7eaa0", "text": "We argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations. We elaborate on the challenges that emerge when designing 5G networks based on network slicing. We focus on the architectural aspects associated with the coexistence of dedicated as well as shared slices in the network. In particular, we analyze the realization options of a flexible radio access network with focus on network slicing and their impact on the design of 5G mobile networks. 
In addition to the technical study, this article provides an investigation of the revenue potential of network slicing, where the applications that originate from this concept and the profit capabilities from the network operator's perspective are put forward.", "title": "" }, { "docid": "fd64292513423ee695a9cb0f0987a87b", "text": "Most observer-based methods applied in fault detection and diagnosis (FDD) schemes use the classical two degrees of freedom observer structure in which a constant matrix is used to stabilize the error dynamics while a post filter helps to achieve some desired properties for the residual signal. In this paper, we consider the use of a more general framework which is the dynamic observer structure in which an observer gain is seen as a filter designed so that the error dynamics has some desirable frequency domain characteristics. This structure offers extra degrees of freedom and we show how it can be used for the sensor faults diagnosis problem achieving detection and estimation at the same time. The use of weightings to transform this problem into a standard H∞ problem is also demonstrated.", "title": "" }, { "docid": "ca1b189815ce5eb56c2b44e2c0c154aa", "text": "Synthetic data sets can be useful in a variety of situations, including repeatable regression testing and providing realistic - but not real - data to third parties for testing new software. Researchers, engineers, and software developers can test against a safe data set without affecting or even accessing the original data, insulating them from privacy and security concerns as well as letting them generate larger data sets than would be available using only real data. Practitioners use data mining technology to discover patterns in real data sets that aren't apparent at the outset. This article explores how to combine information derived from data mining applications with the descriptive ability of synthetic data generation software. Our goal is to demonstrate that at least some data mining techniques (in particular, a decision tree) can discover patterns that we can then use to inverse map into synthetic data sets. These synthetic data sets can be of any size and will faithfully exhibit the same (decision tree) patterns. Our work builds on two technologies: synthetic data definition language and predictive model markup language.", "title": "" }, { "docid": "ad8a727d0e3bd11cd972373451b90fe7", "text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.", "title": "" }, { "docid": "d88c13d1c332f943464733cfd5acef67", "text": "Social media and social networks are embedded in our society to a point that could not have been imagined only ten years ago. Facebook, LinkedIn, and Twitter are already well known social networks that have a large audience in all age groups.
The amount of data that those social sites gather from their users is continually increasing and this data is very valuable for marketing, research, and various other purposes. At the same time, this data usually contain a significant amount of sensitive information which should be protected against unauthorized disclosure. To protect the privacy of individuals, this data must be anonymized such that the risk of re-identification of specific individuals is very low. In this paper we study if anonymized social networks preserve existing communities from the original social networks. To perform this study, we introduce two approaches to measure the community preservation between the initial network and its anonymized version. In the first approach we simply count how many nodes from the original communities remained in the same community after the processes of anonymization and de-anonymization. In the second approach we consider the community preservation for each node individually. Specifically, for each node, we compare the original and final communities to which the node belongs. To anonymize social networks we use two models, namely, k-anonymity for social networks and k-degree anonymity. To determine communities in social networks we use an existing community detection algorithm based on modularity quality function. Our experiments on publically available datasets show that anonymized social networks satisfactorily preserve the community structure of their original networks.", "title": "" }, { "docid": "b7d1428434a7274b55a00bce2cc0cf4f", "text": "This paper studies wideband hybrid precoder for downlink space-division multiple-access and orthogonal frequency-division multiple-access (SDMA-OFDMA) massive multi-input multi-output (MIMO) systems. We first derive an iterative algorithm to alternatingly optimize the phase-shifter based wideband analog precoder and low-dimensional digital precoders, then an efficient low-complexity non-iterative hybrid precoder proposes. Simulation results show that in wideband systems the performance of hybrid precoder is affected by the employed frequency-domain scheduling method and the number of available radio frequency (RF) chains, which can perform as well as narrowband hybrid precoder when greedy scheduling is employed and the number of RF chains is large.", "title": "" }, { "docid": "e284ee49cdb78d3a9eec6daab37dd7e4", "text": "This paper presents the design, simulation, and implementation of band pass filters in rectangular waveguides with radius, having 0.1 dB pass band ripple and 6.3% ripple at the center frequency of 14.2 GHz. A Mician microwave wizard software based on the Mode Matching Method (MMM) was used to simulate the structure of the filter. Simulation results are in good agreement with the measured one which improve the validity of the waveguide band pass filter design method.", "title": "" }, { "docid": "7e047b7c0a0ded44106ce6b50726d092", "text": "Skeleton-based action recognition task is entangled with complex spatio-temporal variations of skeleton joints, and remains challenging for Recurrent Neural Networks (RNNs). In this work, we propose a temporal-then-spatial recalibration scheme to alleviate such complex variations, resulting in an end-to-end Memory Attention Networks (MANs) which consist of a Temporal Attention Recalibration Module (TARM) and a Spatio-Temporal Convolution Module (STCM).
Specifically, the TARM is deployed in a residual learning module that employs a novel attention learning network to recalibrate the temporal attention of frames in a skeleton sequence. The STCM treats the attention calibrated skeleton joint sequences as images and leverages the Convolution Neural Networks (CNNs) to further model the spatial and temporal information of skeleton data. These two modules (TARM and STCM) seamlessly form a single network architecture that can be trained in an end-to-end fashion. MANs significantly boost the performance of skeleton-based action recognition and achieve the best results on four challenging benchmark datasets: NTU RGB+D, HDM05, SYSU-3D and UT-Kinect.1", "title": "" }, { "docid": "b638e384285bbb03bdc71f2eb2b27ff8", "text": "In this paper, we present two win predictors for the popular online game Dota 2. The first predictor uses full post-match data and the second predictor uses only hero selection data. We will explore and build upon existing work on the topic as well as detail the specifics of both algorithms including data collection, exploratory analysis, feature selection, modeling, and results.", "title": "" }, { "docid": "2b6087cab37980b1363b343eb0f81822", "text": "We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.", "title": "" }, { "docid": "82e7bdd78261e7339472c7278bff97ca", "text": "A novel antenna with both horizontal and vertical polarizations is proposed for 1.7-2.1 GHz LTE band small cell base stations. Horizontal polarization is achieved by using the Vivaldi antennas at the main PCB board in azimuth plane, whereas the vertical polarization is obtained using the rectangular monopole with curved corners in proximity of the horizontal elements. A prototype antenna associated with 8-elements (four horizontal and four vertical) is fabricated on the FR4 substrate with the thickness of 0.2 cm and 0.12 cm for Vivaldi and monopole antennas, respectively. Experimental results have validated the design procedure of the antenna with a volume of 14 × 14 × 4.5 cm3 and indicated the realization of the requirements for the small cell base station applications.", "title": "" } ]
scidocsrr
051c5f835af764e782465b3db6c8a188
Determinants of accepting wireless mobile data services in China
[ { "docid": "717bb81a5000035b1199eeb3b2308518", "text": "Technology acceptance research has tended to focus on instrumental beliefs such as perceived usefulness and perceived ease of use as drivers of usage intentions, with technology characteristics as major external stimuli. Behavioral sciences and individual psychology, however, suggest that social influences and personal traits such as individual innovativeness are potentially important determinants of adoption as well, and may be a more important element in potential adopters’ decisions. This paper models and tests these relationships in non-work settings among several latent constructs such as intention to adopt wireless mobile technology, social influences, and personal innovativeness. Structural equation analysis reveals strong causal relationships between the social influences, personal innovativeness and the perceptual beliefs—usefulness and ease of use, which in turn impact adoption intentions. The paper concludes with some important implications for both theory research and implementation strategies. q 2005 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "7317713e6725f6541e4197cb02525cd4", "text": "This survey describes the current state-of-the-art in the development of automated visual surveillance systems so as to provide researchers in the field with a summary of progress achieved to date and to identify areas where further research is needed. The ability to recognise objects and humans, to describe their actions and interactions from information acquired by sensors is essential for automated visual surveillance. The increasing need for intelligent visual surveillance in commercial, law enforcement and military applications makes automated visual surveillance systems one of the main current application domains in computer vision. The emphasis of this review is on discussion of the creation of intelligent distributed automated surveillance systems. The survey concludes with a discussion of possible future directions.", "title": "" }, { "docid": "b5f9535fb63cae3d115e1e5bded4795c", "text": "This study uses a hostage negotiation setting to demonstrate how a team of strategic police officers can utilize specific coping strategies to minimize uncertainty at different stages of their decision-making in order to foster resilient decision-making to effectively manage a high-risk critical incident. The presented model extends the existing research on coping with uncertainty by (1) applying the RAWFS heuristic (Lipshitz and Strauss in Organ Behav Human Decis Process 69:149–163, 1997) of individual decision-making under uncertainty to a team critical incident decision-making domain; (2) testing the use of various coping strategies during “in situ” team decision-making by using a live simulated hostage negotiation exercise; and (3) including an additional coping strategy (“reflection-in-action”; Schön in The reflective practitioner: how professionals think in action. Temple Smith, London, 1983) that aids naturalistic team decision-making. The data for this study were derived from a videoed strategic command meeting held within a simulated live hostage training event; these video data were coded along three themes: (1) decision phase; (2) uncertainty management strategy; and (3) decision implemented or omitted. Results illustrate that, when assessing dynamic and high-risk situations, teams of police officers cope with uncertainty by relying on “reduction” strategies to seek additional information and iteratively update these assessments using “reflection-in-action” (Schön 1983) based on previous experience. They subsequently progress to a plan formulation phase and use “assumption-based reasoning” techniques in order to mentally simulate their intended courses of action (Klein et al. 2007), and identify a preferred formulated strategy through “weighing the pros and cons” of each option. In the unlikely event that uncertainty persists to the plan execution phase, it is managed by “reduction” in the form of relying on plans and standard operating procedures or by “forestalling” and intentionally deferring the decision while contingency planning for worst-case scenarios.", "title": "" }, { "docid": "4ec7af75127df22c9cb7bd279cb2bcf3", "text": "This paper describes a real-time walking control system developed for the biped robots JOHNNIE and LOLA. Walking trajectories are planned on-line using a simplified robot model and modified by a stabilizing controller. The controller uses hybrid position/force control in task space based on a resolved motion rate scheme. Inertial stabilization is achieved by modifying the contact force trajectories. 
The paper includes an analysis of the dynamics of controlled bipeds, which is the basis for the proposed control system. The system was tested both in forward dynamics simulations and in experiments with JOHNNIE.", "title": "" }, { "docid": "c69c1ea60dd096005fa8a1d3b21d69ed", "text": "The presentation of information is a very important part of the comprehension of the whole. Therefore, the chosen visualization technique should be compatible with the content to be presented. An easy and fast visualization of the subjects developed by a research group, during certain periods, requires a dynamic visualization technique such as the Animated Word Cloud. With this technique, we were able to use the titles of bibliographic publications of researchers to present, in a clear and straightforward manner, information that is not easily evident just by reading its title. The synchronization of the videos generated from the Animated Word Clouds allows a deeper analysis, a quick and intuitive observation, and the perception of information presented simultaneously.", "title": "" }, { "docid": "49dc0f1c63cbccf1fac793b8514cb59e", "text": "The emergence of MIMO antennas and channel bonding in 802.11n wireless networks has resulted in a huge leap in capacity compared with legacy 802.11 systems. This leap, however, adds complexity to selecting the right transmission rate. Not only does the appropriate data rate need to be selected, but also the MIMO transmission technique (e.g., Spatial Diversity or Spatial Multiplexing), the number of streams, and the channel width. Incorporating these features into a rate adaptation (RA) solution requires a new set of rules to accurately evaluate channel conditions and select the appropriate transmission setting with minimal overhead. To address these challenges, we propose ARAMIS (Agile Rate Adaptation for MIMO Systems), a standard-compliant, closed-loop RA solution that jointly adapts rate and bandwidth. ARAMIS adapts transmission rates on a per-packet basis; we believe it is the first 802.11n RA algorithm that simultaneously adapts rate and channel width. We have implemented ARAMIS on Atheros-based devices and deployed it on our 15-node testbed. Our experiments show that ARAMIS accurately adapts to a wide variety of channel conditions with negligible overhead. Furthermore, ARAMIS outperforms existing RA algorithms in 802.11n environments with up to a 10 fold increase in throughput.", "title": "" }, { "docid": "71f388d3a2b50856c5529667df39602c", "text": "Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. 
Unlike existing techniques, Shift is only invoked when necessary--over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets.", "title": "" }, { "docid": "12b855b39278c49d448fbda9aa56cacf", "text": "Human visual system (HVS) can perceive constant color under varying illumination conditions while digital images record information of both reflectance (physical color) of objects and illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, to recover the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This is a similar formulation to Retinex theory while the MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.", "title": "" }, { "docid": "788501e065d2901e6a85287d62b4c941", "text": "D-amino acid oxidase (DAO) is a flavoenzyme that metabolizes certain D-amino acids, notably the endogenous N-methyl D-aspartate receptor (NMDAR) co-agonist, D-serine. As such, it has the potential to modulate the function of NMDAR and to contribute to the widely hypothesized involvement of NMDAR signalling in schizophrenia. Three lines of evidence now provide support for this possibility: DAO shows genetic associations with the disorder in several, although not all, studies; the expression and activity of DAO are increased in schizophrenia; and DAO inactivation in rodents produces behavioural and biochemical effects, suggestive of potential therapeutic benefits. However, several key issues remain unclear. These include the regional, cellular and subcellular localization of DAO, the physiological importance of DAO and its substrates other than D-serine, as well as the causes and consequences of elevated DAO in schizophrenia. Herein, we critically review the neurobiology of DAO, its involvement in schizophrenia, and the therapeutic value of DAO inhibition. This review also highlights issues that have a broader relevance beyond DAO itself: how should we weigh up convergent and cumulatively impressive, but individually inconclusive, pieces of evidence regarding the role that a given gene may have in the aetiology, pathophysiology and pharmacotherapy of schizophrenia?", "title": "" }, { "docid": "81d933a449c0529ab40f5661f3b1afa1", "text": "Scene classification plays a key role in interpreting the remotely sensed high-resolution images. With the development of deep learning, supervised learning in classification of Remote Sensing with convolutional networks (CNNs) has been frequently adopted. However, researchers paid less attention to unsupervised learning in remote sensing with CNNs. 
In order to filling the gap, this paper proposes a set of CNNs called Multiple lAyeR feaTure mAtching(MARTA) generative adversarial networks (GANs) to learn representation using only unlabeled data. There will be two models of MARTA GANs involved: (1) a generative model G that captures the data distribution and provides more training data; (2) a discriminative model D that estimates the possibility that a sample came from the training data rather than G and in this way a well-formed representation of dataset can be learned. Therefore, MARTA GANs obtain the state-of-the-art results which outperform the results got from UC-Merced Land-use dataset and Brazilian Coffee Scenes dataset.", "title": "" }, { "docid": "bd3f7e8e4416f67cb6e26ce0575af624", "text": "Soft materials are being adopted in robotics in order to facilitate biomedical applications and in order to achieve simpler and more capable robots. One route to simplification is to design the robot's body using `smart materials' that carry the burden of control and actuation. Metamaterials enable just such rational design of the material properties. Here we present a soft robot that exploits mechanical metamaterials for the intrinsic synchronization of two passive clutches which contact its travel surface. Doing so allows it to move through an enclosed passage with an inchworm motion propelled by a single actuator. Our soft robot consists of two 3D-printed metamaterials that implement auxetic and normal elastic properties. The design, fabrication and characterization of the metamaterials are described. In addition, a working soft robot is presented. Since the synchronization mechanism is a feature of the robot's material body, we believe that the proposed design will enable compliant and robust implementations that scale well with miniaturization.", "title": "" }, { "docid": "e50c07aa28cafffc43dd7eb29892f10f", "text": "Recent approaches to the Automatic Postediting (APE) of Machine Translation (MT) have shown that best results are obtained by neural multi-source models that correct the raw MT output by also considering information from the corresponding source sentence. To this aim, we present for the first time a neural multi-source APE model based on the Transformer architecture. Moreover, we employ sequence-level loss functions in order to avoid exposure bias during training and to be consistent with the automatic evaluation metrics used for the task. These are the main features of our submissions to the WMT 2018 APE shared task, where we participated both in the PBSMT subtask (i.e. the correction of MT outputs from a phrase-based system) and in the NMT subtask (i.e. the correction of neural outputs). In the first subtask, our system improves over the baseline up to -5.3 TER and +8.23 BLEU points ranking second out of 11 submitted runs. In the second one, characterized by the higher quality of the initial translations, we report lower but statistically significant gains (up to -0.38 TER and +0.8 BLEU), ranking first out of 10 submissions.", "title": "" }, { "docid": "5f0d4437ea08a4f0946ca04db7359ebc", "text": "The photosynthesis of previtamin D3 from 7-dehydrocholesterol in human skin was determined after exposure to narrow-band radiation or simulated solar radiation. The optimum wavelengths for the production of previtamin D3 were determined to be between 295 and 300 nanometers. When human skin was exposed to 295-nanometer radiation, up to 65 percent of the original 7-dehydrocholesterol content was converted to previtamin D3. 
In comparison, when adjacent skin was exposed to simulated solar radiation, the maximum formation of previtamin D3 was about 20 percent. Major differences in the formation of lumisterol3, and tachysterol3 from previtamin D3 were also observed. It is concluded that the spectral character of natural sunlight has a profound effect on the photochemistry of 7-dehydrocholesterol in human skin.", "title": "" }, { "docid": "3fb8519ca0de4871b105df5c5d8e489f", "text": "Intra-Body Communication (IBC), which modulates ionic currents over the human body as the communication medium, offers a low power and reliable signal transmission method for information exchange across the body. This paper first briefly reviews the quasi-static electromagnetic (EM) field modeling for a galvanic-type IBC human limb operating below 1 MHz and obtains the corresponding transfer function with correction factor using minimum mean square error (MMSE) technique. Then, the IBC channel characteristics are studied through the comparison between theoretical calculations via this transfer function and experimental measurements in both frequency domain and time domain. High pass characteristics are obtained in the channel gain analysis versus different transmission distances. In addition, harmonic distortions are analyzed in both baseband and passband transmissions for square input waves. The experimental results are consistent with the calculation results from the transfer function with correction factor. Furthermore, we also explore both theoretical and simulation results for the bit-error-rate (BER) performance of several common modulation schemes in the IBC system with a carrier frequency of 500 kHz. It is found that the theoretical results are in good agreement with the simulation results.", "title": "" }, { "docid": "4f9b66eb63cd23cd6364992759269a2c", "text": "In this paper, we present the concept of diffusing models to perform image-to-image matching. Having two images to match, the main idea is to consider the objects boundaries in one image as semi-permeable membranes and to let the other image, considered as a deformable grid model, diffuse through these interfaces, by the action of effectors situated within the membranes. We illustrate this concept by an analogy with Maxwell's demons. We show that this concept relates to more traditional ones, based on attraction, with an intermediate step being optical flow techniques. We use the concept of diffusing models to derive three different non-rigid matching algorithms, one using all the intensity levels in the static image, one using only contour points, and a last one operating on already segmented images. Finally, we present results with synthesized deformations and real medical images, with applications to heart motion tracking and three-dimensional inter-patients matching.", "title": "" }, { "docid": "b8def6380ef69091bec0d4e7b5442f57", "text": "In a number of key IC fabrication steps in-process wafers are sensitive to moisture, oxygen and other airborne molecular contaminants in the air. Nitrogen purge of closed Front Opening Unified Pods (FOUP) have been implemented in many fabs to minimize wafer's exposure to the contaminants (or CDA purge if oxygen is not of concern). As the technology node advances, the need for minimizing the exposure has become even more stringent and in some processes requires FOUP purge while the FOUP door is off on an EFEM loadport. 
This requirement brings unique challenges to FOUP purge, especially at the front locations near FOUP opening, where EFEM air constantly tries to enter the FOUP. In this paper we present Entegris' latest experimental study on understanding the unique challenges of FOUP door-off purge and the excellent test results of newly designed advanced FOUP with purge flow distribution manifolds (diffusers).", "title": "" }, { "docid": "3a4841b9aefdd0f96125132eaabdac49", "text": "Unstructured text data produced on the internet grows rapidly, and sentiment analysis for short texts becomes a challenge because of the limit of the contextual information they usually contain. Learning good vector representations for sentences is a challenging task and an ongoing research area. Moreover, learning long-term dependencies with gradient descent is difficult in neural network language model because of the vanishing gradients problem. Natural Language Processing (NLP) systems traditionally treat words as discrete atomic symbols; the model can leverage small amounts of information regarding the relationship between the individual symbols. In this paper, we propose ConvLstm, neural network architecture that employs Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) on top of pre-trained word vectors. In our experiments, ConvLstm exploit LSTM as a substitute of pooling layer in CNN to reduce the loss of detailed local information and capture long term dependencies in sequence of sentences. We validate the proposed model on two sentiment datasets IMDB, and Stanford Sentiment Treebank (SSTb). Empirical results show that ConvLstm achieved comparable performances with less parameters on sentiment analysis tasks.", "title": "" }, { "docid": "109b1ec344802099e833a5988832945b", "text": "In this paper, we consider the problem of learning representations for authors from bibliographic co-authorship networks. Existing methods for deep learning on graphs, such as DeepWalk, suffer from link sparsity problem as they focus on modeling the link information only. We hypothesize that capturing both the content and link information in a unified way will help mitigate the sparsity problem. To this end, we present a novel model ‘Author2Vec’ , which learns lowdimensional author representations such that authors who write similar content and share similar network structure are closer in vector space. Such embeddings are useful in a variety of applications such as link prediction, node classification, recommendation and visualization. The author embeddings we learn are empirically shown to outperform DeepWalk by 2.35% and 0.83% for link prediction and clustering task respectively.", "title": "" }, { "docid": "fe446f500549cedce487b78a133cbc45", "text": "Drug addiction manifests as a compulsive drive to take a drug despite serious adverse consequences. This aberrant behaviour has traditionally been viewed as bad 'choices' that are made voluntarily by the addict. However, recent studies have shown that repeated drug use leads to long-lasting changes in the brain that undermine voluntary control. This, combined with new knowledge of how environmental, genetic and developmental factors contribute to addiction, should bring about changes in our approach to the prevention and treatment of addiction.", "title": "" }, { "docid": "5f526d3ac8329fb801ece415f78eb343", "text": "Usability evaluation is an increasingly important part of the user interface design process. 
However, usability evaluation can be expensive in terms of time and human resources, and automation is therefore a promising way to augment existing approaches. This article presents an extensive survey of usability evaluation methods, organized according to a new taxonomy that emphasizes the role of automation. The survey analyzes existing techniques, identifies which aspects of usability evaluation automation are likely to be of use in future research, and suggests new ways to expand existing approaches to better support usability evaluation.", "title": "" }, { "docid": "2b4b822d722fac299ae7504078d87fd0", "text": "LETOR is a package of benchmark data sets for research on LEarning TO Rank, which contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines. Version 1.0 was released in April 2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec. 2008. This version, 4.0, was released in July 2009. Very different from previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page collection (~25M pages) and two query sets from Million Query track of TREC 2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short. There are about 1700 queries in MQ2007 with labeled documents and about 800 queries in MQ2008 with labeled documents. If you have any questions or suggestions about the datasets, please kindly email us ([email protected]). Our goal is to make the dataset reliable and useful for the community.", "title": "" } ]
scidocsrr
23a6b86e263bee0df6297d134d1132ba
Lifted Probabilistic Inference with Counting Formulas
[ { "docid": "219a90eb2fd03cd6cc5d89fda740d409", "text": "The general problem of computing poste rior probabilities in Bayesian networks is NP hard Cooper However e cient algorithms are often possible for particular applications by exploiting problem struc tures It is well understood that the key to the materialization of such a possibil ity is to make use of conditional indepen dence and work with factorizations of joint probabilities rather than joint probabilities themselves Di erent exact approaches can be characterized in terms of their choices of factorizations We propose a new approach which adopts a straightforward way for fac torizing joint probabilities In comparison with the clique tree propagation approach our approach is very simple It allows the pruning of irrelevant variables it accommo dates changes to the knowledge base more easily it is easier to implement More importantly it can be adapted to utilize both intercausal independence and condi tional independence in one uniform frame work On the other hand clique tree prop agation is better in terms of facilitating pre computations", "title": "" }, { "docid": "8dc493568e94d94370f78e663da7df96", "text": "Expertise in C++, C, Perl, Haskell, Linux system administration. Technical experience in compiler design and implementation, release engineering, network administration, FPGAs, hardware design, probabilistic inference, machine learning, web search engines, cryptography, datamining, databases (SQL, Oracle, PL/SQL, XML), distributed knowledge bases, machine vision, automated web content generation, 2D and 3D graphics, distributed computing, scientific and numerical computing, optimization, virtualization (Xen, VirtualBox). Also experience in risk analysis, finance, game theory, firm behavior, international economics. Familiar with Java, C++ Standard Template Library, Java Native Interface, Java Foundation Classes, Android development, MATLAB, CPLEX, NetPBM, Cascading Style Sheets (CSS), Tcl/Tk, Windows system administration, Mac OS X system administration, ElasticSearch, modifying the Ubuntu installer.", "title": "" }, { "docid": "5536e605e0b8a25ee0a5381025484f60", "text": "Relational Markov Random Fields are a general and flexible framework for reasoning about the joint distribution over attributes of a large number of interacting entities. The main computational difficulty in learning such models is inference. Even when dealing with complete data, where one can summarize a large domain by sufficient statistics, learning requires one to compute the expectation of the sufficient statistics given different parameter choices. The typical solution to this problem is to resort to approximate inference procedures, such as loopy belief propagation. Although these procedures are quite efficient, they still require computation that is on the order of the number of interactions (or features) in the model. When learning a large relational model over a complex domain, even such approximations require unrealistic running time. In this paper we show that for a particular class of relational MRFs, which have inherent symmetry, we can perform the inference needed for learning procedures using a template-level belief propagation. This procedure’s running time is proportional to the size of the relational model rather than the size of the domain. Moreover, we show that this computational procedure is equivalent to sychronous loopy belief propagation. This enables a dramatic speedup in inference and learning time. 
We use this procedure to learn relational MRFs for capturing the joint distribution of large protein-protein interaction networks.", "title": "" }, { "docid": "7fc6ffb547bc7a96e360773ce04b2687", "text": "Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.", "title": "" }, { "docid": "db897ae99b6e8d2fc72e7d230f36b661", "text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.", "title": "" }, { "docid": "93f1e6d0e14ce5aa07b32ca6bdf3dee4", "text": "Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for propositional satis ability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization, can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include: belief updating, nding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the inducedwidth of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called \\conditioning search\" require only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms are: backtracking (in constraint satisfaction), and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and show how conditioning search can be augmented to systematically trade space for time.", "title": "" } ]
[ { "docid": "190d238e9fd3701c01a8408258d0fac6", "text": "Depression and anxiety load in families. In the present study, we focus on exposure to parental negative emotions in first postnatal year as a developmental pathway to early parent-to-child transmission of depression and anxiety. We provide an overview of the little research available on the links between infants' exposure to negative emotion and infants' emotional development in this developmentally sensitive period, and highlight priorities for future research. To address continuity between normative and maladaptive development, we discuss exposure to parental negative emotions in infants of parents with as well as without depression and/or anxiety diagnoses. We focus on infants' emotional expressions in everyday parent-infant interactions, and on infants' attention to negative facial expressions as early indices of emotional development. Available evidence suggests that infants' emotional expressions echo parents' expressions and reactions in everyday interactions. In turn, infants exposed more to negative emotions from the parent seem to attend less to negative emotions in others' facial expressions. The links between exposure to parental negative emotion and development hold similarly in infants of parents with and without depression and/or anxiety diagnoses. Given its potential links to infants' emotional development, and to later psychological outcomes in children of parents with depression and anxiety, we conclude that early exposure to parental negative emotions is an important developmental mechanism that awaits further research. Longitudinal designs that incorporate the study of early exposure to parents' negative emotion, socio-emotional development in infancy, and later psychological functioning while considering other genetic and biological vulnerabilities should be prioritized in future research.", "title": "" }, { "docid": "986f469fc8d367baa8ad0db10caf3241", "text": "While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. 
We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.", "title": "" }, { "docid": "3ddf82be24ab5e20c141f67dfde05fdc", "text": "In August 1998, Texas AM University implemented on campus a trap-test-vaccinate-alter-return-monitor (TTVARM) program to manage the feral cat population. TTVARM is an internationally recognized term for trapping and neutering programs aimed at management of feral cat populations. In this article we summarize results of the program for the period August 1998 to July 2000. In surgery laboratories, senior veterinary students examined cats that were humanely trapped once a month and tested them for feline leukemia and feline immunodeficiency virus infections, vaccinated, and surgically neutered them. They euthanized cats testing positive for either infectious disease. Volunteers provided food and observed the cats that were returned to their capture sites on campus and maintained in managed colonies. The program placed kittens and tame cats for adoption; cats totaled 158. Of the majority of 158 captured cats, there were less kittens caught in Year 2 than in Year 1. The proportion of tame cats trapped was significantly greater in Year 2 than in Year 1. The prevalence found for feline leukemia and feline immunodeficiency virus ELISA test positives was 5.8% and 6.5%, respectively. Following surgery, 101 cats returned to campus. The project recaptured, retested, and revaccinated more than one-fourth of the cats due for their annual vaccinations. The program placed 32 kittens, juveniles, and tame adults for adoption. The number of cat complaints received by the university's pest control service decreased from Year 1 to Year 2.", "title": "" }, { "docid": "3f0d37296258c68a20da61f34364405d", "text": "Need to develop human body's posture supervised robots, gave the push to researchers to think over dexterous design of exoskeleton robots. It requires to develop quantitative techniques to assess motor function and generate the command for the robots to act accordingly with complex human structure. In this paper, we present a new technique for the upper limb power exoskeleton robot in which load is gripped by the human subject and not by the robot while the robot assists. Main challenge is to find non-biological signal based human desired motion intention to assist as needed. For this purpose, we used newly developed Muscle Circumference Sensor (MCS) instead of electromyogram (EMG) sensors. MCS together with the force sensors is used to estimate the human interactive force from which desired human motion is extracted using adaptive Radial Basis Function Neural Network (RBFNN). Developed Upper limb power exoskeleton has seven degrees of freedom (DOF) in which five DOF are passive while two are active. Active joints include shoulder and elbow in Sagittal plane while abduction and adduction motion in shoulder joint is provided by the passive joints. To ensure high quality performance model reference based adaptive impedance controller is employed. Exoskeleton performance is evaluated experimentally by a neurologically intact subject which validates the effectiveness.", "title": "" }, { "docid": "7ce147a433a376dd1cc0f7f09576e1bd", "text": "Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. 
In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004).", "title": "" }, { "docid": "2c3566048334e60ae3f30bd631e4da87", "text": "The Indian Railways is world's fourth largest railway network in the world after USA, Russia and China. There is a severe problem of collisions of trains. So Indian railway is working in this aspect to promote the motto of \"SAFE JOURNEY\". A RFID based railway track finding system for railway has been proposed in this paper. In this system the RFID tags and reader are used which are attached in the tracks and engine consecutively. So Train engine automatically get the data of path by receiving it from RFID tag and detect it. If path is correct then train continue to run on track and if it is wrong then a signal is generated and sent to the control station and after this engine automatically stop in a minimum time and the display of LCD show the \"WRONG PATH\". So the collision and accident of train can be avoided. With the help of this system the train engine would be programmed to move according to the requirement. The another feature of this system is automatic track changer by which the track jointer would move automatically according to availability of trains.", "title": "" }, { "docid": "08fedcf80c0905de2598ccd45da706a5", "text": "Translation of named entities (NEs), such as person names, organization names and location names is crucial for cross lingual information retrieval, machine translation, and many other natural language processing applications. Newly named entities are introduced on daily basis in newswire and this greatly complicates the translation task. Also, while some names can be translated, others must be transliterated, and, still, others are mixed. In this paper we introduce an integrated approach for named entity translation deploying phrase-based translation, word-based translation, and transliteration modules into a single framework. While Arabic based, the approach introduced here is a unified approach that can be applied to NE translation for any language pair.", "title": "" }, { "docid": "73bf620a97b2eadeb2398dd718b85fe8", "text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. 
Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.", "title": "" }, { "docid": "4451f35b38f0b3af0ff006d8995b0265", "text": "Social media together with still growing social media communities has become a powerful and promising solution in crisis and emergency management. Previous crisis events have proved that social media and mobile technologies used by citizens (widely) and public services (to some extent) have contributed to the post-crisis relief efforts. The iSAR+ EU FP7 project aims at providing solutions empowering citizens and PPDR (Public Protection and Disaster Relief) organizations in online and mobile communications for the purpose of crisis management especially in search and rescue operations. This paper presents the results of survey aiming at identification of preliminary end-user requirements in the close interworking with end-users across Europe.", "title": "" }, { "docid": "6646b66370ed02eb84661c8505eb7563", "text": "Re-identification is generally carried out by encoding the appearance of a subject in terms of outfit, suggesting scenarios where people do not change their attire. In this paper we overcome this restriction, by proposing a framework based on a deep convolutional neural network, SOMAnet, that additionally models other discriminative aspects, namely, structural attributes of the human figure (e.g. height, obesity, gender). Our method is unique in many respects. First, SOMAnet is based on the Inception architecture, departing from the usual siamese framework. This spares expensive data preparation (pairing images across cameras) and allows the understanding of what the network learned. Second, and most notably, the training data consists of a synthetic 100K instance dataset, SOMAset, created by photorealistic human body generation software. Synthetic data represents a good compromise between realistic imagery, usually not required in re-identification since surveillance cameras capture low-resolution silhouettes, and complete control of the samples, which is useful in order to customize the data w.r.t. the surveillance scenario at-hand, e.g. ethnicity. SOMAnet, trained on SOMAset and fine-tuned on recent re-identification benchmarks, outperforms all competitors, matching subjects even with different apparel. The combination of synthetic data with Inception architectures opens up new research avenues in re-identification.", "title": "" }, { "docid": "fb8b90ccf64f64e7f5c4e2c6718107df", "text": "The Standardized Precipitation Evapotranspiration Index (SPEI) was developed in 2010 and has been used in an increasing number of climatology and hydrology studies. The objective of this article is to describe computing options that provide flexible and robust use of the SPEI. In particular, we present methods for estimating the parameters of the log-logistic distribution for obtaining standardized values, methods for computing reference evapotranspiration (ET0), and weighting kernels used for calculation of the SPEI at different time scales. We discuss the use of alternative ET0 and actual evapotranspiration (ETa) methods and different options on the resulting SPEI series by use of observational and global gridded data. The results indicate that the equation used to calculate ET0 can have a significant effect on the SPEI in some regions of the world. 
Although the original formulation of the SPEI was based on plotting-positions Probability Weighted Moment (PWM), we now recommend use of unbiased PWM for model fitting. Finally, we present new software tools for computation and analysis of SPEI series, an updated global gridded database, and a realtime drought-monitoring system.", "title": "" }, { "docid": "31756ac6aaa46df16337dbc270831809", "text": "Broadly speaking, the goal of neuromorphic engineering is to build computer systems that mimic the brain. Spiking Neural Network (SNN) is a type of biologically-inspired neural networks that perform information processing based on discrete-time spikes, different from traditional Artificial Neural Network (ANN). Hardware implementation of SNNs is necessary for achieving high-performance and low-power. We present the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on SNN implemented with digitallogic, supporting a maximum of 2048 neurons, 20482 = 4194304 synapses, and 15 possible synaptic delays. The Darwin NPU was fabricated by standard 180 nm CMOS technology with an area size of 5 ×5 mm2 and 70 MHz clock frequency at the worst case. It consumes 0.84 mW/MHz with 1.8 V power supply for typical applications. Two prototype applications are used to demonstrate the performance and efficiency of the hardware implementation. 脉冲神经网络(SNN)是一种基于离散神经脉冲进行信息处理的人工神经网络。本文提出的“达尔文”芯片是一款基于SNN的类脑硬件协处理器。它支持神经网络拓扑结构,神经元与突触各种参数的灵活配置,最多可支持2048个神经元,四百万个神经突触及15个不同的突触延迟。该芯片采用180纳米CMOS工艺制造,面积为5x5平方毫米,最坏工作频率达到70MHz,1.8V供电下典型应用功耗为0.84mW/MHz。基于该芯片实现了两个应用案例,包括手写数字识别和运动想象脑电信号分类。", "title": "" }, { "docid": "83ed8190d8f0715d79580043b83d3620", "text": "We describe a probabilistic method for identifying characters in TV series or movies. We aim at labeling every character appearance, and not only those where a face can be detected. Consequently, our basic unit of appearance is a person track (as opposed to a face track). We model each TV series episode as a Markov Random Field, integrating face recognition, clothing appearance, speaker recognition and contextual constraints in a probabilistic manner. The identification task is then formulated as an energy minimization problem. In order to identify tracks without faces, we learn clothing models by adapting available face recognition results. Within a scene, as indicated by prior analysis of the temporal structure of the TV series, clothing features are combined by agglomerative clustering. We evaluate our approach on the first 6 episodes of The Big Bang Theory and achieve an absolute improvement of 20% for person identification and 12% for face recognition.", "title": "" }, { "docid": "97711981f9bfe4f9ba7b2070427988d4", "text": "Mathematical models have been used to provide an explicit framework for understanding malaria transmission dynamics in human population for over 100 years. With the disease still thriving and threatening to be a major source of death and disability due to changed environmental and socio-economic conditions, it is necessary to make a critical assessment of the existing models, and study their evolution and efficacy in describing the host-parasite biology. In this article, starting from the basic Ross model, the key mathematical models and their underlying features, based on their specific contributions in the understanding of spread and transmission of malaria have been discussed. 
The first aim of this article is to develop, starting from the basic models, a hierarchical structure of a range of deterministic models of different levels of complexity. The second objective is to elaborate, using some of the representative mathematical models, the evolution of modelling strategies to describe malaria incidence by including the critical features of host-vector-parasite interactions. Emphasis is more on the evolution of the deterministic differential equation based epidemiological compartment models with a brief discussion on data based statistical models. In this comprehensive survey, the approach has been to summarize the modelling activity in this area so that it helps reach a wider range of researchers working on epidemiology, transmission, and other aspects of malaria. This may facilitate the mathematicians to further develop suitable models in this direction relevant to the present scenario, and help the biologists and public health personnel to adopt better understanding of the modelling strategies to control the disease", "title": "" }, { "docid": "4ae7e3cb36dd23cfe41e743e47844cb7", "text": "We present a voltage-scalable and process-variation resilient memory architecture, suitable for MPEG-4 video processors such that power dissipation can be traded for graceful degradation in \"quality\". The key innovation in our proposed work is a hybrid memory array, which is mixture of conventional 6T and 8T SRAM bit-cells. The fundamental premise of our approach lies in the fact that human visual system (HVS) is mostly sensitive to higher order bits of luminance pixels in video data. We implemented a preferential storage policy in which the higher order luma bits are stored in robust 8T bit-cells while the lower order bits are stored in conventional 6T bit-cells. This facilitates aggressive scaling of supply voltage in memory as the important luma bits, stored in 8T bit-cells, remain relatively unaffected by voltage scaling. The not-so-important lower order luma bits, stored in 6T bit-cells, if affected, contribute insignificantly to the overall degradation in output video quality. Simulation results show average power savings of up to 56%, in the hybrid memory array compared to the conventional 6T SRAM array implemented in 65nm CMOS. The area overhead and maximum output quality degradation (PSNR) incurred were 11.5% and 0.56 dB, respectively.", "title": "" }, { "docid": "6c682f3412cc98eac5ae2a2356dccef7", "text": "Since their inception, micro-size light emitting diode (µLED) arrays based on III-nitride semiconductors have emerged as a promising technology for a range of applications. This paper provides an overview on a decade progresses on realizing III-nitride µLED based high voltage single-chip AC/DC-LEDs without power converters to address the key compatibility issue between LEDs and AC power grid infrastructure; and high-resolution solid-state self-emissive microdisplays operating in an active driving scheme to address the need of high brightness, efficiency and robustness of microdisplays. These devices utilize the photonic integration approach by integrating µLED arrays on-chip. Other applications of nitride µLED arrays are also discussed.", "title": "" }, { "docid": "b4dd76179734fb43e74c9c1daef15bbf", "text": "Breast cancer represents one of the diseases that make a high number of deaths every year. It is the most common type of all cancers and the main cause of women’s deaths worldwide. Classification and data mining methods are an effective way to classify data. 
Especially in medical field, where those methods are widely used in diagnosis and analysis to make decisions. In this paper, a performance comparison between different machine learning algorithms: Support Vector Machine (SVM), Decision Tree (C4.5), Naive Bayes (NB) and k Nearest Neighbors (k-NN) on the Wisconsin Breast Cancer (original) datasets is conducted. The main objective is to assess the correctness in classifying data with respect to efficiency and effectiveness of each algorithm in terms of accuracy, precision, sensitivity and specificity. Experimental results show that SVM gives the highest accuracy (97.13%) with lowest error rate. All experiments are executed within a simulation environment and conducted in WEKA data mining tool. © 2016 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Conference Program Chairs.", "title": "" }, { "docid": "4bc73a7e6a6975ba77349cac62a96c18", "text": "BACKGROUND\nIn May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear.\n\n\nMETHODS\nThe study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play.\n\n\nRESULTS\nOn average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found.\n\n\nCONCLUSIONS\nThe present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.", "title": "" }, { "docid": "9172d4ba2e86a7d4918ef64d7b837084", "text": "Electromagnetic generators (EMGs) and triboelectric nanogenerators (TENGs) are the two most powerful approaches for harvesting ambient mechanical energy, but the effectiveness of each depends on the triggering frequency. Here, after systematically comparing the performances of EMGs and TENGs under low-frequency motion (<5 Hz), we demonstrated that the output performance of EMGs is proportional to the square of the frequency, while that of TENGs is approximately in proportion to the frequency. Therefore, the TENG has a much better performance than that of the EMG at low frequency (typically 0.1-3 Hz). 
Importantly, the extremely small output voltage of the EMG at low frequency makes it almost inapplicable to drive any electronic unit that requires a certain threshold voltage (∼0.2-4 V), so that most of the harvested energy is wasted. In contrast, a TENG has an output voltage that is usually high enough (>10-100 V) and independent of frequency so that most of the generated power can be effectively used to power the devices. Furthermore, a TENG also has advantages of light weight, low cost, and easy scale up through advanced structure designs. All these merits verify the possible killer application of a TENG for harvesting energy at low frequency from motions such as human motions for powering small electronics and possibly ocean waves for large-scale blue energy.", "title": "" }, { "docid": "0a790469194c1984ae2175d9ea49688c", "text": "Gynecomastia refers to a benign enlargement of the male breast. This article describes the authors’ method of using power-assisted liposuction and gland removal through a subareolar incision for thin patients. Power-assisted liposuction is performed for removal of fatty breast tissue in the chest area to allow skin retraction. The subareolar incision is used to remove glandular tissue from a male subject considered to be within a normal weight range but who has bilateral grade 1 or 2 gynecomastia. Gynecomastia correction was successfully performed for all the patients. The average volume of aspirated fat breast was 100–200 ml on each side. Each breast had 5–80 g of breast tissue removed. At the 3-month, 6-month, and 1-year follow-up assessments, all the treated patients were satisfied with their aesthetic results. Liposuction has the advantages of reducing the fat tissue where necessary to allow skin retraction and of reducing the traces left by surgery. The combination of surgical excision and power-assisted lipoplasty also is a valid choice for the treatment of thin patients.", "title": "" } ]
scidocsrr
ec9d1ea5b46ac338f26de530bc117b04
Towards the Internet of Smart Trains: A Review on Industrial IoT-Connected Railways
[ { "docid": "d529d1052fce64ae05fbc64d2b0450ab", "text": "Today, many industrial companies must face problems raised by maintenance. In particular, the anomaly detection problem is probably one of the most challenging. In this paper we focus on the railway maintenance task and propose to automatically detect anomalies in order to predict in advance potential failures. We first address the problem of characterizing normal behavior. In order to extract interesting patterns, we have developed a method to take into account the contextual criteria associated to railway data (itinerary, weather conditions, etc.). We then measure the compliance of new data, according to extracted knowledge, and provide information about the seriousness and the exact localization of a detected anomaly. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7e647cac9417bf70acd8c0b4ee0faa9b", "text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.", "title": "" } ]
[ { "docid": "ba67c3006c6167550bce500a144e63f1", "text": "This paper provides an overview of different methods for evaluating automatic summarization systems. The challenges in evaluating summaries are characterized. Both intrinsic and extrinsic approaches are discussed. Methods for assessing informativeness and coherence are described. The advantages and disadvantages of specific methods are assessed, along with criteria for choosing among them. The paper concludes with some suggestions for future directions.", "title": "" }, { "docid": "14508a81494077406b90632d38e09d44", "text": "During realistic, continuous perception, humans automatically segment experiences into discrete events. Using a novel model of cortical event dynamics, we investigate how cortical structures generate event representations during narrative perception and how these events are stored to and retrieved from memory. Our data-driven approach allows us to detect event boundaries as shifts between stable patterns of brain activity without relying on stimulus annotations and reveals a nested hierarchy from short events in sensory regions to long events in high-order areas (including angular gyrus and posterior medial cortex), which represent abstract, multimodal situation models. High-order event boundaries are coupled to increases in hippocampal activity, which predict pattern reinstatement during later free recall. These areas also show evidence of anticipatory reinstatement as subjects listen to a familiar narrative. Based on these results, we propose that brain activity is naturally structured into nested events, which form the basis of long-term memory representations.", "title": "" }, { "docid": "8738ec0c6e265f0248d7fa65de4cdd05", "text": "BACKGROUND\nCaring traditionally has been at the center of nursing. Effectively measuring the process of nurse caring is vital in nursing research. A short, less burdensome dimensional instrument for patients' use is needed for this purpose.\n\n\nOBJECTIVES\nTo derive and validate a shorter Caring Behaviors Inventory (CBI) within the context of the 42-item CBI.\n\n\nMETHODS\nThe responses to the 42-item CBI from 362 hospitalized patients were used to develop a short form using factor analysis. A test-retest reliability study was conducted by administering the shortened CBI to new samples of patients (n = 64) and nurses (n = 42).\n\n\nRESULTS\nFactor analysis yielded a 24-item short form (CBI-24) that (a) covers the four major dimensions assessed by the 42-item CBI, (b) has internal consistency (alpha =.96) and convergent validity (r =.62) similar to the 42-item CBI, (c) reproduces at least 97% of the variance of the 42 items in patients and nurses, (d) provides statistical conclusions similar to the 42-item CBI on scoring for caring behaviors by patients and nurses, (e) has similar sensitivity in detecting between-patient difference in perceptions, (f) obtains good test-retest reliability (r = .88 for patients and r=.82 for nurses), and (g) confirms high internal consistency (alpha >.95) as a stand-alone instrument administered to the new samples.\n\n\nCONCLUSION\nCBI-24 appears to be equivalent to the 42-item CBI in psychometric properties, validity, reliability, and scoring for caring behaviors among patients and nurses. 
These results recommend the use of CBI-24 to reduce response burden and research costs.", "title": "" }, { "docid": "4253afeaeb2f238339611e5737ed3e06", "text": "Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.", "title": "" }, { "docid": "c6054c39b9b36b5d446ff8da3716ec30", "text": "The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage the large amounts of data stream, especially in specific domains of application such as critical infrastructure systems, sensor networks, log file analysis, search engines and more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time constraints and space complexity. Algorithms, data management and data retrieval techniques must be able to process data stream, i.e., process data as it becomes available and provide an accurate response, based solely on the data stream that has already been provided. Data retrieval techniques often require traditional data storage and processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency–Inverse Document Frequency (TF–IDF), which can evaluate how important a word is in a collection of documents and requires to a priori know the whole dataset. To address this problem, we propose an approximate version of the TF–IDF measure suitable to work on continuous data stream (such as the exchange of messages, tweets and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. In addition, to face the great computational power required to process massive data stream, we present also a parallel implementation of the approximate TF–IDF calculation using Graphical Processing Units (GPUs). This implementation of the algorithm was tested on generated and real data stream and was able to capture the most frequent terms. Our results demonstrate that the approximate version of the TF–IDF measure performs at a level that is comparable to the solution of the precise TF–IDF measure. 2014 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "36cd997369a654567f2304070b22638c", "text": "There has been a recent increase in the prevalence of asthma worldwide; however, the 5-10% of patients with severe disease account for a substantial proportion of the health costs. 
Although most asthma cases can be satisfactorily managed with a combination of anti-inflammatory drugs and bronchodilators, patients who remain symptomatic despite maximum combination treatment represent a heterogeneous group consisting of those who are under-treated or non-adherent with their prescribed medication. After excluding under-treatment and poor compliance, corticosteroid refractory asthma can be identified as a subphenotype characterised by a heightened neutrophilic airway inflammatory response in the presence or absence of eosinophils, with evidence of increased tissue injury and remodelling. Although a wide range of environmental factors such as allergens, smoking, air pollution, infection, hormones, and specific drugs can contribute to this phenotype, other features associated with changes in the airway inflammatory response should be taken into account. Aberrant communication between an injured airway epithelium and underlying mesenchyme contributes to disease chronicity and refractoriness to corticosteroids. The importance of identifying underlying causative factors and the recent introduction of novel therapeutic approaches, including the targeting of immunoglobulin E and tumour necrosis factor alpha with biological agents, emphasise the need for careful phenotyping of patients with severe disease to target improved management of the individual patient's needs.", "title": "" }, { "docid": "49c4137c763c2f9bb48b2b95ace9623a", "text": "Multi-relational data, like knowledge graphs, are generated from multiple data sources by extracting entities and their relationships. We often want to include inferred, implicit or likely relationships that are not explicitly stated, which can be viewed as link-prediction in a graph. Tensor decomposition models have been shown to produce state-of-the-art results in link-prediction tasks. We describe a simple but novel extension to an existing tensor decomposition model to predict missing links using similarity among tensor slices, as opposed to an existing tensor decomposition models which assumes each slice to contribute equally in predicting links. Our extended model performs better than the original tensor decomposition and the non-negative tensor decomposition variant of it in an evaluation on several datasets.", "title": "" }, { "docid": "2f20f587bb46f7133900fd8c22cea3ab", "text": "Recent years have witnessed the significant advance in fine-grained visual categorization, which targets to classify the objects belonging to the same species. To capture enough subtle visual differences and build discriminative visual description, most of the existing methods heavily rely on the artificial part annotations, which are expensive to collect in real applications. Motivated to conquer this issue, this paper proposes a multi-level coarse-to-fine object description. This novel description only requires the original image as input, but could automatically generate visual descriptions discriminative enough for fine-grained visual categorization. This description is extracted from five sources representing coarse-to-fine visual clues: 1) original image is used as the source of global visual clue; 2) object bounding boxes are generated using convolutional neural network (CNN); 3) with the generated bounding box, foreground is segmented using the proposed k nearest neighbour-based co-segmentation algorithm; and 4) two types of part segmentations are generated by dividing the foreground with an unsupervised part learning strategy. 
The final description is generated by feeding these sources into CNN models and concatenating their outputs. Experiments on two public benchmark data sets show the impressive performance of this coarse-to-fine description, i.e., classification accuracy achieves 82.5% on CUB-200-2011, and 86.9% on fine-grained visual categorization-Aircraft, respectively, which outperform many recent works.", "title": "" }, { "docid": "f12749ba8911e8577fbde2327c9dc150", "text": "Regardless of successful applications of the convolutional neural networks (CNNs) in different fields, its application to seismic waveform classification and first-break (FB) picking has not been explored yet. This letter investigates the application of CNNs for classifying time-space waveforms from seismic shot gathers and picking FBs of both direct wave and refracted wave. We use representative subimage samples with two types of labeled waveform classification to supervise CNNs training. The goal is to obtain the optimal weights and biases in CNNs, which are solved by minimizing the error between predicted and target label classification. The trained CNNs can be utilized to automatically extract a set of time-space attributes or features from any subimage in shot gathers. These attributes are subsequently inputted to the trained fully connected layer of CNNs to output two values between 0 and 1. Based on the two-element outputs, a discriminant score function is defined to provide a single indication for classifying input waveforms. The FB is then located from the calculated score maps by sequentially using a threshold, the first local minimum rule of every trace and a median filter. Finally, we adopt synthetic and real shot data examples to demonstrate the effectiveness of CNNs-based waveform classification and FB picking. The results illustrate that CNN is an efficient automatic data-driven classifier and picker.", "title": "" }, { "docid": "1ee679d237c54dd8aaaeb2383d6b49fa", "text": "Bike sharing systems (BSSs) have become common in many cities worldwide, providing a new transportation mode for residents' commutes. However, the management of these systems gives rise to many problems. As the bike pick-up demands at different places are unbalanced at times, the systems have to be rebalanced frequently. Rebalancing the bike availability effectively, however, is very challenging as it demands accurate prediction for inventory target level determination. In this work, we propose two types of regression models using multi-source data to predict the hourly bike pick-up demand at cluster level: Similarity Weighted K-Nearest-Neighbor (SWK) based regression and Artificial Neural Network (ANN). SWK-based regression models learn the weights of several meteorological factors and/or taxi usage and use the correlation between consecutive time slots to predict the bike pick-up demand. The ANN is trained by using historical trip records of BSS, meteorological data, and taxi trip records. Our proposed methods are tested with real data from a New York City BSS: Citi Bike NYC. Performance comparison between SWK-based and ANN-based methods is provided. Experimental results indicate the high accuracy of ANN-based prediction for bike pick-up demand using multisource data.", "title": "" }, { "docid": "eed8fd39830e8058d55427623bb655df", "text": "In this paper, we present a solution for main content identification in web pages. Our solution is language-independent; Web pages may be written in different languages. 
It is topic-independent; no domain knowledge or dictionary is applied. And it is unsupervised; no training phase is necessary. The solution exploits the tree structure of web pages and the frequencies of text tokens to attribute scores of content density to the areas of the page and by the way identify the most important one. We tested this solution over representative examples of web pages to show how efficient and accurate it is. The results were satisfying.", "title": "" }, { "docid": "11f47bb575a6e50c3d3ccef0e75ff3b9", "text": "Corporate social responsibility is incorporated into strategic management at the enterprise strategy level. This paper delineates the domain of enterprise strategy by focusing on how well a firm's social performance matches its competences and stakeholders rather than on the \"quantity\" of a firm's social responsibility. Enterprise strategy is defined and a classification of enterprise strategies is set forth.", "title": "" }, { "docid": "197797b3bb51791a5986d0ee0ea04d2b", "text": "Energy harvesting for wireless communication networks is a new paradigm that allows terminals to recharge their batteries from external energy sources in the surrounding environment. A promising energy harvesting technology is wireless power transfer where terminals harvest energy from electromagnetic radiation. Thereby, the energy may be harvested opportunistically from ambient electromagnetic sources or from sources that intentionally transmit electromagnetic energy for energy harvesting purposes. A particularly interesting and challenging scenario arises when sources perform simultaneous wireless information and power transfer (SWIPT), as strong signals not only increase power transfer but also interference. This article provides an overview of SWIPT systems with a particular focus on the hardware realization of rectenna circuits and practical techniques that achieve SWIPT in the domains of time, power, antennas, and space. The article also discusses the benefits of a potential integration of SWIPT technologies in modern communication networks in the context of resource allocation and cooperative cognitive radio networks.", "title": "" }, { "docid": "ad2d21232d8a9af42ea7339574739eb3", "text": "Majority of CNN architecture design is aimed at achieving high accuracy in public benchmarks by increasing the complexity. Typically, they are over-specified by a large margin and can be optimized by a factor of 10-100x with only a small reduction in accuracy. In spite of the increase in computational power of embedded systems, these networks are still not suitable for embedded deployment. There is a large need to optimize for hardware and reduce the size of the network by orders of magnitude for computer vision applications. This has led to a growing community which is focused on designing efficient networks. However, CNN architectures are evolving rapidly and efficient architectures seem to lag behind. There is also a gap in understanding the hardware architecture details and incorporating it into the network design. The motivation of this paper is to systematically summarize efficient design techniques and provide guidelines for an application developer. We also perform a case study by benchmarking various semantic segmentation algorithms for autonomous driving.", "title": "" }, { "docid": "7e6e2d5fabb642fbb089c3e0c2f04921", "text": "Computer vision is one of the most active research fields in information technology today. 
Giving machines and robots the ability to see and comprehend the surrounding world at the speed of sight creates endless potential applications and opportunities. Feature detection and description algorithms can be indeed considered as the retina of the eyes of such machines and robots. However, these algorithms are typically computationally intensive, which prevents them from achieving the speed of sight real-time performance. In addition, they differ in their capabilities and some may favor and work better given a specific type of input compared to others. As such, it is essential to compactly report their pros and cons as well as their performances and recent advances. This paper is dedicated to provide a comprehensive overview on the state-of-the-art and recent advances in feature detection and description algorithms. Specifically, it starts by overviewing fundamental concepts. It then compares, reports and discusses their performance and capabilities. The Maximally Stable Extremal Regions algorithm and the Scale Invariant Feature Transform algorithms, being two of the best of their type, are selected to report their recent algorithmic derivatives.", "title": "" }, { "docid": "9586a8e41ca84dbb71c3764c88753efb", "text": "Indoor wireless systems often operate under non-line-of-sight (NLOS) conditions that can cause ranging errors for location-based applications. As such, these applications could benefit greatly from NLOS identification and mitigation techniques. These techniques have been primarily investigated for ultra-wide band (UWB) systems, but little attention has been paid to WiFi systems, which are far more prevalent in practice. In this study, we address the NLOS identification and mitigation problems using multiple received signal strength (RSS) measurements from WiFi signals. Key to our approach is exploiting several statistical features of the RSS time series, which are shown to be particularly effective. We develop and compare two algorithms based on machine learning and a third based on hypothesis testing to separate LOS/NLOS measurements. Extensive experiments in various indoor environments show that our techniques can distinguish between LOS/NLOS conditions with an accuracy of around 95%. Furthermore, the presented techniques improve distance estimation accuracy by 60% as compared to state-of-the-art NLOS mitigation techniques. Finally, improvements in distance estimation accuracy of 50% are achieved even without environment-specific training data, demonstrating the practicality of our approach to real world implementations.", "title": "" }, { "docid": "e8edd727e923595acc80df364bfc64af", "text": "Context: Architecture-centric software evolution (ACSE) enables changes in system’s structure and behaviour while maintaining a global view of the software to address evolution-centric trade-offs. The existing research and practices for ACSE primarily focus on design-time evolution and runtime adaptations to accommodate changing requirements in existing architectures. Objectives: We aim to identify, taxonomically classify and systematically compare the existing research focused on enabling or enhancing change reuse to support ACSE. Method: We conducted a systematic literature review of 32 qualitatively selected studies and taxonomically classified these studies based on solutions that enable (i) empirical acquisition and (ii) systematic application of architecture evolution reuse knowledge (AERK) to guide ACSE. 
Results: We identified six distinct research themes that support acquisition and application of AERK. We investigated (i) how evolution reuse knowledge is defined, classified and represented in the existing research to support ACSE and (ii) what are the existing methods, techniques and solutions to support empirical acquisition and systematic application of AERK. Conclusions: Change patterns (34% of selected studies) represent a predominant solution, followed by evolution styles (25%) and adaptation strategies and policies (22%) to enable application of reuse knowledge. Empirical methods for acquisition of reuse knowledge represent 19% including pattern discovery, configuration analysis, evolution and maintenance prediction techniques (approximately 6% each). A lack of focus on empirical acquisition of reuse knowledge suggests the need of solutions with architecture change mining as a complementary and integrated phase for architecture change execution. Copyright © 2014 John Wiley & Sons, Ltd. Received 13 May 2013; Revised 23 September 2013; Accepted 27 December 2013", "title": "" }, { "docid": "f001f2933b3c96fe6954e086488776e0", "text": "Pd coated copper (PCC) wire and Au-Pd coated copper (APC) wire have been widely used in the field of LSI device. Recently, higher bond reliability at high temperature becomes increasingly important for on-vehicle devices. However, it has been reported that conventional PCC wire caused a bond failure at elevated temperatures. On the other hand, new-APC wire had higher reliability at higher temperature than conventional APC wire. New-APC wire has higher concentration of added element than conventional APC wire. In this paper, failure mechanism of conventional APC wire and improved mechanism of new-APC wire at high temperature were shown. New-APC wire is suitable for on-vehicle devices.", "title": "" }, { "docid": "b18c8b7472ba03a260d63b886a6dc11d", "text": "In this paper, we propose a novel technique for automatic table detection in document images. Lines and tables are among the most frequent graphic, non-textual entities in documents and their detection is directly related to the OCR performance as well as to the document layout description. We propose a workflow for table detection that comprises three distinct steps: (i) image pre-processing; (ii) horizontal and vertical line detection and (iii) table detection. The efficiency of the proposed method is demonstrated by using a performance evaluation scheme which considers a great variety of documents such as forms, newspapers/magazines, scientific journals, tickets/bank cheques, certificates and handwritten documents.", "title": "" }, { "docid": "8b0ac11c05601e93557fe0d5097b4529", "text": "We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker's reservation wage - the key parameter in our labor supply model. We tested our model by presenting experimental subjects with real-effort work scenarios that varied in the offered payment and difficulty. As predicted, subjects worked less when the pay was lower. However, they did not work less when the task was more time-consuming. Interestingly, at least some subjects appear to be \"target earners,\" contrary to the assumptions of the rational model. The strongest evidence for target earning is an observed preference for earning total amounts evenly divisible by 5, presumably because these amounts make good targets. 
Despite its predictive failures, we calibrate our model with data pooled from both experiments. We find that the reservation wages of our sample are approximately log normally distributed, with a median wage of $1.38/hour. We discuss how to use our calibrated model in applications.", "title": "" } ]
scidocsrr
572934ddf7fa587e3b790dacc9967b35
Really Uncertain Business Cycles
[ { "docid": "f1744cf87ee2321c5132d6ee30377413", "text": "How do movements in the distribution of income and wealth affect the macroeconomy? We analyze this question using a calibrated version of the stochastic growth model with partially uninsurable idiosyncratic risk and movements in aggregate productivity. Our main finding is that, in the stationary stochastic equilibrium, the behavior of the macroeconomic aggregates can be almost perfectly described using only the mean of the wealth distribution. This result is robust to substantial changes in both parameter values and model specification. Our benchmark model, whose only difference from the representative-agent framework is the existence of uninsurable idiosyncratic risk, displays far less cross-sectional dispersion", "title": "" } ]
[ { "docid": "8f9d5cd416ac038a4cbdf64737039053", "text": "This paper proposes a method to extract the feature points from faces automatically. It provides a feasible way to locate the positions of two eyeballs, near and far corners of eyes, midpoint of nostrils and mouth corners from face image. This approach would help to extract useful features on human face automatically and improve the accuracy of face recognition. The experiments show that the method presented in this paper could locate feature points from faces exactly and quickly.", "title": "" }, { "docid": "e6b27bb9f2b74791af5e74c16c7c47da", "text": "Due to the storage and retrieval efficiency, hashing has been widely deployed to approximate nearest neighbor search for large-scale multimedia retrieval. Supervised hashing, which improves the quality of hash coding by exploiting the semantic similarity on data pairs, has received increasing attention recently. For most existing supervised hashing methods for image retrieval, an image is first represented as a vector of hand-crafted or machine-learned features, followed by another separate quantization step that generates binary codes. However, suboptimal hash coding may be produced, because the quantization error is not statistically minimized and the feature representation is not optimally compatible with the binary coding. In this paper, we propose a novel Deep Hashing Network (DHN) architecture for supervised hashing, in which we jointly learn good image representation tailored to hash coding and formally control the quantization error. The DHN model constitutes four key components: (1) a subnetwork with multiple convolution-pooling layers to capture image representations; (2) a fully-connected hashing layer to generate compact binary hash codes; (3) a pairwise crossentropy loss layer for similarity-preserving learning; and (4) a pairwise quantization loss for controlling hashing quality. Extensive experiments on standard image retrieval datasets show the proposed DHN model yields substantial boosts over latest state-of-the-art hashing methods.", "title": "" }, { "docid": "8c3ecd27a695fef2d009bbf627820a0d", "text": "This paper presents a novel attention mechanism to improve stereo-vision based object recognition systems in terms of recognition performance and computational efficiency at the same time. We utilize the Stixel World, a compact medium-level 3D representation of the local environment, as an early focus-of-attention stage for subsequent system modules. In particular, the search space of computationally expensive pattern classifiers is significantly narrowed down. We explicitly couple the 3D Stixel representation with prior knowledge about the object class of interest, i.e. 3D geometry and symmetry, to precisely focus processing on well-defined local regions that are consistent with the environment model. Experiments are conducted on large real-world datasets captured from a moving vehicle in urban traffic. In case of vehicle recognition as an experimental testbed, we demonstrate that the proposed Stixel-based attention mechanism significantly reduces false positive rates at constant sensitivity levels by up to a factor of 8 over state-of-the-art. At the same time, computational costs are reduced by more than an order of magnitude.", "title": "" }, { "docid": "080d4d757747be3a28923f9f7eb7e82e", "text": "Online social networking offers a new, easy and inexpensive way to maintain already existing relationships and present oneself to others. 
However, the increasing number of actions in online services also gives a rise to privacy concerns and risks. In an attempt to understand the factors, especially privacy awareness, that influence users to disclose or protect information in online environment, we view privacy behavior from the perspectives of privacy protection and information disclosing. In our empirical study, we present results from a survey of 210 users of Facebook. Our results indicate, that most of our respondents, who seem to be active users of Facebook, disclose a considerable amount of private information. Contrary to their own belief, they are not too well aware of the visibility of their information to people they do not necessarily know. Furthermore, Facebook’s privacy policy and the terms of use were largely not known or understood by our respondents.", "title": "" }, { "docid": "db252efe7bde6cc0d58e337f8ad04271", "text": "Social skills training is a well-established method to decrease human anxiety and discomfort in social interaction, and acquire social skills. In this paper, we attempt to automate the process of social skills training by developing a dialogue system named \"automated social skills trainer,\" which provides social skills training through human-computer interaction. The system includes a virtual avatar that recognizes user speech and language information and gives feedback to users to improve their social skills. Its design is based on conventional social skills training performed by human participants, including defining target skills, modeling, role-play, feedback, reinforcement, and homework. An experimental evaluation measuring the relationship between social skill and speech and language features shows that these features have a relationship with autistic traits. Additional experiments measuring the effect of performing social skills training with the proposed application show that most participants improve their skill by using the system for 50 minutes.", "title": "" }, { "docid": "36f6f21ff8619ef89900cc0de7ff1a1d", "text": "Human being is the most intelligent animal in this world. Intuitively, optimization algorithm inspired by human being creative problem solving process should be superior to the optimization algorithms inspired by collective behavior of insects like ants, bee, etc. In this paper, we introduce a novel brain storm optimization algorithm, which was inspired by the human brainstorming process. Two benchmark functions were tested to validate the effectiveness and usefulness of the proposed algorithm.", "title": "" }, { "docid": "8cb57ce5513db12aee216d569d1d1ed4", "text": "From the University of California School of Medicine, San Francisco. Address reprint requests to Dr. Lue at the Department of Urology, U-575, University of California, San Francisco, CA 94143-0738, or at tlue@urol.ucsf.edu. ©2000, Massachusetts Medical Society. ERECTILE dysfunction is defined as the inability to achieve and maintain an erection sufficient to permit satisfactory sexual intercourse. 1", "title": "" }, { "docid": "717e5a5b6026d42e7379d8e2c0c7ff45", "text": "In this paper, a color image segmentation approach based on homogram thresholding and region merging is presented. The homogram considers both the occurrence of the gray levels and the neighboring homogeneity value among pixels. Therefore, it employs both the local and global information. Fuzzy entropy is utilized as a tool to perform homogram analysis for finding all major homogeneous regions at the first stage. 
Then region merging process is carried out based on color similarity among these regions to avoid oversegmentation. The proposed homogram-based approach (HOB) is compared with the histogram-based approach (HIB). The experimental results demonstrate that the HOB can find homogeneous regions more effectively than HIB does, and can solve the problem of discriminating shading in color images to some extent.", "title": "" },
Given the increasing interest of journalists in broadening and democratizing news by incorporating social media sources, we have developed TweetGathering, a prototype tool that provides curated and contextualized access to news stories on Twitter. This tool was built with the aim of assisting journalists both with gathering and with researching news stories as users comment on them. Five journalism professionals who tested the tool found helpful characteristics that could assist them with gathering additional facts on breaking news, as well as facilitating discovery of potential information sources such as witnesses in the geographical locations of news.", "title": "" }, { "docid": "3534e4321560c826057e02c52d4915dd", "text": "While hexahedral mesh elements are preferred by a variety of simulation techniques, constructing quality all-hex meshes of general shapes remains a challenge. An attractive hex-meshing approach, often referred to as submapping, uses a low distortion mapping between the input model and a PolyCube (a solid formed from a union of cubes), to transfer a regular hex grid from the PolyCube to the input model. Unfortunately, the construction of suitable PolyCubes and corresponding volumetric maps for arbitrary shapes remains an open problem. Our work introduces a new method for computing low-distortion volumetric PolyCube deformations of general shapes and for subsequent all-hex remeshing. For a given input model, our method simultaneously generates an appropriate PolyCube structure and mapping between the input model and the PolyCube. From these we automatically generate good quality all-hex meshes of complex natural and man-made shapes.", "title": "" }, { "docid": "ec6159b0256c6df7fdbfd5bb6cfd9256", "text": "This paper proposes a new hybrid permanent magnet (PM)-assisted synchronous reluctance motor, where two types of PM materials of rare-earth PMs and ferrite PMs are employed in its rotor. To reduce the usage of rare-earth materials, a hierarchical design method is adopted in its rotor design, in which the design is divided into two levels: saliency ratio design level and PM usage design level. In saliency ratio design level, proper flux barrier dimensions are confirmed based on the principle of reluctance torque maximization. In PM usage design level, the optimal magnet ratio is first determined according to the relationship of main flux and leakage flux. Then, a tradeoff design is conducted to seek the superior combination of low torque ripple, high efficiency, and high power factor. Finally, for the purpose of validation, the electromagnetic performances of the new designed motor are investigated, including torque characteristics and antidemagnetization capabilities of ferrite.", "title": "" }, { "docid": "1c915d0ffe515aa2a7c52300d86e90ba", "text": "This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. 
Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.", "title": "" }, { "docid": "7c8776729f9e734133d5d09483080435", "text": "We consider the problem of mitigating a highly varying wireless channel between a transmitting ground node and receivers on a small, low-altitude unmanned aerial vehicle (UAV) in a 802.11 wireless mesh network. One approach is to use multiple transmitter and receiver nodes that exploit the channel's spatial/temporal diversity and that cooperate to improve overall packet reception. We present a series of measurement results from a real-world testbed that characterize the resulting wireless channel. We show that the correlation between receiver nodes on the airplane is poor at small time scales so receiver diversity can be exploited. Our measurements suggest that using several receiver nodes simultaneously can boost packet delivery rates substantially. Lastly, we show that similar results apply to transmitter selection diversity as well.", "title": "" }, { "docid": "36bf7c66b222006e1c286450595be824", "text": "Recent terrorist attacks carried out on behalf of ISIS on American and European soil by lone wolf attackers or sleeper cells remind us of the importance of understanding the dynamics of radicalization mediated by social media communication channels. In this paper, we shed light on the social media activity of a group of twenty-five thousand users whose association with ISIS online radical propaganda has been manually verified. By using a computational tool known as dynamic activity-connectivity maps, based on network and temporal activity patterns, we investigate the dynamics of social influence within ISIS supporters. We finally quantify the effectiveness of ISIS propaganda by determining the adoption of extremist content in the general population and draw a parallel between radical propaganda and epidemics spreading, highlighting that information broadcasters and influential ISIS supporters generate highly-infectious cascades of information contagion. Our findings will help generate effective countermeasures to combat the group and other forms of online extremism.", "title": "" }, { "docid": "0ea07af19fc199f6a9909bd7df0576a1", "text": "Detection of overlapping communities in complex networks has motivated recent research in the relevant fields. Aiming this problem, we propose a Markov dynamics based algorithm, called UEOC, which means, “unfold and extract overlapping communities”. In UEOC, when identifying each natural community that overlaps, a Markov random walk method combined with a constraint strategy, which is based on the corresponding annealed network (degree conserving random network), is performed to unfold the community. Then, a cutoff criterion with the aid of a local community function, called conductance, which can be thought of as the ratio between the number of edges inside the community and those leaving it, is presented to extract this emerged community from the entire network. The UEOC algorithm depends on only one parameter whose value can be easily set, and it requires no prior knowledge on the hidden community structures. The proposed UEOC has been evaluated both on synthetic benchmarks and on some real-world networks, and was compared with a set of competing algorithms. 
Experimental result has shown that UEOC is highly effective and efficient for discovering overlapping communities.", "title": "" }, { "docid": "80576ea7e8c52465cec9094990bf7243", "text": "Nowadays, classifying sentiment from social media has been a strategic thing since people can express their feeling about something in an easy way and short text. Mining opinion from social media has become important because people are usually honest with their feeling on something. In our research, we tried to identify the problems of classifying sentiment from Indonesian social media. We identified that people tend to express their opinion in text while the emoticon is rarely used and sometimes misleading. We also identified that the Indonesian social media opinion can be classified not only to positive, negative, neutral and question but also to a special mix case between negative and question type. Basically there are two levels of problem: word level and sentence level. Word level problems include the usage of punctuation mark, the number usage to replace letter, misspelled word and the usage of nonstandard abbreviation. In sentence level, the problem is related with the sentiment type such as mentioned before. In our research, we built a sentiment classification system which includes several steps such as text preprocessing, feature extraction, and classification. The text preprocessing aims to transform the informal text into formal text. The word formalization method in that we use is the deletion of punctuation mark, the tokenization, conversion of number to letter, the reduction of repetition letter, and using corpus with Levensthein to formalize abbreviation. The sentence formalization method that we use is negation handling, sentiment relative, and affixes handling. Rule-based, SVM and Maximum Entropy are used as the classification algorithms with features of count of positive, negative, and question word in sentence and bigram. From our experimental result, the best classification method is SVM that yields 83.5% accuracy.", "title": "" }, { "docid": "d35dc7e653dbe5dca7e1238ea8ced0a5", "text": "Temperature-aware computing is becoming more important in design of computer systems as power densities are increasing and the implications of high operating temperatures result in higher failure rates of components and increased demand for cooling capability. Computer architects and system software designers need to understand the thermal consequences of their proposals, and develop techniques to lower operating temperatures to reduce both transient and permanent component failures. Recognizing the need for thermal modeling tools to support those researches, there has been work on modeling temperatures of processors at the micro-architectural level which can be easily understood and employed by computer architects for processor designs. However, there is a dearth of such tools in the academic/research community for undertaking architectural/systems studies beyond a processor - a server box, rack or even a machine room. In this paper we presents a detailed 3-dimensional computational fluid dynamics based thermal modeling tool, called ThermoStat, for rack-mounted server systems. We conduct several experiments with this tool to show how different load conditions affect the thermal profile, and also illustrate how this tool can help design dynamic thermal management techniques. 
We propose reactive and proactive thermal management for rack-mounted servers and isothermal workload distribution across the rack.", "title": "" } ]
scidocsrr
55c57dfb6f70f798bc2bff0c025f17ed
Interference Reduction in Multi-Cell Massive MIMO Systems I: Large-Scale Fading Precoding and Decoding
[ { "docid": "d14a60ee9a51e52ec00cf25729193568", "text": "Time-Division Duplexing (TDD) allows to estimate the downlink channels for an arbitrarily large number of base station antennas from a finite number of orthogonal uplink pilot signals, by exploiting channel reciprocity. Based on this observation, a recently proposed \"Massive MIMO\" scheme was shown to achieve unprecedented spectral efficiency in realistic conditions of distance-dependent pathloss and channel coherence time and bandwidth. The main focus and contribution of this paper is an improved Network-MIMO TDD architecture achieving spectral efficiencies comparable with \"Massive MIMO\", with one order of magnitude fewer antennas per active user per cell (roughly, from 500 to 50 antennas). The proposed architecture is based on a family of Network-MIMO schemes defined by small clusters of cooperating base stations, zero-forcing multiuser MIMO precoding with suitable inter-cluster interference mitigation constraints, uplink pilot signals allocation and frequency reuse across cells. The key idea consists of partitioning the users into equivalence classes, optimizing the Network-MIMO scheme for each equivalence class, and letting a scheduler allocate the channel time-frequency dimensions to the different classes in order to maximize a suitable network utility function that captures a desired notion of fairness. This results in a mixed-mode Network-MIMO architecture, where different schemes, each of which is optimized for the served user equivalence class, are multiplexed in time-frequency. In order to carry out the performance analysis and the optimization of the proposed architecture in a systematic and computationally efficient way, we consider the large-system regime where the number of users, the number of antennas, and the channel coherence block length go to infinity with fixed ratios.", "title": "" } ]
[ { "docid": "22c9f931198f054e7994e7f1db89a194", "text": "Learning a good distance metric plays a vital role in many multimedia retrieval and data mining tasks. For example, a typical content-based image retrieval (CBIR) system often relies on an effective distance metric to measure similarity between any two images. Conventional CBIR systems simply adopting Euclidean distance metric often fail to return satisfactory results mainly due to the well-known semantic gap challenge. In this article, we present a novel framework of Semi-Supervised Distance Metric Learning for learning effective distance metrics by exploring the historical relevance feedback log data of a CBIR system and utilizing unlabeled data when log data are limited and noisy. We formally formulate the learning problem into a convex optimization task and then present a new technique, named as “Laplacian Regularized Metric Learning” (LRML). Two efficient algorithms are then proposed to solve the LRML task. Further, we apply the proposed technique to two applications. One direct application is for Collaborative Image Retrieval (CIR), which aims to explore the CBIR log data for improving the retrieval performance of CBIR systems. The other application is for Collaborative Image Clustering (CIC), which aims to explore the CBIR log data for enhancing the clustering performance of image pattern clustering tasks. We conduct extensive evaluation to compare the proposed LRML method with a number of competing methods, including 2 standard metrics, 3 unsupervised metrics, and 4 supervised metrics with side information. Encouraging results validate the effectiveness of the proposed technique.", "title": "" }, { "docid": "877e7654a4e42ab270a96e87d32164fd", "text": "The presence of gender stereotypes in many aspects of society is a well-known phenomenon. In this paper, we focus on studying such stereotypes and bias in Hindi movie industry (Bollywood). We analyze movie plots and posters for all movies released since 1970. The gender bias is detected by semantic modeling of plots at inter-sentence and intrasentence level. Different features like occupation, introduction of cast in text, associated actions and descriptions are captured to show the pervasiveness of gender bias and stereotype in movies. We derive a semantic graph and compute centrality of each character and observe similar bias there. We also show that such bias is not applicable for movie posters where females get equal importance even though their character has little or no impact on the movie plot. Furthermore, we explore the movie trailers to estimate on-screen time for males and females and also study the portrayal of emotions by gender in them. The silver lining is that our system was able to identify 30 movies over last 3 years where such stereotypes were broken.", "title": "" }, { "docid": "1af7a41e5cac72ed9245b435c463b366", "text": "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. 
We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia.\n Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages.\n Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods.", "title": "" }, { "docid": "a72c9cd8bdf4aec0d265dd4a5fff2826", "text": "We propose a robust quantization-based image watermarking scheme, called the gradient direction watermarking (GDWM), based on the uniform quantization of the direction of gradient vectors. In GDWM, the watermark bits are embedded by quantizing the angles of significant gradient vectors at multiple wavelet scales. The proposed scheme has the following advantages: 1) increased invisibility of the embedded watermark because the watermark is embedded in significant gradient vectors, 2) robustness to amplitude scaling attacks because the watermark is embedded in the angles of the gradient vectors, and 3) increased watermarking capacity as the scheme uses multiple-scale embedding. The gradient vector at a pixel is expressed in terms of the discrete wavelet transform (DWT) coefficients. To quantize the gradient direction, the DWT coefficients are modified based on the derived relationship between the changes in the coefficients and the change in the gradient direction. Experimental results show that the proposed GDWM outperforms other watermarking methods and is robust to a wide range of attacks, e.g., Gaussian filtering, amplitude scaling, median filtering, sharpening, JPEG compression, Gaussian noise, salt & pepper noise, and scaling.", "title": "" }, { "docid": "2ec6cb6ae25384cacc7bd8213002a58b", "text": "Food packaging has evolved from simply a container to hold food to something today that can play an active role in food quality. Many packages are still simply containers, but they have properties that have been developed to protect the food. These include barriers to oxygen, moisture, and flavors. Active packaging, or that which plays an active role in food quality, includes some microwave packaging as well as packaging that has absorbers built in to remove oxygen from the atmosphere surrounding the product or to provide antimicrobials to the surface of the food. Packaging has allowed access to many foods year-round that otherwise could not be preserved. It is interesting to note that some packages have actually allowed the creation of new categories in the supermarket. Examples include microwave popcorn and fresh-cut produce, which owe their existence to the unique packaging that has been developed.", "title": "" }, { "docid": "0c2a2cb741d1d22c5ef3eabd0b525d8d", "text": "Part-of-speech (POS) tagging is a process of assigning the words in a text corresponding to a particular part of speech. A fundamental version of POS tagging is the identification of words as nouns, verbs, adjectives etc. For processing natural languages, Part of Speech tagging is a prominent tool. 
It is one of the simplest as well as most constant and statistical model for many NLP applications. POS Tagging is an initial stage of linguistics, text analysis like information retrieval, machine translator, text to speech synthesis, information extraction etc. In POS Tagging we assign a Part of Speech tag to each word in a sentence and literature. Various approaches have been proposed to implement POS taggers. In this paper we present a Marathi part of speech tagger. It is morphologically rich language. Marathi is spoken by the native people of Maharashtra. The general approach used for development of tagger is statistical using Unigram, Bigram, Trigram and HMM Methods. It presents a clear idea about all the algorithms with suitable examples. It also introduces a tag set for Marathi which can be used for tagging Marathi text. In this paper we have shown the development of the tagger as well as compared to check the accuracy of taggers output. The three Marathi POS taggers viz. Unigram, Bigram, Trigram and HMM gives the accuracy of 77.38%, 90.30%, 91.46% and 93.82% respectively.", "title": "" }, { "docid": "4284e9bbe3bf4c50f9e37455f1118e6b", "text": "A longevity revolution (Butler, 2008) is occurring across the globe. Because of factors ranging from the reduction of early-age mortality to an increase in life expectancy at later ages, most of the world’s population is now living longer than preceding generations (Bengtson, 2014). There are currently more than 44 million older adults—typically defined as persons 65 years and older—living in the United States, and this number is expected to increase to 98 million by 2060 (Administration on Aging, 2016). Although most older adults report higher levels of life satisfaction than do younger or middle-aged adults (George, 2010), between 5.6 and 8 million older Americans have a diagnosable mental health or substance use disorder (Bartels & Naslund, 2013). Furthermore, because of the rapid growth of the older adult population, this figure is expected to nearly double by 2030 (Bartels & Naslund, 2013). Mental health care is effective for older adults, and evidence-based treatments exist to address a broad range of issues, including anxiety disorders, depression, sleep disturbances, substance abuse, and some symptoms of dementia (Myers & Harper, 2004). Counseling interventions may also be beneficial for nonclinical life transitions, such as coping with loss, adjusting to retirement and a reduced income, and becoming a grandparent (Myers & Harper, 2004). Yet, older adults are underserved when it comes to mental", "title": "" }, { "docid": "987f221f99b1638bb5bf0542dbc98c3f", "text": "Pain, whether caused by physical injury or social rejection, is an inevitable part of life. These two types of pain-physical and social-may rely on some of the same behavioral and neural mechanisms that register pain-related affect. To the extent that these pain processes overlap, acetaminophen, a physical pain suppressant that acts through central (rather than peripheral) neural mechanisms, may also reduce behavioral and neural responses to social rejection. In two experiments, participants took acetaminophen or placebo daily for 3 weeks. Doses of acetaminophen reduced reports of social pain on a daily basis (Experiment 1). 
We used functional magnetic resonance imaging to measure participants' brain activity (Experiment 2), and found that acetaminophen reduced neural responses to social rejection in brain regions previously associated with distress caused by social pain and the affective component of physical pain (dorsal anterior cingulate cortex, anterior insula). Thus, acetaminophen reduces behavioral and neural responses associated with the pain of social rejection, demonstrating substantial overlap between social and physical pain.", "title": "" }, { "docid": "70374e96446dcc65a0f5fa64e439a472", "text": "Electric Vehicles (EVs) are projected as the most sustainable solutions for future transportation. EVs have many advantages over conventional hydrocarbon internal combustion engines including energy efficiency, environmental friendliness, noiselessness and less dependence on fossil fuels. However, there are also many challenges which are mainly related to the battery pack, such as battery cost, driving range, reliability, safety, battery capacity, cycle life, and recharge time. The performance of EVs is greatly dependent on the battery pack. Temperatures of the cells in a battery pack need to be maintained within its optimum operating temperature range in order to achieve maximum performance, safety and reliability under various operating conditions. Poor thermal management will affect the charging and discharging power, cycle life, cell balancing, capacity and fast charging capability of the battery pack. Hence, a thermal management system is needed in order to enhance the performance and to extend the life cycle of the battery pack. In this study, the effects of temperature on the Li-ion battery are investigated. Heat generated by LiFePO4 pouch cell was characterized using an EV accelerating rate calorimeter. Computational fluid dynamic analyses were carried out to investigate the performance of a liquid cooling system for a battery pack. The numerical simulations showed promising results and the design of the battery pack thermal management system was sufficient to ensure that the cells operated within their temperature limits.", "title": "" }, { "docid": "f372bc2ed27f5d4c08087ddc46e5373e", "text": "This work investigates the practice of credit scoring and introduces the use of the clustered support vector machine (CSVM) for credit scorecard development. This recently designed algorithm addresses some of the limitations noted in the literature that is associated with traditional nonlinear support vector machine (SVM) based methods for classification. Specifically, it is well known that as historical credit scoring datasets get large, these nonlinear approaches while highly accurate become computationally expensive. Accordingly, this study compares the CSVM with other nonlinear SVM based techniques and shows that the CSVM can achieve comparable levels of classification performance while remaining relatively cheap computationally. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "73a62915c29942d2fac0570cac7eb3e0", "text": "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. 
Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. In the inference, the networks outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.", "title": "" }, { "docid": "3f679dbd9047040d63da70fc9e977a99", "text": "In this paper we consider videos (e.g. Hollywood movies) and their accompanying natural language descriptions in the form of narrative sentences (e.g. movie scripts without timestamps). We propose a method for temporally aligning the video frames with the sentences using both visual and textual information, which provides automatic timestamps for each narrative sentence. We compute the similarity between both types of information using vectorial descriptors and propose to cast this alignment task as a matching problem that we solve via dynamic programming. Our approach is simple to implement, highly efficient and does not require the presence of frequent dialogues, subtitles, and character face recognition. Experiments on various movies demonstrate that our method can successfully align the movie script sentences with the video frames of movies.", "title": "" }, { "docid": "ce1c2217536fe62ea0f17167415b581c", "text": "Generative Adversarial Networks (GANs) have shown great capacity on image generation, in which a discriminative model guides the training of a generative model to construct images that resemble real images. Recently, GANs have been extended from generating images to generating sequences (e.g., poems, music and codes). Existing GANs on sequence generation mainly focus on general sequences, which are grammar-free. In many real-world applications, however, we need to generate sequences in a formal language with the constraint of its corresponding grammar. For example, to test the performance of a database, one may want to generate a collection of SQL queries, which are not only similar to the queries of real users, but also follow the SQL syntax of the target database. Generating such sequences is highly challenging because both the generator and discriminator of GANs need to consider the structure of the sequences and the given grammar in the formal language. To address these issues, we study the problem of syntax-aware sequence generation with GANs, in which a collection of real sequences and a set of pre-defined grammatical rules are given to both discriminator and generator. We propose a novel GAN framework, namely TreeGAN, to incorporate a given Context-Free Grammar (CFG) into the sequence generation process. In TreeGAN, the generator employs a recurrent neural network (RNN) to construct a parse tree. Each generated parse tree can then be translated to a valid sequence of the given grammar. The discriminator uses a tree-structured RNN to distinguish the generated trees from real trees. We show that TreeGAN can generate sequences for any CFG and its generation fully conforms with the given syntax. Experiments on synthetic and real data sets demonstrated that TreeGAN significantly improves the quality of the sequence generation in context-free languages.", "title": "" }, { "docid": "d771693809e966adc3656f58855fdda0", "text": "A wide variety of crystalline nanowires (NWs) with outstanding mechanical properties have recently emerged. 
Measuring their mechanical properties and understanding their deformation mechanisms are of important relevance to many of their device applications. On the other hand, such crystalline NWs can provide an unprecedented platform for probing mechanics at the nanoscale. While challenging, the field of experimental mechanics of crystalline nanowires has emerged and seen exciting progress in the past decade. This review summarizes recent advances in this field, focusing on major experimental methods using atomic force microscope (AFM) and electron microscopes and key results on mechanics of crystalline nanowires learned from such experimental studies. Advances in several selected topics are discussed including elasticity, fracture, plasticity, and anelasticity. Finally, this review surveys some applications of crystalline nanowires such as flexible and stretchable electronics, nanocomposites, nanoelectromechanical systems (NEMS), energy harvesting and storage, and strain engineering, where mechanics plays a key role. [DOI: 10.1115/1.4035511]", "title": "" }, { "docid": "368670b67f79d404d10b9226b860eeb5", "text": "Parkinson disease (PD) is a complex neurodegenerative disorder with both motor and nonmotor symptoms owing to a spreading process of neuronal loss in the brain. At present, only symptomatic treatment exists and nothing can be done to halt the degenerative process, as its cause remains unclear. Risk factors such as aging, genetic susceptibility, and environmental factors all play a role in the onset of the pathogenic process but how these interlink to cause neuronal loss is not known. There have been major advances in the understanding of mechanisms that contribute to nigral dopaminergic cell death, including mitochondrial dysfunction, oxidative stress, altered protein handling, and inflammation. However, it is not known if the same processes are responsible for neuronal loss in nondopaminergic brain regions. Many of the known mechanisms of cell death are mirrored in toxin-based models of PD, but neuronal loss is rapid and not progressive and limited to dopaminergic cells, and drugs that protect against toxin-induced cell death have not translated into neuroprotective therapies in humans. Gene mutations identified in rare familial forms of PD encode proteins whose functions overlap widely with the known molecular pathways in sporadic disease and these have again expanded our knowledge of the neurodegenerative process but again have so far failed to yield effective models of sporadic disease when translated into animals. We seem to be missing some key parts of the jigsaw, the trigger event starting many years earlier in the disease process, and what we are looking at now is merely part of a downstream process that is the end stage of neuronal death.", "title": "" }, { "docid": "c8f39a710ca3362a4d892879f371b318", "text": "While sentiment and emotion analysis has received a considerable amount of research attention, the notion of understanding and detecting the intensity of emotions is relatively less explored. This paper describes a system developed for predicting emotion intensity in tweets. Given a Twitter message, CrystalFeel uses features derived from parts-of-speech, ngrams, word embedding, and multiple affective lexicons including Opinion Lexicon, SentiStrength, AFFIN, NRC Emotion & Hash Emotion, and our in-house developed EI Lexicons to predict the degree of the intensity associated with fear, anger, sadness, and joy in the tweet. 
We found that including the affective lexicons-based features allowed the system to obtain strong prediction performance, while revealing interesting emotion word-level and message-level associations. On gold test data, CrystalFeel obtained Pearson correlations of .717 on average emotion intensity and of .816 on sentiment intensity.", "title": "" }, { "docid": "25c2bab5bd1d541629c23bb6a929f968", "text": "A novel transition from coaxial cable to microstrip is presented in which the coax connector is perpendicular to the substrate of the printed circuit. Such a right-angle transition has practical advantages over more common end-launch geometries in some situations. The design is compact, easy to fabricate, and provides repeatable performance of better than 14 dB return loss and 0.4 dB insertion loss from DC to 40 GHz.", "title": "" }, { "docid": "b88ceafe9998671820291773be77cabc", "text": "The aim of this study was to propose a set of network methods to measure the specific properties of a team. These metrics were organised at macro-analysis levels. The interactions between teammates were collected and then processed following the analysis levels herein announced. Overall, 577 offensive plays were analysed from five matches. The network density showed an ambiguous relationship among the team, mainly during the 2nd half. The mean values of density for all matches were 0.48 in the 1st half, 0.32 in the 2nd half and 0.34 for the whole match. The heterogeneity coefficient for the overall matches rounded to 0.47 and it was also observed that this increased in all matches in the 2nd half. The centralisation values showed that there was no 'star topology'. The results suggest that each node (i.e., each player) had nearly the same connectivity, mainly in the 1st half. Nevertheless, the values increased in the 2nd half, showing a decreasing participation of all players at the same level. Briefly, these metrics showed that it is possible to identify how players connect with each other and the kind and strength of the connections between them. In summary, it may be concluded that network metrics can be a powerful tool to help coaches understand team's specific properties and support decision-making to improve the sports training process based on match analysis.", "title": "" }, { "docid": "cc08e377d924f86fb6ceace022ad8db2", "text": "Homomorphic cryptography has been one of the most interesting topics of mathematics and computer security since Gentry presented the first construction of a fully homomorphic encryption (FHE) scheme in 2009. Since then, a number of different schemes have been found, that follow the approach of bootstrapping a fully homomorphic scheme from a somewhat homomorphic foundation. All existing implementations of these systems clearly proved, that fully homomorphic encryption is not yet practical, due to significant performance limitations. However, there are many applications in the area of secure methods for cloud computing, distributed computing and delegation of computation in general, that can be implemented with homomorphic encryption schemes of limited depth. We discuss a simple algebraically homomorphic scheme over the integers that is based on the factorization of an approximate semiprime integer. We analyze the properties of the scheme and provide a couple of known protocols that can be implemented with it. 
We also provide a detailed discussion on searching with encrypted search terms and present implementations and performance figures for the solutions discussed in this paper.", "title": "" } ]
scidocsrr
6cb53c711ed03317425017894be7ea47
Industrie 4.0: Enabling technologies
[ { "docid": "292fb39474de4ecaac282229fe9f050e", "text": "The widespread proliferation of handheld devices enables mobile carriers to be connected at anytime and anywhere. Meanwhile, the mobility patterns of mobile devices strongly depend on the users' movements, which are closely related to their social relationships and behaviors. Consequently, today's mobile networks are becoming increasingly human centric. This leads to the emergence of a new field which we call socially aware networking (SAN). One of the major features of SAN is that social awareness becomes indispensable information for the design of networking solutions. This emerging paradigm is applicable to various types of networks (e.g., opportunistic networks, mobile social networks, delay-tolerant networks, ad hoc networks, etc.) where the users have social relationships and interactions. By exploiting social properties of nodes, SAN can provide better networking support to innovative applications and services. In addition, it facilitates the convergence of human society and cyber-physical systems. In this paper, for the first time, to the best of our knowledge, we present a survey of this emerging field. Basic concepts of SAN are introduced. We intend to generalize the widely used social properties in this regard. The state-of-the-art research on SAN is reviewed with focus on three aspects: routing and forwarding, incentive mechanisms, and data dissemination. Some important open issues with respect to mobile social sensing and learning, privacy, node selfishness, and scalability are discussed.", "title": "" } ]
[ { "docid": "7d1348ad0dbd8f33373e556009d4f83a", "text": "Laryngeal neoplasms represent 2% of all human cancers. They befall mainly the male sex, especially between 50 and 70 years of age, but exceptionally may occur in infancy or extreme old age. Their occurrence has increased considerably inclusively due to progressive population again. The present work aims at establishing a relation between this infirmity and its prognosis in patients submitted to the treatment recommended by Departament of Otolaryngology and Head Neck Surgery of the School of Medicine of São José do Rio Preto. To this effect, by means of karyometric optical microscopy, cell nuclei in the glottic region of 20 individuals, divided into groups according to their tumor stage and time of survival, were evaluated. Following comparation with a control group and statistical analsis, it became possible to verify that the lesser diameter of nuclei is of prognostic value for initial tumors in this region.", "title": "" }, { "docid": "c61c5831c282c4db3308345aace744d7", "text": "The present study examined the associations among participant demographics, personality factors, love dimensions, and relationship length. In total, 16,030 participants completed an internet survey assessing Big Five personality factors, Sternberg's three love dimensions (intimacy, passion, and commitment), and the length of time that they had been involved in a relationship. Results of structural equation modeling (SEM) showed that participant age was negatively associated with passion and positively associated with intimacy and commitment. In addition, the Big Five factor of Agreeableness was positively associated with all three love dimensions, whereas Conscientiousness was positively associated with intimacy and commitment. Finally, passion was negatively associated with relationship length, whereas commitment was positively correlated with relationship length. SEM results further showed that there were minor differences in these associations for women and men. Given the large sample size, our results reflect stable associations between personality factors and love dimensions. The present results may have important implications for relationship and marital counseling. Limitations of this study and further implications are discussed.", "title": "" }, { "docid": "121fc3a009e8ce2938f822ba437bdaa3", "text": "Due to an increased awareness and significant environmental pressures from various stakeholders, companies have begun to realize the significance of incorporating green practices into their daily activities. This paper proposes a framework using Fuzzy TOPSIS to select green suppliers for a Brazilian electronics company; our framework is built on the criteria of green supply chain management (GSCM) practices. An empirical analysis is made, and the data are collected from a set of 12 available suppliers. We use a fuzzy TOPSIS approach to rank the suppliers, and the results of the proposed framework are compared with the ranks obtained by both the geometric mean and the graded mean methods of fuzzy TOPSIS methodology. Then a Spearman rank correlation coefficient is used to find the statistical difference between the ranks obtained by the three methods. Finally, a sensitivity analysis has been performed to examine the influence of the preferences given by the decision makers for the chosen GSCM practices on the selection of green suppliers. 
Results indicate that the four dominant criteria are Commitment of senior management to GSCM; Product designs that reduce, reuse, recycle, or reclaim materials, components, or energy; Compliance with legal environmental requirements and auditing programs; and Product designs that avoid or reduce toxic or hazardous material use. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b11df93138d95bdfb3d50b013d4ecccc", "text": "LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks. This becomes even more complex when the data is incomplete (occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising of ontology and semantic rules aiming at automatic feature recognition from point clouds in support of 3D modeling. First, several modules for ontology are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. Then, a knowledge base is proposed that contains different concepts, their properties and their relations, together with constraints and semantic rules. Then, instances and their specific relations form an urban scene and are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process is carried out to extract semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach to recognize different semantic features of buildings from LiDAR point clouds.", "title": "" }, { "docid": "8142bb9e734574f251fa548a817f7f52", "text": "The chain of delay elements creating delay lines are the basic building blocks of delay locked loops (DLLs) applied in clock distribution network in many VLSI circuits and systems. In the paper Current Controlled delay line (CCDL) elements with Duty Cycle Correction (DCC) has been described and investigated. The architecture of these elements is based on Switched-Current Mirror Inverter (SCMI) and CMOS standard or Schmitt type inverters. The primary characteristics of the described CCDL element have been compared with characteristics of two most popular ones: current starved, and shunt capacitor delay elements. The simulation results with real foundry parameters models in 180 nm, 1.8 V CMOS technology from UMC are also included. Simulations have been done using BSIM3V3 device models for Spectre from Cadence Design Systems.", "title": "" }, { "docid": "11afe3e3e94ca2ec411f38bf1b0b2e82", "text": "The requirements engineering program at Siemens Corporate Research has been involved with process improvement, training and project execution across many of the Siemens operating companies. We have been able to observe and assist with process improvement in mainly global software development efforts. Other researchers have reported extensively on various aspects of distributed requirements engineering, but issues specific to organizational structure have not been well categorized. Our experience has been that organizational and other management issues can overshadow technical problems caused by globalization. 
This paper describes some of the different organizational structures we have encountered, the problems introduced into requirements engineering processes by these structures, and techniques that were effective in mitigating some of the negative effects of global software development.", "title": "" }, { "docid": "d7d1da1632553a0ac5c0961c8cf9b5ac", "text": "In this paper a monitoring system for production well based on WSN is designed, where the sensors can be used as the downhole permanent sensor to measure temperature and pressure analog signals. The analog signals are modulated digital signals by data acquisition system. The digital signals are transmitted to database server of monitoring center. Meanwhile the data can be browsed on internet or by mobile telephone, and the consumer receive alarm message when the data are overflow. The system offered manager and technician credible gist to make decision timely.", "title": "" }, { "docid": "c8ef46debc31d9d7013169cdf1403542", "text": "BACKGROUND\nThis paper reports the results of a pilot randomized controlled trial comparing the delivery modality (mobile phone/tablet or fixed computer) of a cognitive behavioural therapy intervention for the treatment of depression. The aim was to establish whether a previously validated computerized program (The Sadness Program) remained efficacious when delivered via a mobile application.\n\n\nMETHOD\n35 participants were recruited with Major Depression (80% female) and randomly allocated to access the program using a mobile app (on either a mobile phone or iPad) or a computer. Participants completed 6 lessons, weekly homework assignments, and received weekly email contact from a clinical psychologist or psychiatrist until completion of lesson 2. After lesson 2 email contact was only provided in response to participant request, or in response to a deterioration in psychological distress scores. The primary outcome measure was the Patient Health Questionnaire 9 (PHQ-9). Of the 35 participants recruited, 68.6% completed 6 lessons and 65.7% completed the 3-months follow up. Attrition was handled using mixed-model repeated-measures ANOVA.\n\n\nRESULTS\nBoth the Mobile and Computer Groups were associated with statistically significantly benefits in the PHQ-9 at post-test. At 3 months follow up, the reduction seen for both groups remained significant.\n\n\nCONCLUSIONS\nThese results provide evidence to indicate that delivering a CBT program using a mobile application, can result in clinically significant improvements in outcomes for patients with depression.\n\n\nTRIAL REGISTRATION\nAustralian New Zealand Clinical Trials Registry ACTRN 12611001257954.", "title": "" }, { "docid": "df044b996752beb7f0fd067d17c91199", "text": "We introduce lemonUby, a new lexical resource integrated in the Semantic Web which is the result of converting data extracted from the existing large-scale linked lexical resource UBY to the lemon lexicon model. The following data from UBY were converted: WordNet, FrameNet, VerbNet, English and German Wiktionary, the English and German entries of OmegaWiki, as well as links between pairs of these lexicons at the word sense level (links between VerbNet and FrameNet, VerbNet and WordNet, WordNet and FrameNet, WordNet and Wiktionary, WordNet and German OmegaWiki). 
We linked lemonUby to other lexical resources and linguistic terminology repositories in the Linguistic Linked Open Data cloud and outline possible applications of this new dataset.", "title": "" }, { "docid": "9911063e58b5c2406afd761d8826538a", "text": "BACKGROUND\nThe purpose of our study was to evaluate inter-observer reliability of the Three-Column classifications with conventional Schatzker and AO/OTA of Tibial Plateau Fractures.\n\n\nMETHODS\n50 cases involving all kinds of the fracture patterns were collected from 278 consecutive patients with tibial plateau fractures who were internal fixed in department of Orthopedics and Trauma III in Shanghai Sixth People's Hospital. The series were arranged randomly, numbered 1 to 50. Four observers were chosen to classify these cases. Before the research, a classification training session was held to each observer. They were given as much time as they required evaluating the radiographs accurately and independently. The classification choices made at the first viewing were not available during the second viewing. The observers were not provided with any feedback after the first viewing. The kappa statistic was used to analyze the inter-observer reliability of the three fracture classification made by the four observers.\n\n\nRESULTS\nThe mean kappa values for inter-observer reliability regarding Schatzker classification was 0.567 (range: 0.513-0.589), representing \"moderate agreement\". The mean kappa values for inter-observer reliability regarding AO/ASIF classification systems was 0.623 (range: 0.510-0.710) representing \"substantial agreement\". The mean kappa values for inter-observer reliability regarding Three-Column classification systems was 0.766 (range: 0.706-0.890), representing \"substantial agreement\".\n\n\nCONCLUSION\nThree-Column classification, which is dependent on the understanding of the fractures using CT scans as well as the 3D reconstruction can identity the posterior column fracture or fragment. It showed \"substantial agreement\" in the assessment of inter-observer reliability, higher than the conventional Schatzker and AO/OTA classifications. We finally conclude that Three-Column classification provides a higher agreement among different surgeons and could be popularized and widely practiced in other clinical centers.", "title": "" }, { "docid": "474572cef9f1beb875d3ae012e06160f", "text": "Published attacks against smartphones have concentrated on software running on the application processor. With numerous countermeasures like ASLR, DEP and code signing being deployed by operating system vendors, practical exploitation of memory corruptions on this processor has become a time-consuming endeavor. At the same time, the cellular baseband stack of most smartphones runs on a separate processor and is significantly less hardened, if at all. In this paper we demonstrate the risk of remotely exploitable memory corruptions in cellular baseband stacks. We analyze two widely deployed baseband stacks and give exemplary cases of memory corruptions that can be leveraged to inject and execute arbitrary code on the baseband processor. 
The vulnerabilities can be triggered over the air interface using a rogue GSM base station, for instance using OpenBTS together with a USRP software defined radio.", "title": "" }, { "docid": "bfc85b95287e4abc2308849294384d1e", "text": "& 10 0 YE A RS A G O 50 YEARS AGO A Congress was held in Singapore during December 2–9 to celebrate “the Centenary of the formulation of the theory of Evolution by Charles Darwin and Alfred Russel Wallace and the Bicentenary of the publication of the tenth edition of the ‘Systema Naturae’ by Linnaeus”. It was particularly fitting that this Congress should have been held in Singapore for ... it directed special attention to the work of Wallace, who was one of the greatest biologists ever to have worked in south-east Asia ... Prof. Haldane then delivered his presidential address ... The president emphasised the stimuli gained by Linnaeus, Darwin and Wallace through working in peripheral areas where lack of knowledge was a challenge. He suggested that the next major biological advance may well come for similar reasons from peripheral places such as Singapore, or Calcutta, where this challenge still remains and where the lack of complex scientific apparatus drives biologists into different and long-neglected fields of research. From Nature 14 March 1959.", "title": "" }, { "docid": "4c711149abc3af05a8e55e52eefddd97", "text": "Scanning a halftone image introduces halftone artifacts, known as Moire patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies relating half toning arc easily identifiable in the frequency domain. This paper proposes a method for de screening scanned color halftone images using a custom band reject filter designed to isolate and remove only the frequencies related to half toning while leaving image edges sharp without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows arc filtered individually in the frequency domain, then pieced back together in a method that does not show blocking artifacts.", "title": "" }, { "docid": "29479201c12e99eb9802dd05cff60c36", "text": "Exposures to air pollution in the form of particulate matter (PM) can result in excess production of reactive oxygen species (ROS) in the respiratory system, potentially causing both localized cellular injury and triggering a systemic inflammatory response. PM-induced inflammation in the lung is modulated in large part by alveolar macrophages and their biochemical signaling, including production of inflammatory cytokines, the primary mechanism via which inflammation is initiated and sustained. We developed a robust, relevant, and flexible method employing a rat alveolar macrophage cell line (NR8383) which can be applied to routine samples of PM from air quality monitoring sites to gain insight into the drivers of PM toxicity that lead to oxidative stress and inflammation. Method performance was characterized using extracts of ambient and vehicular engine exhaust PM samples. Our results indicate that the reproducibility and the sensitivity of the method are satisfactory and comparisons between PM samples can be made with good precision. The average relative percent difference for all genes detected during 10 different exposures was 17.1%. Our analysis demonstrated that 71% of genes had an average signal to noise ratio (SNR) ≥ 3. 
Our time course study suggests that 4 h may be an optimal in vitro exposure time for observing short-term effects of PM and capturing the initial steps of inflammatory signaling. The 4 h exposure resulted in the detection of 57 genes (out of 84 total), of which 86% had altered expression. Similarities and conserved gene signaling regulation among the PM samples were demonstrated through hierarchical clustering and other analyses. Overlying the core congruent patterns were differentially regulated genes that resulted in distinct sample-specific gene expression \"fingerprints.\" Consistent upregulation of Il1f5 and downregulation of Ccr7 was observed across all samples, while TNFα was upregulated in half of the samples and downregulated in the other half. Overall, this PM-induced cytokine expression assay could be effectively integrated into health studies and air quality monitoring programs to better understand relationships between specific PM components, oxidative stress activity and inflammatory signaling potential.", "title": "" }, { "docid": "24297f719741f6691e5121f33bafcc09", "text": "The hypothesis that cancer is driven by tumour-initiating cells (popularly known as cancer stem cells) has recently attracted a great deal of attention, owing to the promise of a novel cellular target for the treatment of haematopoietic and solid malignancies. Furthermore, it seems that tumour-initiating cells might be resistant to many conventional cancer therapies, which might explain the limitations of these agents in curing human malignancies. Although much work is still needed to identify and characterize tumour-initiating cells, efforts are now being directed towards identifying therapeutic strategies that could target these cells. This Review considers recent advances in the cancer stem cell field, focusing on the challenges and opportunities for anticancer drug discovery.", "title": "" }, { "docid": "f4b5a2584833466fa26da00b07a7f261", "text": "This paper describes the development of the technology threat avoidance theory (TTAT), which explains individual IT users’ behavior of avoiding the threat of malicious information technologies. We articulate that avoidance and adoption are two qualitatively different phenomena and contend that technology acceptance theories provide a valuable, but incomplete, understanding of users’ IT threat avoidance behavior. Drawing from cybernetic theory and coping theory, TTAT delineates the avoidance behavior as a dynamic positive feedback loop in which users go through two cognitive processes, threat appraisal and coping appraisal, to decide how to cope with IT threats. In the threat appraisal, users will perceive an IT threat if they believe that they are susceptible Alan Dennis was the accepting senior editor for this paper. to malicious IT and that the negative consequences are severe. The threat perception leads to coping appraisal, in which users assess the degree to which the IT threat can be avoided by taking safeguarding measures based on perceived effectiveness and costs of the safeguarding measure and selfefficacy of taking the safeguarding measure. TTAT posits that users are motivated to avoid malicious IT when they perceive a threat and believe that the threat is avoidable by taking safeguarding measures; if users believe that the threat cannot be fully avoided by taking safeguarding measures, they would engage in emotion-focused coping. 
Integrating process theory and variance theory, TTAT enhances our understanding of human behavior under IT threats and makes an important contribution to IT security research and practice.", "title": "" }, { "docid": "fa8d8eda07b7045f69325670ba6aff27", "text": "A three-axis tactile force sensor that determines the touch and slip/friction force may advance artificial skin and robotic applications by fully imitating human skin. The ability to detect slip/friction and tactile forces simultaneously allows unknown objects to be held in robotic applications. However, the functionalities of flexible devices have been limited to a tactile force in one direction due to difficulties fabricating devices on flexible substrates. Here we demonstrate a fully printed fingerprint-like three-axis tactile force and temperature sensor for artificial skin applications. To achieve economic macroscale devices, these sensors are fabricated and integrated using only printing methods. Strain engineering enables the strain distribution to be detected upon applying a slip/friction force. By reading the strain difference at four integrated force sensors for a pixel, both the tactile and slip/friction forces can be analyzed simultaneously. As a proof of concept, the high sensitivity and selectivity for both force and temperature are demonstrated using a 3×3 array artificial skin that senses tactile, slip/friction, and temperature. Multifunctional sensing components for a flexible device are important advances for both practical applications and basic research in flexible electronics.", "title": "" }, { "docid": "36a66d72b0cdffb4ef272c4f3da54ba2", "text": "Asthma is a common disease that affects 300 million people worldwide. Given the large number of eosinophils in the airways of people with mild asthma, and verified by data from murine models, asthma was long considered the hallmark T helper type 2 (TH2) disease of the airways. It is now known that some asthmatic inflammation is neutrophilic, controlled by the TH17 subset of helper T cells, and that some eosinophilic inflammation is controlled by type 2 innate lymphoid cells (ILC2 cells) acting together with basophils. Here we discuss results from in-depth molecular studies of mouse models in light of the results from the first clinical trials targeting key cytokines in humans and describe the extraordinary heterogeneity of asthma.", "title": "" }, { "docid": "7c13ebe2897fc4870a152159cda62025", "text": "Tuberculosis (TB) remains a major health threat, killing nearly 2 million individuals around this globe, annually. The only vaccine, developed almost a century ago, provides limited protection only during childhood. After decades without the introduction of new antibiotics, several candidates are currently undergoing clinical investigation. Curing TB requires prolonged combination of chemotherapy with several drugs. Moreover, monitoring the success of therapy is questionable owing to the lack of reliable biomarkers. To substantially improve the situation, a detailed understanding of the cross-talk between human host and the pathogen Mycobacterium tuberculosis (Mtb) is vital. 
Principally, the enormous success of Mtb is based on three capacities: first, reprogramming of macrophages after primary infection/phagocytosis to prevent its own destruction; second, initiating the formation of well-organized granulomas, comprising different immune cells to create a confined environment for the host-pathogen standoff; third, the capability to shut down its own central metabolism, terminate replication, and thereby transit into a stage of dormancy rendering itself extremely resistant to host defense and drug treatment. Here, we review the molecular mechanisms underlying these processes, draw conclusions in a working model of mycobacterial dormancy, and highlight gaps in our understanding to be addressed in future research.", "title": "" }, { "docid": "b9c40aa4c8ac9d4b6cbfb2411c542998", "text": "This review will summarize molecular and genetic analyses aimed at identifying the mechanisms underlying the sequence of events during plant zygotic embryogenesis. These events are being studied in parallel with the histological and morphological analyses of somatic embryogenesis. The strength and limitations of somatic embryogenesis as a model system will be discussed briefly. The formation of the zygotic embryo has been described in some detail, but the molecular mechanisms controlling the differentiation of the various cell types are not understood. In recent years plant molecular and genetic studies have led to the identification and characterization of genes controlling the establishment of polarity, tissue differentiation and elaboration of patterns during embryo development. An investigation of the developmental basis of a number of mutant phenotypes has enabled the identification of gene activities promoting (1) asymmetric cell division and polarization leading to heterogeneous partitioning of the cytoplasmic determinants necessary for the initiation of embryogenesis (e.g. GNOM), (2) the determination of the apical-basal organization which is established independently of the differentiation of the tissues of the radial pattern elements (e.g. KNOLLE, FACKEL, ZWILLE), (3) the differentiation of meristems (e.g. SHOOT-MERISTEMLESS), and (4) the formation of a mature embryo characterized by the accumulation of LEA and storage proteins. The accumulation of these two types of proteins is controlled by ABA-dependent regulatory mechanisms as shown using both ABA-deficient and ABA-insensitive mutants (e.g. ABA, ABI3). Both types of embryogenesis have been studied by different techniques and common features have been identified between them. In spite of the relative difficulty of identifying the original cells involved in the developmental processes of somatic embryogenesis, common regulatory mechanisms are probably involved in the first stages up to the globular form. Signal molecules, such as growth regulators, have been shown to play a role during development of both types of embryos. The most promising method for identifying regulatory mechanisms responsible for the key events of embryogenesis will come from molecular and genetic analyses. The mutations already identified will shed light on the nature of the genes that affect developmental processes as well as elucidating the role of the various regulatory genes that control plant embryogenesis.", "title": "" } ]
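Aside: the descreening passage above (docid 4c711149…) removes halftone screen frequencies with a band-reject filter applied to small overlapped windows in the frequency domain. Below is a minimal NumPy sketch of that idea on a single grayscale window; the window size, screen frequency, and bandwidth are assumed illustration values, not taken from the paper, and the function name is made up.

```python
import numpy as np

def band_reject_window(window, center_freq, bandwidth):
    """Suppress an annular frequency band in one grayscale image window.

    window      : 2-D float array (one overlapped tile of the scan)
    center_freq : radius (cycles/pixel) of the halftone screen frequency
    bandwidth   : half-width of the rejected annulus
    """
    h, w = window.shape
    # Normalised frequency coordinates for each FFT bin.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2)

    # Reject an annulus around the screen frequency, keep everything else.
    mask = np.ones((h, w))
    mask[np.abs(radius - center_freq) < bandwidth] = 0.0

    spectrum = np.fft.fft2(window)
    filtered = np.fft.ifft2(spectrum * mask)
    return np.real(filtered)

# Toy usage: a synthetic 64x64 window with a periodic halftone-like pattern.
yy, xx = np.mgrid[0:64, 0:64]
tile = 0.5 + 0.4 * np.cos(2 * np.pi * 0.25 * xx)   # screen at 0.25 cycles/pixel
clean = band_reject_window(tile, center_freq=0.25, bandwidth=0.03)
print(clean.std() < tile.std())   # periodic component largely removed -> True
```

The paper additionally blends the overlapped windows back together to avoid blocking artifacts; that step is omitted here.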
scidocsrr
8036576ca9dc4f0e628ee3af46abf8bd
Festival visitors’ satisfaction and loyalty: An example of small, local, and municipality organized festival
[ { "docid": "609041388f4b3744d5f1327397bcde7f", "text": "This article reviews ‘event tourism’ as both professional practice and a field of academic study. The origins and evolution of research on event tourism are pinpointed through both chronological and thematic literature reviews. A conceptual model of the core phenomenon and key themes in event tourism studies is provided as a framework for spurring theoretical advancement, identifying research gaps, and assisting professional practice. Conclusions are in two parts: a discussion of implications for the practice of event management and tourism, and implications are drawn for advancing theory in event tourism. r 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e82b6e8a825fc5403b7e7e7be9c68796", "text": "JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact [email protected]. American Marketing Association is collaborating with JSTOR to digitize, preserve and extend access to The Journal of Marketing.", "title": "" } ]
[ { "docid": "801a197f630189ab0a9b79d3cbfe904b", "text": "Historically, Vivaldi arrays are known to suffer from high cross-polarization when scanning in the nonprincipal planes—a fault without a universal solution. In this paper, a solution to this issue is proposed in the form of a new Vivaldi-type array with low cross-polarization termed the Sliced Notch Antenna (SNA) array. For the first proof-of-concept demonstration, simulations and measurements are comparatively presented for two single-polarized <inline-formula> <tex-math notation=\"LaTeX\">$19 \\times 19$ </tex-math></inline-formula> arrays—the proposed SNA and its Vivaldi counterpart—each operating over a 1.2–12 GHz (10:1) band. Both arrays are built using typical vertically integrated printed-circuit board cards, and are designed to exhibit VSWR < 2.5 within a 60° scan cone over most of the 10:1 band as infinite arrays. Measurement results compare very favorably with full-wave finite array simulations that include array truncation effects. The SNA array element demonstrates well-behaved polarization performance versus frequency, with more than 20 dB of D-plane <inline-formula> <tex-math notation=\"LaTeX\">$\\theta \\!=\\!45 {^{\\circ }}$ </tex-math></inline-formula> polarization purity improvement at the high frequency. Moreover, the SNA element also: 1) offers better suppression of classical Vivaldi E-plane scan blindnesses; 2) requires fewer plated through vias for stripline-based designs; and 3) allows relaxed adjacent element electrical contact requirements for dual-polarized arrangements.", "title": "" }, { "docid": "7989dfc34af10676c1c9fa2bc0f61461", "text": "As researchers and industry alike are proposing TV interfaces that use gestures in their designs, understanding users' preferences for gesture commands becomes an important problem. However, no rules or guidelines currently exist to assist designers and practitioners of such interfaces. The paper presents the results of the first study investigating users' preferences for free-hand gestures when controlling the TV set. By conducting an agreement analysis on user-elicited gestures, a set of gesture commands is proposed for basic TV control tasks. Also, guidelines and recommendations issued from observed user behavior are provided to assist practitioners interested in prototyping free-hand gestural designs for the interactive TV.", "title": "" }, { "docid": "d9aa9df213fd244469b66d39952d4949", "text": "We present an efficient Hough transform for automatic detection of cylinders in point clouds. As cylinders are one of the most frequently used primitives for industrial design, automatic and robust methods for their detection and fitting are essential for reverse engineering from point clouds. The current methods employ automatic segmentation followed by geometric fitting, which requires a lot of manual interaction during modelling. Although Hough transform can be used for automatic detection of cylinders, the required 5D Hough space has a prohibitively high time and space complexity for most practical applications. We address this problem in this paper and present a sequential Hough transform for automatic detection of cylinders in point clouds. Our algorithm consists of two sequential steps of low dimensional Hough transforms. The first step, called Orientation Estimation, uses the Gaussian sphere of the input data and performs a 2D Hough Transform for finding strong hypotheses for the direction of cylinder axis. 
The second step of Position and Radius Estimation consists of a 3D Hough transform for estimating cylinder position and radius. This sequential breakdown reduces the space and time complexity while retaining the advantages of robustness against outliers and multiple instances. The results of applying this algorithm to real data sets from two industrial sites are presented that demonstrate the effectiveness of this procedure for automatic cylinder detection.", "title": "" }, { "docid": "5b41a7c287b54b16e9d791cb62d7aa5a", "text": "Recent evidence demonstrates that children are selective in their social learning, preferring to learn from a previously accurate speaker rather than from a previously inaccurate one. We examined whether children assessing speakers' reliability take into account how speakers achieved their prior accuracy. In Study 1, when faced with two accurate informants, 4- and 5-year-olds (but not 3-year-olds) were more likely to seek novel information from an informant who had previously given the answers unaided than from an informant who had always relied on help from a third party. Similarly, in Study 2, 4-year-olds were more likely to trust the testimony of an unaided informant over the testimony provided by an assisted informant. Our results indicate that when children reach around 4 years of age, their selective trust extends beyond simple generalizations based on informants' past accuracy to a more sophisticated selectivity that distinguishes between truly knowledgeable informants and merely accurate informants who may not be reliable in the long term.", "title": "" }, { "docid": "1aa7e7fe70bdcbc22b5d59b0605c34e9", "text": "Surgical tasks are complex multi-step sequences of smaller subtasks (often called surgemes) and it is useful to segment task demonstrations into meaningful subsequences for: (a) extracting finite-state machines for automation, (b) surgical training and skill assessment, and (c) task classification. Existing supervised methods for task segmentation use segment labels from a dictionary of motions to build classifiers. However, as the datasets become voluminous, the labeling becomes arduous and further, this method doesn't generalize to new tasks that don't use the same dictionary. We propose an unsupervised semantic task segmentation framework by learning “milestones”, ellipsoidal regions of the position and feature states at which a task transitions between motion regimes modeled as locally linear. Milestone learning uses a hierarchy of Dirichlet Process Mixture Models, learned through Expectation-Maximization, to cluster the transition points and optimize the number of clusters. It leverages transition information from kinematic state as well as environment state such as visual features. We also introduce a compaction step which removes repetitive segments that correspond to a mid-demonstration failure recovery by retrying an action. We evaluate Milestones Learning on three surgical subtasks: pattern cutting, suturing, and needle passing. Initial results suggest that our milestones qualitatively match manually annotated segmentation. 
While one-to-one correspondence of milestones with annotated data is not meaningful, the milestones recovered from our method have exactly one annotated surgeme transition in 74% (needle passing) and 66% (suturing) of total milestones, indicating a semantic match.", "title": "" }, { "docid": "5bee27378a98ff5872f7ae5e899f81e2", "text": "An algorithmic framework is proposed to process acceleration and surface electromyographic (SEMG) signals for gesture recognition. It includes a novel segmentation scheme, a score-based sensor fusion scheme, and two new features. A Bayes linear classifier and an improved dynamic time-warping algorithm are utilized in the framework. In addition, a prototype system, including a wearable gesture sensing device (embedded with a three-axis accelerometer and four SEMG sensors) and an application program with the proposed algorithmic framework for a mobile phone, is developed to realize gesture-based real-time interaction. With the device worn on the forearm, the user is able to manipulate a mobile phone using 19 predefined gestures or even personalized ones. Results suggest that the developed prototype responded to each gesture instruction within 300 ms on the mobile phone, with the average accuracy of 95.0% in user-dependent testing and 89.6% in user-independent testing. Such performance during the interaction testing, along with positive user experience questionnaire feedback, demonstrates the utility of the framework.", "title": "" }, { "docid": "80ff93b5f2e0ff3cff04c314e28159fc", "text": "In the past 30 years there has been a growing body of research using different methods (behavioural, electrophysiological, neuropsychological, TMS and imaging studies) asking whether processing words from different grammatical classes (especially nouns and verbs) engage different neural systems. To date, however, each line of investigation has provided conflicting results. Here we present a review of this literature, showing that once we take into account the confounding in most studies between semantic distinctions (objects vs. actions) and grammatical distinction (nouns vs. verbs), and the conflation between studies concerned with mechanisms of single word processing and those studies concerned with sentence integration, the emerging picture is relatively clear-cut: clear neural separability is observed between the processing of object words (nouns) and action words (typically verbs), grammatical class effects emerge or become stronger for tasks and languages imposing greater processing demands. These findings indicate that grammatical class per se is not an organisational principle of knowledge in the brain; rather, all the findings we review are compatible with two general principles described by typological linguistics as underlying grammatical class membership across languages: semantic/pragmatic, and distributional cues in language that distinguish nouns from verbs. These two general principles are incorporated within an emergentist view which takes these constraints into account.", "title": "" }, { "docid": "692174cc5dd763333cebbea576c8930b", "text": "The Histograms of Oriented Gradients (HOG) descriptor represents shape information by storing the local gradients in an image. The Haar wavelet transform is a simple yet powerful technique that can separately enhance the horizontal and vertical local features in an image. 
In this paper, we enhance the HOG descriptor by subjecting the image to the Haar wavelet transform and then computing HOG from the result in a manner that enriches the shape information encoded in the descriptor. First, we define the novel HaarHOG descriptor for grayscale images and extend this idea for color images. Second, we compare the image recognition performance of the HaarHOG descriptor with the traditional HOG descriptor in four different color spaces and grayscale. Finally, we compare the image classification performance of the HaarHOG descriptor with some popular descriptors used by other researchers on four grand challenge datasets.", "title": "" }, { "docid": "690a2b067af8810d5da7d3389b7b4d78", "text": "Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound of the minimum distortion is possible. Currently available methods of computing such a bound are either time-consuming or deliver low-quality bounds that are too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms (Fast-Lin, Fast-Lip) that are able to certify non-trivial lower bounds of minimum adversarial distortions. Experiments show that (1) our methods deliver bounds close to (the gap is 2-3X) exact minimum distortions found by Reluplex in small networks while our algorithms are more than 10,000 times faster; (2) our methods deliver similar quality of bounds (the gap is within 35% and usually around 10%; sometimes our bounds are even better) for larger networks compared to the methods based on solving linear programming problems but our algorithms are 33-14,000 times faster; (3) our method is capable of solving large MNIST and CIFAR networks up to 7 layers with more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that there is no polynomial time algorithm that can approximately find the minimum ℓ1 adversarial distortion of a ReLU network with a 0.99 ln n approximation ratio unless NP=P, where n is the number of neurons in the network. Source code is available at https://github.com/huanzhang12/CertifiedReLURobustness.", "title": "" }, { "docid": "e7f28b6e8102f4f133fddb85ffe50eef", "text": "Mainstream machine learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful machine learning models. 
Here we show how to surpass this bottleneck and illustrate our findings by training probabilistic generative models with arbitrary pairwise connectivity on a real dataset of handwritten digits and two synthetic datasets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Boltzmann-like distribution. Therefore, the need to infer the effective temperature at each iteration is avoided, speeding up learning, and the effect of noise in the control parameters is mitigated, improving accuracy. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models of real datasets and provides a suitable framework for benchmarking these quantum technologies on tasks qualitatively different from the traditional ones.", "title": "" }, { "docid": "483a349f65e1524916ea0190ecf4e18b", "text": "Physical library collections are valuable and long standing resources for knowledge and learning. However, managing books in a large bookshelf and finding books on it often leads to tedious manual work, especially for large book collections where books might be missing or misplaced. Recently, deep neural models, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have achieved great success for scene text detection and recognition. Motivated by these recent successes, we aim to investigate their viability in facilitating book management, a task that introduces further challenges including large amounts of cluttered scene text, distortion, and varied lighting conditions. In this paper, we present a library inventory building and retrieval system based on scene text reading methods. We specifically design our scene text recognition model using rich supervision to accelerate training and achieve state-of-the-art performance on several benchmark datasets. Our proposed system has the potential to greatly reduce the amount of human labor required in managing book inventories as well as the space needed to store book information.", "title": "" }, { "docid": "a2bdce49cd3faabd3b0afbe0abd8ef54", "text": "The revolution of World Wide Web (WWW) and smart-phone technologies have been the key-factor behind remarkable success of social networks. With the ease of availability of check-in data, the location-based social networks (LBSN) (e.g., Facebook, etc.) have been heavily explored in the past decade for Point-of-Interest (POI) recommendation. Though many POI recommenders have been defined, most of them have focused on recommending a single location or an arbitrary list that is not contextually coherent. It has been cumbersome to rely on such systems when one needs a contextually coherent list of locations, that can be used for various day-to-day activities, for e.g., itinerary planning. This paper proposes a model termed as CAPS (Context Aware Personalized POI Sequence Recommender System) that generates contextually coherent POI sequences relevant to user preferences. To the best of our knowledge, CAPS is the first attempt to formulate the contextual POI sequence modeling by extending Recurrent Neural Network (RNN) and its variants. CAPS extends RNN by incorporating multiple contexts to the hidden layer and by incorporating global context (sequence features) to the hidden layers and output layer. 
It extends the variants of RNN (e.g., Long-short term memory (LSTM)) by incorporating multiple contexts and global features in the gate update relations. The major contributions of this paper are: (i) it models the contextual POI sequence problem by incorporating personalized user preferences through multiple constraints (e.g., categorical, social, temporal, etc.), (ii) it extends RNN to incorporate the contexts of individual item and that of whole sequence. It also extends the gated functionality of variants of RNN to incorporate the multiple contexts, and (iii) it evaluates the proposed models against two real-world data sets.", "title": "" }, { "docid": "aff973bc6789375b6518814dbcfde4d9", "text": "OBJECTIVE\nThis paper aims to report on the accuracy of estimating sleep stages using a wrist-worn device that measures movement using a 3D accelerometer and an optical pulse photoplethysmograph (PPG).\n\n\nAPPROACH\nOvernight recordings were obtained from 60 adult participants wearing these devices on their left and right wrist, simultaneously with a Type III home sleep testing device (Embletta MPR) which included EEG channels for sleep staging. The 60 participants were self-reported normal sleepers (36 M: 24 F, age  =  34  ±  10, BMI  =  28  ±  6). The Embletta recordings were scored for sleep stages using AASM guidelines and were used to develop and validate an automated sleep stage estimation algorithm, which labeled sleep stages as one of Wake, Light (N1 or N2), Deep (N3) and REM (REM). Features were extracted from the accelerometer and PPG sensors, which reflected movement, breathing and heart rate variability.\n\n\nMAIN RESULTS\nBased on leave-one-out validation, the overall per-epoch accuracy of the automated algorithm was 69%, with a Cohen's kappa of 0.52  ±  0.14. There was no observable bias to under- or over-estimate wake, light, or deep sleep durations. REM sleep duration was slightly over-estimated by the system. The most common misclassifications were light/REM and light/wake mislabeling.\n\n\nSIGNIFICANCE\nThe results indicate that a reasonable degree of sleep staging accuracy can be achieved using a wrist-worn device, which may be of utility in longitudinal studies of sleep habits.", "title": "" }, { "docid": "565b07fee5a5812d04818fa132c0da4c", "text": "PHP is the most popular scripting language for web applications. Because no native solution to compile or protect PHP scripts exists, PHP applications are usually shipped as plain source code which is easily understood or copied by an adversary. In order to prevent such attacks, commercial products such as ionCube, Zend Guard, and Source Guardian promise a source code protection. In this paper, we analyze the inner working and security of these tools and propose a method to recover the source code by leveraging static and dynamic analysis techniques. We introduce a generic approach for decompilation of obfuscated bytecode and show that it is possible to automatically recover the original source code of protected software. As a result, we discovered previously unknown vulnerabilities and backdoors in 1 million lines of recovered source code of 10 protected applications.", "title": "" }, { "docid": "f7085b22f8ffe30aef9a2c8f22cd6741", "text": "BACKGROUND\nThis study aimed to determine whether there were sensitive periods when a first exposure to trauma was most associated with emotion dysregulation symptoms in adulthood.\n\n\nMETHODS\nAdult participants came from a public urban hospital in Atlanta, GA (n = 1944). 
Lifetime trauma exposure was assessed using the Traumatic Events Inventory (TEI). Multiple linear regression models were used to assess the association between the developmental timing of first trauma exposure, classified as early childhood (ages 0-5), middle childhood (ages 6-10), adolescence (ages 11-18), and adulthood (ages 19+), on adult emotion dysregulation symptoms, measured using the abbreviated Emotion Dysregulation Scale.\n\n\nRESULTS\nParticipants exposed to trauma at any age had higher emotion dysregulation scores than their unexposed peers. However, participants first exposed to child maltreatment or interpersonal violence during middle childhood had higher emotion dysregulation scores relative to those first exposed during other developmental stages; these developmental timing differences were detected even after controlling for sociodemographic factors, exposure to other trauma, and frequency of exposure to trauma. Further, after controlling for current psychiatric symptoms, the effect of other interpersonal trauma exposure in middle childhood was diminished and first exposure to other interpersonal violence in early childhood was associated with significantly lower emotion dysregulation symptoms.\n\n\nLIMITATIONS\nLimitations of this study include the use of retrospective reports and absence of complete information about trauma severity or duration.\n\n\nCONCLUSION\nThese findings should be replicated in other population-based samples with prospective designs to confirm the importance of developmental timing of trauma on later emotion dysregulation.", "title": "" }, { "docid": "985e19556726656ddfeb07703d27dde7", "text": "PURPOSE\nThis study evaluated the long-term survival of anterior porcelain laminate veneers placed with and without incisal porcelain coverage.\n\n\nMATERIALS AND METHODS\nTwo prosthodontists in a private dental practice placed 110 labial feldspathic porcelain veneers in 50 patients; 46 veneers were provided with incisal porcelain coverage, and 64 were not. The veneers were evaluated retrospectively from case records for up to 7 years (mean 4 years).\n\n\nRESULTS\nAt 5, 6, and 7 years, the cumulative survival estimates were 95.8% for veneers with incisal porcelain coverage and 85.5% for those without incisal coverage. The difference was not statistically significant. Six of the nine failures occurred from porcelain fracture in the veneers without incisal coverage.\n\n\nCONCLUSION\nAlthough there was a trend for better long-term survival of the veneers with incisal porcelain coverage, this finding was not statistically significant.", "title": "" }, { "docid": "a45c93e89cc3df3ebec59eb0c81192ec", "text": "We study a variant of the capacitated vehicle routing problem where the cost over each arc is defined as the product of the arc length and the weight of the vehicle when it traverses that arc. We propose two new mixed integer linear programming formulations for the problem: an arc-load formulation and a set partitioning formulation based on q-routes with additional constraints. A family of cycle elimination constraints are derived for the arc-load formulation. We then compare the linear programming (LP) relaxations of these formulations with the twoindex one-commodity flow formulation proposed in the literature. 
In particular, we show that the arc-load formulation with the new cycle elimination constraints gives the same LP bound as the set partitioning formulation based on 2-cycle-free q-routes, which is stronger than the LP bound given by the two-index one-commodity flow formulation. We propose a branchand-cut algorithm for the arc-load formulation, and a branch-cut-and-price algorithm for the set partitioning formulation strengthened by additional constraints. Computational results on instances from the literature demonstrate that a significant improvement can be achieved by the branch-cut-and-price algorithm over other methods.", "title": "" }, { "docid": "d13b4d08c29049a89d98c410bd834421", "text": "Sodium-ion batteries offer an attractive option for potential low cost and large scale energy storage due to the earth abundance of sodium. Red phosphorus is considered as a high capacity anode for sodium-ion batteries with a theoretical capacity of 2596 mAh/g. However, similar to silicon in lithium-ion batteries, several limitations, such as large volume expansion upon sodiation/desodiation and low electronic conductance, have severely limited the performance of red phosphorus anodes. In order to address the above challenges, we have developed a method to deposit red phosphorus nanodots densely and uniformly onto reduced graphene oxide sheets (P@RGO) to minimize the sodium ion diffusion length and the sodiation/desodiation stresses, and the RGO network also serves as electron pathway and creates free space to accommodate the volume variation of phosphorus particles. The resulted P@RGO flexible anode achieved 1165.4, 510.6, and 135.3 mAh/g specific charge capacity at 159.4, 31878.9, and 47818.3 mA/g charge/discharge current density in rate capability test, and a 914 mAh/g capacity after 300 deep cycles in cycling stability test at 1593.9 mA/g current density, which marks a significant performance improvement for red phosphorus anodes for sodium-ion chemistry and flexible power sources for wearable electronics.", "title": "" }, { "docid": "0cb3d77cfe1d355e948f55e18717ca22", "text": "This Wireless Mobile Battery Charger project is using technique of inductive coupling. The basic concept of his technique was applied in transformer construction. With this technique, the power from AC or DC can be transfer through the medium of magnetic field or air space. In this project, the method is divided into two major activities which is to purpose circuit construction and to fabricate the prototype. The result is to evaluate the distance of power that can be transferred using technique of inductive coupling.", "title": "" }, { "docid": "56ef8e06338afb791728783787f0eefb", "text": "The definition of a modern family is changing. In this case study, we describe the breastfeeding experience of a child receiving human milk from all 3 of his mothers: his 2 adoptive mothers, who induced lactation to nurse him, and his birth mother, who shared in his early feeding during the open adoption process and continued to pump and send milk to him for several months. We review the lactation protocol used by his adoptive mothers and the unique difficulties inherent in this multi-mother family dynamic. Both adoptive mothers successfully induced moderate milk production using a combination of hormonal birth control, domperidone, herbal supplements, and a schedule of breast pumping. 
However, because of the increased complexity of the immediate postpartum period and concerns with defining parental roles in a same-sex marriage, maintenance of milk production was difficult.", "title": "" } ]
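Aside: the vehicle-routing passage above (docid a45c93e8…) studies a CVRP variant where each arc costs its length multiplied by the vehicle weight while traversing it. The sketch below evaluates a single route under that objective; the delivery interpretation (vehicle leaves full and gets lighter), the empty-vehicle weight, and the tiny distance/demand instance are all assumptions made for illustration, not data from the paper.

```python
import numpy as np

def weighted_route_cost(route, dist, demands, empty_weight):
    """Cost of one route when each arc costs length * current vehicle weight.

    route        : visit order, e.g. [0, 1, 2, 3, 0] with depot = 0
    dist         : symmetric distance matrix
    demands      : demand (weight delivered) per node, depot demand = 0
    empty_weight : weight of the empty vehicle
    """
    # Delivery interpretation: the vehicle leaves the depot carrying all
    # demand on the route and gets lighter after each stop.
    load = empty_weight + sum(demands[i] for i in route)
    cost = 0.0
    for a, b in zip(route[:-1], route[1:]):
        cost += dist[a][b] * load      # weight carried while traversing (a, b)
        load -= demands[b]             # drop off customer b's demand
    return cost

# Made-up 4-node instance: node 0 is the depot.
dist = np.array([[0, 4, 6, 5],
                 [4, 0, 3, 7],
                 [6, 3, 0, 2],
                 [5, 7, 2, 0]], dtype=float)
demands = [0, 2, 1, 3]
print(weighted_route_cost([0, 1, 2, 3, 0], dist, demands, empty_weight=1.0))  # 56.0
```

The paper itself optimizes over all routes via MILP formulations (arc-load and set partitioning); this snippet only shows how the weight-dependent objective is evaluated for one candidate route.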
scidocsrr
c4f854c1dc799d9701d8c708a58bf9f6
CoReCast: Collision Resilient Broadcasting in Vehicular Networks
[ { "docid": "65e3890edd57a0a6de65b4e38f3cea1c", "text": "This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an `1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of `1-analysis for such problems.", "title": "" }, { "docid": "8d6da0919363f3c528e9105ee41b0315", "text": "There is a long-standing vision of embedding backscatter nodes like RFIDs into everyday objects to build ultra-low power ubiquitous networks. A major problem that has challenged this vision is that backscatter communication is neither reliable nor efficient. Backscatter nodes cannot sense each other, and hence tend to suffer from colliding transmissions. Further, they are ineffective at adapting the bit rate to channel conditions, and thus miss opportunities to increase throughput, or transmit above capacity causing errors.\n This paper introduces a new approach to backscatter communication. The key idea is to treat all nodes as if they were a single virtual sender. One can then view collisions as a code across the bits transmitted by the nodes. By ensuring only a few nodes collide at any time, we make collisions act as a sparse code and decode them using a new customized compressive sensing algorithm. Further, we can make these collisions act as a rateless code to automatically adapt the bit rate to channel quality --i.e., nodes can keep colliding until the base station has collected enough collisions to decode. Results from a network of backscatter nodes communicating with a USRP backscatter base station demonstrate that the new design produces a 3.5× throughput gain, and due to its rateless code, reduces message loss rate in challenging scenarios from 50% to zero.", "title": "" }, { "docid": "766bc5cee369a729dc310c7134edc36e", "text": "Spatial multiple access holds the promise to boost the capacity of wireless networks when an access point has multiple antennas. Due to the asynchronous and uncontrolled nature of wireless LANs, conventional MIMO technology does not work efficiently when concurrent transmissions from multiple stations are uncoordinated. In this paper, we present the design and implementation of a crosslayer system, called SAM, that addresses the challenges of enabling spatial multiple access for multiple devices in a random access network like WLAN. SAM uses a chain-decoding technique to reliably recover the channel parameters for each device, and iteratively decode concurrent frames with misaligned symbol timings and frequency offsets. We propose a new MAC protocol, called CCMA, to enable concurrent transmissions by different mobile stations while remaining backward compatible with 802.11. 
Finally, we implement the PHY and MAC layer of SAM using the Sora high-performance software radio platform. Our evaluation results under real wireless conditions show that SAM can improve network uplink throughput by 70% with two antennas over 802.11.", "title": "" } ]
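Aside: the passages above treat colliding backscatter transmissions as a sparse code recovered with compressive sensing. As an illustration of that recovery step, here is a minimal orthogonal matching pursuit sketch for recovering a sparse vector from a few random linear measurements; the problem sizes, sparsity level, and random measurement matrix are arbitrary toy values, not the actual decoder used in the cited systems.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover sparse x such that y ~= A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3                    # ambient dim, measurements, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = omp(A, y, sparsity=k)
print(np.allclose(x_hat, x_true, atol=1e-6))   # typically True in this regime
```

Greedy recovery like this is only one option; the ℓ1-analysis formulation in the first passage solves a convex program instead.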
[ { "docid": "8d258bac9030dae406fff2c13ae0db43", "text": "This paper investigates the validity of Kleinberg’s axioms for clustering functions with respect to the quite popular clustering algorithm called k-means.We suggest that the reason why this algorithm does not fit Kleinberg’s axiomatic system stems from missing match between informal intuitions and formal formulations of the axioms. While Kleinberg’s axioms have been discussed heavily in the past, we concentrate here on the case predominantly relevant for k-means algorithm, that is behavior embedded in Euclidean space. We point at some contradictions and counter intuitiveness aspects of this axiomatic set within R that were evidently not discussed so far. Our results suggest that apparently without defining clearly what kind of clusters we expect we will not be able to construct a valid axiomatic system. In particular we look at the shape and the gaps between the clusters. Finally we demonstrate that there exist several ways to reconcile the formulation of the axioms with their intended meaning and that under this reformulation the axioms stop to be contradictory and the real-world k-means algorithm conforms to this axiomatic system.", "title": "" }, { "docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05", "text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.", "title": "" }, { "docid": "451434f1181c021eb49442d6eb6617c5", "text": "In this paper, we use variational recurrent neural network to investigate the anomaly detection problem on graph time series. The temporal correlation is modeled by the combination of recurrent neural network (RNN) and variational inference (VI), while the spatial information is captured by the graph convolutional network. In order to incorporate external factors, we use feature extractor to augment the transition of latent variables, which can learn the influence of external factors. With the target function as accumulative ELBO, it is easy to extend this model to on-line method. The experimental study on traffic flow data shows the detection capability of the proposed method.", "title": "" }, { "docid": "1759b81ec84163a829b2dc16a75d3fa6", "text": "Today's memory technologies, such as DRAM, SRAM, and NAND Flash, are facing major challenges with regard to their continued scaling. For instance, ITRS projects that DRAM cannot scale easily below 40nm as the cost and energy/power are hard -if not impossible- to scale. Fortunately, the international memory technology community has been researching other alternative for more than fifteen years. 
Apparently, non-volatile resistive memories are promising to replace the today's memories for many reasons such as better scalability, low cost, higher capacity, lower energy, CMOS compatibility, better configurability, etc. This paper discusses and highlights three major aspects of resistive memories, especially memristor based memories: (a) technology and design constraints, (b) architectures, and (c) testing and design-for-test. It shows the opportunities and the challenges.", "title": "" }, { "docid": "5d88d94da2fd8be95ed4258c5ff24f9a", "text": "Database query processing traditionally relies on three alternative join algorithms: index nested loops join exploits an index on its inner input, merge join exploits sorted inputs, and hash join exploits differences in the sizes of the join inputs. Cost-based query optimization chooses the most appropriate algorithm for each query and for each operation. Unfortunately, mistaken algorithm choices during compile-time query optimization are common yet expensive to investigate and to resolve. Our goal is to end mistaken choices among join algorithms by replacing the three traditional join algorithms with a single one. Like merge join, this new join algorithm exploits sorted inputs. Like hash join, it exploits different input sizes for unsorted inputs. In fact, for unsorted inputs, the cost functions for recursive hash join and for hybrid hash join have guided our search for the new join algorithm. In consequence, the new join algorithm can replace both merge join and hash join in a database management system. The in-memory components of the new join algorithm employ indexes. If the database contains indexes for one (or both) of the inputs, the new join can exploit persistent indexes instead of temporary in-memory indexes. Using database indexes to match input records, the new join algorithm can also replace index nested loops join. Results from an implementation of the core algorithm are reported.", "title": "" }, { "docid": "a97f71e0d5501add1ae08eeee5378045", "text": "Machine learning is being implemented in bioinformatics and computational biology to solve challenging problems emerged in the analysis and modeling of biological data such as DNA, RNA, and protein. The major problems in classifying protein sequences into existing families/superfamilies are the following: the selection of a suitable sequence encoding method, the extraction of an optimized subset of features that possesses significant discriminatory information, and the adaptation of an appropriate learning algorithm that classifies protein sequences with higher classification accuracy. The accurate classification of protein sequence would be helpful in determining the structure and function of novel protein sequences. In this article, we have proposed a distance-based sequence encoding algorithm that captures the sequence’s statistical characteristics along with amino acids sequence order information. A statistical metric-based feature selection algorithm is then adopted to identify the reduced set of features to represent the original feature space. The performance of the proposed technique is validated using some of the best performing classifiers implemented previously for protein sequence classification. 
An average classification accuracy of 92% was achieved on the yeast protein sequence data set downloaded from the benchmark UniProtKB database.", "title": "" }, { "docid": "b5c27fa3dbcd917f7cdc815965b22a67", "text": "Our aim is to provide a pixel-wise instance-level labeling of a monocular image in the context of autonomous driving. We build on recent work [32] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [32] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [15]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [32].", "title": "" }, { "docid": "c8787aa5e3d00452dbf7aaa93c2a4307", "text": "In recent years, several mobile applications allowed individuals to anonymously share information with friends and contacts, without any persistent identity marker. The functions of these \"tie-based\" anonymity services may be notably different than other social media services. We use semi-structured interviews to qualitatively examine motivations, practices and perceptions in two tie-based anonymity apps: Secret (now defunct, in the US) and Mimi (in China). Among the findings, we show that: (1) while users are more comfortable in self-disclosure, they still have specific practices and strategies to avoid or allow identification; (2) attempts for deidentification of others are prevalent and often elaborate; and (3) participants come to expect both negativity and support in response to posts. Our findings highlight unique opportunities and potential benefits for tie-based anonymity apps, including serving disclosure needs and social probing. Still, challenges for making such applications successful, for example the prevalence of negativity and bullying, are substantial.", "title": "" }, { "docid": "2a13609a94050c4477d94cf0d89cbdd3", "text": "In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average over the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning on the classification calibration of the ATk loss and the error bounds of ATk-SVM. 
We demonstrate the applicability of minimum average top-k learning for binary classification and regression using synthetic and real datasets.", "title": "" }, { "docid": "e615ff8da6cdd43357e41aa97df88cc0", "text": "In recent years, increasing numbers of people have been choosing herbal medicines or products to improve their health conditions, either alone or in combination with others. Herbs are staging a comeback and herbal \"renaissance\" occurs all over the world. According to the World Health Organization, 75% of the world's populations are using herbs for basic healthcare needs. Since the dawn of mankind, in fact, the use of herbs/plants has offered an effective medicine for the treatment of illnesses. Moreover, many conventional/pharmaceutical drugs are derived directly from both nature and traditional remedies distributed around the world. Up to now, the practice of herbal medicine entails the use of more than 53,000 species, and a number of these are facing the threat of extinction due to overexploitation. This paper aims to provide a review of the history and status quo of Chinese, Indian, and Arabic herbal medicines in terms of their significant contribution to the health promotion in present-day over-populated and aging societies. Attention will be focused on the depletion of plant resources on earth in meeting the increasing demand for herbs.", "title": "" }, { "docid": "a398f3f5b670a9d2c9ae8ad84a4a3cb8", "text": "This project deals with online simultaneous localization and mapping (SLAM) problem without taking any assistance from Global Positioning System (GPS) and Inertial Measurement Unit (IMU). The main aim of this project is to perform online odometry and mapping in real time using a 2-axis lidar mounted on a robot. This involves use of two algorithms, the first of which runs at a higher frequency and uses the collected data to estimate velocity of the lidar which is fed to the second algorithm, a scan registration and mapping algorithm, to perform accurate matching of point cloud data.", "title": "" }, { "docid": "dae40fa32526bf965bad70f98eb51bb7", "text": "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform the prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2× weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21× weight reduction without accuracy loss. When we focus on the convolutional layer pruning for computation reductions, we can reduce the total computation by five times compared with the prior work (achieving a total of 13.4× weight reduction in convolutional layers). 
Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning.", "title": "" }, { "docid": "22293b6953e2b28e1b3dc209649a7286", "text": "The Liquid State Machine (LSM) has emerged as a computational model that is more adequate than the Turing machine for describing computations in biological networks of neurons. Characteristic features of this new model are (i) that it is a model for adaptive computational systems, (ii) that it provides a method for employing randomly connected circuits, or even “found” physical objects for meaningful computations, (iii) that it provides a theoretical context where heterogeneous, rather than stereotypical, local gates or processors increase the computational power of a circuit, (iv) that it provides a method for multiplexing different computations (on a common input) within the same circuit. This chapter reviews the motivation for this model, its theoretical background, and current work on implementations of this model in innovative artificial computing devices.", "title": "" }, { "docid": "924768b271caa9d1ba0cb32ab512f92e", "text": "Traditional keyboard and mouse based presentation prevents lecturers from interacting with the audiences freely and closely. In this paper, we propose a gesture-aware presentation tool named SlideShow to liberate lecturers from physical space constraints and make human-computer interaction more natural and convenient. In our system, gesture data is obtained by a handle controller with 3-axis accelerometer and gyro and transmitted to host-side through bluetooth, then we use Bayesian change point detection to segment continuous gesture series and HMM to recognize the gesture. In consequence Slideshow could carry out the corresponding operations on PowerPoint(PPT) to make a presentation, and operation states can be switched automatically and intelligently during the presentation. Both the experimental and testing results show our approach is practical, useful and convenient.", "title": "" }, { "docid": "b7dd7ad186b55f02724e89f1d29dd285", "text": "The Web of Linked Data is built upon the idea that data items on the Web are connected by RDF links. Sadly, the reality on the Web shows that Linked Data sources set some RDF links pointing at data items in related data sources, but they clearly do not set RDF links to all data sources that provide related data. In this paper, we present Silk Server, an identity resolution component, which can be used within Linked Data application architectures to augment Web data with additional RDF links. Silk Server is designed to be used with an incoming stream of RDF instances, produced for example by a Linked Data crawler. Silk Server matches the RDF descriptions of incoming instances against a local set of known instances and discovers missing links between them. Based on this assessment, an application can store data about newly discovered instances in its repository or fuse data that is already known about an entity with additional data about the entity from the Web. Afterwards, we report on the results of an experiment in which Silk Server was used to generate RDF links between authors and publications from the Semantic Web Dog Food Corpus and a stream of FOAF profiles that were crawled from the Web.", "title": "" }, { "docid": "9e91f7e57e074ec49879598c13035d70", "text": "Wafer Level Package (WLP) technology has seen tremendous advances in recent years and is rapidly being adopted at the 65nm Low-K silicon node. 
For a true WLP, the package size is same as the die (silicon) size and the package is usually mounted directly on to the Printed Circuit Board (PCB). Board level reliability (BLR) is a bigger challenge on WLPs than the package level due to a larger CTE mismatch and difference in stiffness between silicon and the PCB [1]. The BLR performance of the devices with Low-K dielectric silicon becomes even more challenging due to their fragile nature and lower mechanical strength. A post fab re-distribution layer (RDL) with polymer stack up provides a stress buffer resulting in an improved board level reliability performance. Drop shock (DS) and temperature cycling test (TCT) are the most commonly run tests in the industry to gauge the BLR performance of WLPs. While a superior drop performance is required for devices targeting mobile handset applications, achieving acceptable TCT performance on WLPs can become challenging at times. BLR performance of WLP is sensitive to design features such as die size, die aspect ratio, ball pattern and ball density etc. In this paper, 65nm WLPs with a post fab Cu RDL have been studied for package and board level reliability. Standard JEDEC conditions are applied during the reliability testing. Here, we present a detailed reliability evaluation on multiple WLP sizes and varying ball patterns. Die size ranging from 10 mm2 to 25 mm2 were studied along with variation in design features such as die aspect ratio and the ball density (fully populated and de-populated ball pattern). All test vehicles used the aforementioned 65nm fab node.", "title": "" }, { "docid": "86f25f09b801d28ce32f1257a39ddd44", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.", "title": "" }, { "docid": "204f7e7763b447c1aeff1dc6fb639786", "text": "Towards learning programs from data, we introduce the problem of sampling programs from posterior distributions conditioned on that data. Within this setting, we propose an algorithm that uses a symbolic solver to efficiently sample programs. The proposal combines constraint-based program synthesis with sampling via random parity constraints. 
We give theoretical guarantees on how well the samples approximate the true posterior, and have empirical results showing the algorithm is efficient in practice, evaluating our approach on 22 program learning problems in the domains of text editing and computer-aided programming.", "title": "" }, { "docid": "3a6197322da0e5fe2c2d98a8fcba7a42", "text": "The amygdala and hippocampal complex, two medial temporal lobe structures, are linked to two independent memory systems, each with unique characteristic functions. In emotional situations, these two systems interact in subtle but important ways. Specifically, the amygdala can modulate both the encoding and the storage of hippocampal-dependent memories. The hippocampal complex, by forming episodic representations of the emotional significance and interpretation of events, can influence the amygdala response when emotional stimuli are encountered. Although these are independent memory systems, they act in concert when emotion meets memory.", "title": "" }, { "docid": "f69ff67f18e9bd7f5c21a4ee160b24c8", "text": "In this paper, we propose a novel sequential neural network with structure attention to model information diffusion. The proposed model explores both sequential nature of an information diffusion process and structural characteristics of user connection graph. The recurrent neural network framework is employed to model the sequential information. The attention mechanism is incorporated to capture the structural dependency among users, which is defined as the diffusion context of a user. A gating mechanism is further developed to effectively integrate the sequential and structural information. The proposed model is evaluated on the diffusion prediction task. The performances on both synthetic and real datasets demonstrate its superiority over popular baselines and state-of-the-art sequence-based models.", "title": "" } ]
scidocsrr
f24dc8df6e3f1487ef87a385a01c329c
Exploring the Impact of IT Service Management Process Improvement Initiatives: A Case Study Approach
[ { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "b769f7b96b9613132790a73752c2a08f", "text": "ITIL is the most widely used IT framework in majority of organizations in the world now. However, implementing such best practice experiences in an organization comes with some implementation challenges such as staff resistance, task conflicts and ambiguous orders. It means that implementing such framework is not easy and it can be caused of the organization destruction. This paper tries to describe overall view of ITIL framework and address major reasons on the failure of this framework’s implementation in the organizations", "title": "" } ]
[ { "docid": "e8cf458c60dc7b4a8f71df2fabf1558d", "text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.", "title": "" }, { "docid": "8b11c5c6b134576d8ce7ce3484e17822", "text": "The popularity and complexity of online social networks (OSNs) continues to grow unabatedly with the most popular applications featuring hundreds of millions of active users. Ranging from social communities and discussion groups, to recommendation engines, tagging systems, mobile social networks, games, and virtual worlds, OSN applications have not only shifted the focus of application developers to the human factor, but have also transformed traditional application paradigms such as the way users communicate and navigate in the Internet. Indeed, understanding user behavior is now an integral part of online services and applications, with system and algorithm design becoming in effect user-centric. As expected, this paradigm shift has not left the research community unaffected, triggering intense research interest in the analysis of the structure and properties of online communities.", "title": "" }, { "docid": "ad88d2e2213624270328be0aa019b5cd", "text": "The traditional decision-making framework for newsvendor models is to assume a distribution of the underlying demand. However, the resulting optimal policy is typically sensitive to the choice of the distribution. A more conservative approach is to assume that the distribution belongs to a set parameterized by a few known moments. An ambiguity-averse newsvendor would choose to maximize the worst-case profit. Most models of this type assume that only the mean and the variance are known, but do not attempt to include asymmetry properties of the distribution. Other recent models address asymmetry by including skewness and kurtosis. However, closed-form expressions on the optimal bounds are difficult to find for such models. In this paper, we propose a framework under which the expectation of a piecewise linear objective function is optimized over a set of distributions with known asymmetry properties. This asymmetry is represented by the first two moments of multiple random variables that result from partitioning the original distribution. In the simplest case, this reduces to semivariance. The optimal bounds can be solved through a second-order cone programming (SOCP) problem. This framework can be applied to the risk-averse and risk-neutral newsvendor problems and option pricing. 
We provide a closed-form expression for the worst-case newsvendor profit with only mean, variance and semivariance information.", "title": "" }, { "docid": "1c617d1d7e5a37b1655ea63f80f8f2cd", "text": "Possible cancer hazards from pesticide residues in food have been much discussed and hotly debated in the scientific literature, the popular press, the political arena, and the courts. Consumer opinion surveys indicate that much of the U.S. public believes that pesticide residues in food are a serious cancer hazard (Opinion Research Corporation, 1990). In contrast, epidemiologic studies indicate that the major preventable risk factors for cancer are smoking, dietary imbalances, endogenous hormones, and inflammation (e.g., from chronic infections). Other important factors include intense sun exposure, lack of physical activity, and excess alcohol consumption (Ames et al., 1995). The types of cancer deaths that have decreased since 1950 are primarily stomach, cervical, uterine, and colorectal. Overall cancer death rates in the United States (excluding lung cancer) have declined 19% since 1950 (Ries et al., 2000). The types that have increased are primarily lung cancer [87% is due to smoking, as are 31% of all cancer deaths in the United States (American Cancer Society, 2000)], melanoma (probably due to sunburns), and non-Hodgkin’s lymphoma. If lung cancer is included, mortality rates have increased over time, but recently have declined (Ries et al., 2000). Thus, epidemiological studies do not support the idea that synthetic pesticide residues are important for human cancer. Although some epidemiologic studies find an association between cancer and low levels of some industrial pollutants, the studies often have weak or inconsistent results, rely on ecological correlations or indirect exposure assessments, use small sample sizes, and do not control for confounding factors such as composition of the diet, which is a potentially important confounding factor. Outside the workplace, the levels of exposure to synthetic pollutants or pesticide residues are low and rarely seem toxicologically plausible as a causal factor when compared to the wide variety of naturally occurring chemicals to which all people are exposed (Ames et al., 1987, 1990a; Gold et al., 1992). Whereas public perceptions tend to identify chemicals as being only synthetic and only synthetic chemicals as being toxic, every natural chemical is also toxic at some dose, and the vast proportion of chemicals to which humans are exposed are naturally occurring (see Section 38.2). There is, however, a paradox in the public concern about possible cancer hazards from pesticide residues in food and the lack of public understanding of the substantial evidence indicating that high consumption of the foods that contain pesticide residues—fruits and vegetables—has a protective effect against many types of cancer. A review of about 200 epidemiological studies reported a consistent association between low consumption of fruits and vegetables and cancer incidence at many target sites (Block et al., 1992; Hill et al., 1994; Steinmetz and Potter, 1991). The quarter of the population with the lowest dietary intake of fruits and vegetables has roughly twice the cancer rate for many types of cancer (lung, larynx, oral cavity, esophagus, stomach, colon and rectum, bladder, pancreas, cervix, and ovary) compared to the quarter with the highest consumption of those foods. 
The protective effect of consuming fruits and vegetables is weaker and less consistent for hormonally related cancers, such as breast and prostate. Studies suggest that inadequate intake of many micronutrients in these foods may be radiation mimics and are important in the carcinogenic effect (Ames, 2001). Despite the substantial evidence of the importance of fruits and vegetables in prevention, half the American", "title": "" }, { "docid": "8182fe419366744a774ff637c8ace5dd", "text": "The most useful environments for advancing research and development in video databases are those that provide complete video database management, including (1) video preprocessing for content representation and indexing, (2) storage management for video, metadata and indices, (3) image and semantic -based query processing, (4) realtime buffer management, and (5) continuous media streaming. Such environments support the entire process of investigating, implementing, analyzing and evaluating new techniques, thus identifying in a concrete way which techniques are truly practical and robust. In this paper we present a video database research initiative that culminated in the successful development of VDBMS, a video database research platform that supports comprehensive and efficient database management for digital video. We describe key video processing components of the system and illustrate the value of VDBMS as a research platform by describing several research projects carried out within the VDBMS environment. These include MPEG7 document support for video feature import and export, a new query operator for optimal multi-feature image similarity matching, secure access control for streaming video, and the mining of medical video data using hierarchical content organization.", "title": "" }, { "docid": "6d096dc86d240370bef7cc4e4cdd12e5", "text": "Modern software systems are subject to uncertainties, such as dynamics in the availability of resources or changes of system goals. Self-adaptation enables a system to reason about runtime models to adapt itself and realises its goals under uncertainties. Our focus is on providing guarantees for adaption goals. A prominent approach to provide such guarantees is automated verification of a stochastic model that encodes up-to-date knowledge of the system and relevant qualities. The verification results allow selecting an adaption option that satisfies the goals. There are two issues with this state of the art approach: i) changing goals at runtime (a challenging type of uncertainty) is difficult, and ii) exhaustive verification suffers from the state space explosion problem. In this paper, we propose a novel modular approach for decision making in self-adaptive systems that combines distinct models for each relevant quality with runtime simulation of the models. Distinct models support on the fly changes of goals. Simulation enables efficient decision making to select an adaptation option that satisfies the system goals. The tradeoff is that simulation results can only provide guarantees with a certain level of accuracy. We demonstrate the benefits and tradeoffs of the approach for a service-based telecare system.", "title": "" }, { "docid": "c47f0c67147705e91ccf24250c2ec2de", "text": "Here, we have strategically synthesized stable gold (AuNPsTyr, AuNPsTrp) and silver (AgNPsTyr) nanoparticles which are surface functionalized with either tyrosine or tryptophan residues and have examined their potential to inhibit amyloid aggregation of insulin. 
Inhibition of both spontaneous and seed-induced aggregation of insulin was observed in the presence of AuNPsTyr, AgNPsTyr, and AuNPsTrp nanoparticles. These nanoparticles also triggered the disassembly of insulin amyloid fibrils. Surface functionalization of amino acids appears to be important for the inhibition effect since isolated tryptophan and tyrosine molecules did not prevent insulin aggregation. Bioinformatics analysis predicts involvement of tyrosine in H-bonding interactions mediated by its C=O, –NH2, and aromatic moiety. These results offer significant opportunities for developing nanoparticle-based therapeutics against diseases related to protein aggregation.", "title": "" }, { "docid": "5ce6a32347b868c97f0b0ad4027c9517", "text": "Purpose: We review augmented (AR) and virtual reality (VR) applications in radiotherapy as found in the scientific literature and highlight future developments enabled by the use of small mass-produced devices and portability of techniques developed in other fields to radiotherapy. Analysis: The application of AR and VR within radiotherapy is still in its infancy, with the notable exception of training and teaching applications. The relatively high cost of equipment needed to generate a realistic 3D effect seems one factor that has slowed down its use, but also the sheer amount of image data is relatively recent, were radiotherapy professionals are only beginning to explore how to use this to its full potential. This increased availability of 3D data in radiotherapy will drive the application of AR and VR in radiotherapy to efficiently recognise and extract key features in the data to act on in clinical decision making. Conclusion: The development of small mass-produced tablet devices coming on the market will allow the user to interact with computer-generated information more easily, facilitating the application of AR and VR. The increased connectivity enabling virtual presence of remote multidisciplinary team meetings heralds significant changes to how radiotherapy professionals will work, to the benefit of our patients.", "title": "" }, { "docid": "40577d34e714b9b15eabcea5fd5dabdc", "text": "This paper considers the problem of grasp pose detection in point clouds. We follow a general algorithmic structure that first generates a large set of 6-DOF grasp candidates and then classifies each of them as a good or a bad grasp. Our focus in this paper is on improving the second step by using depth sensor scans from large online datasets to train a convolutional neural network. We propose two new representations of grasp candidates, and we quantify the effect of using prior knowledge of two forms: instance or category knowledge of the object to be grasped, and pretraining the network on simulated depth data obtained from idealized CAD models. Our analysis shows that a more informative grasp candidate representation as well as pretraining and prior knowledge significantly improve grasp detection. We evaluate our approach on a Baxter Research Robot and demonstrate an average grasp success rate of 93% in dense clutter. This is a 20% improvement compared to our prior work.", "title": "" }, { "docid": "f8b487342f4eaa4931f4a65cbc420b89", "text": "Existing sequence prediction methods are mostly concerned with time-independent sequences, in which the actual time span between events is irrelevant and the distance between events is simply the difference between their order positions in the sequence. 
While this time-independent view of sequences is applicable for data such as natural languages, e.g., dealing with words in a sentence, it is inappropriate and inefficient for many real world events that are observed and collected at unequally spaced points of time as they naturally arise, e.g., when a person goes to a grocery store or makes a phone call. The time span between events can carry important information about the sequence dependence of human behaviors. In this work, we propose a set of methods for using time in sequence prediction. Because neural sequence models such as RNN are more amenable for handling token-like input, we propose two methods for time-dependent event representation, based on the intuition on how time is tokenized in everyday life and previous work on embedding contextualization. We also introduce two methods for using next event duration as regularization for training a sequence prediction model. We discuss these methods based on recurrent neural nets. We evaluate these methods as well as baseline models on five datasets that resemble a variety of sequence prediction tasks. The experiments revealed that the proposed methods offer accuracy gain over baseline models in a range of settings.", "title": "" }, { "docid": "b6f04270b265cd5a0bb7d0f9542168fb", "text": "This paper presents design and manufacturing procedure of a tele-operative rescue robot. First, the general task to be performed by such a robot is defined, and variant kinematic mechanisms to form the basic structure of the robot are discussed. Choosing an appropriate mechanism, geometric dimensions, and mass properties are detailed to develop a dynamics model for the system. Next, the strength of each component is analyzed to finalize its shape. To complete the design procedure, Patran/Nastran was used to apply the finite element method for strength analysis of complicated parts. Also, ADAMS was used to model the mechanisms, where 3D sketch of each component of the robot was generated by means of Solidworks, and several sets of equations governing the dimensions of system were solved using Matlab. Finally, the components are fabricated and assembled together with controlling hardware. Two main processors are used within the control system of the robot. The operator's PC as the master processor and the laptop installed on the robot as the slave processor. The performance of the system was demonstrated in Rescue robot league of RoboCup 2005 in Osaka (Japan) and achieved the 2nd best design award", "title": "" }, { "docid": "d0c004d3d7deb9ba11460e14b0940245", "text": "The ocular artifacts that contaminate the EEG derive from the potential difference between the cornea and the fundus of the eye. This corneofundal or corneoretinal potential can be considered as an equivalent dipole with its positive pole directed toward the cornea. The cornea shows a steady DC potential of approximately +13 mV relative to the forehead. Blink potentials are caused by the eyelids sliding down over the positively charged cornea. The artifacts from eye-movements result from changes in orientation of the corneo-fundal potential. The scalp-distribution of the ocular artifacts can be described in terms of propagation factors — the fraction of the EOG signal at periocular electrodes that is recorded at a particular scalp location. These factors vary with the location of the scalp electrode. 
Propagation factors for blinks and upward eye-movements are significantly different.", "title": "" }, { "docid": "ee69add1b4fff872daf061983359d847", "text": "This paper proposes a design solution for static eccentricity (SE) in axial flux resolvers (AFRs). There are two definitions for SE in AFRs. The first approach is based on the definition of SE in conventional radial flux resolvers, wherein the rotor axis does not coincide with the stator bore, but the rotor rotates around its own shaft. In other words, it is different in the angular inclination of the rotor and stator axis. Thus, the air gap of the motor is not uniform. In the second approach, which is more common in AFRs, when SE occurs, air-gap length remains uniform. Rotor axis remains parallel with the stator axis, although it does not coincide with the stator bore. This means that there is radial misalignment of the rotor and stator axis. In this paper, both definitions are considered, and an innovative design solution is proposed to decrease the effect of both types of SE in the accuracy of detected position. 3-D time stepping finite-element analysis is used to show the success of proposed designs. Finally, both models were evaluated using experimental results.", "title": "" }, { "docid": "6d44c4244064634deda30a5059acd87e", "text": "Currently, gene sequence genealogies of the Oligotrichea Bütschli, 1889 comprise only few species. Therefore, a cladistic approach, especially to the Oligotrichida, was made, applying Hennig's method and computer programs. Twenty-three characters were selected and discussed, i.e., the morphology of the oral apparatus (five characters), the somatic ciliature (eight characters), special organelles (four characters), and ontogenetic particulars (six characters). Nine of these characters developed convergently twice. Although several new features were included into the analyses, the cladograms match other morphological trees in the monophyly of the Oligotrichea, Halteriia, Oligotrichia, Oligotrichida, and Choreotrichida. The main synapomorphies of the Oligotrichea are the enantiotropic division mode and the de novo-origin of the undulating membranes. Although the sister group relationship of the Halteriia and the Oligotrichia contradicts results obtained by gene sequence analyses, no morphologic, ontogenetic or ultrastructural features were found, which support a branching of Halteria grandinella within the Stichotrichida. The cladistic approaches suggest paraphyly of the family Strombidiidae probably due to the scarce knowledge. A revised classification of the Oligotrichea is suggested, including all sufficiently known families and genera.", "title": "" }, { "docid": "0d8075b26c8e8554ec8eec5f41a73c23", "text": "As robots are going to spread in human society, the study of their appearance becomes a critical matter when assessing robots performance and appropriateness for an application and for the employment in different countries, with different background cultures and religions. Robot appearance categories are commonly divided in anthropomorphic, zoomorphic and functional. In this paper, we offer a theoretical contribution by introducing a new category, called `theomorphic robots', in which robots carry the shape and the identity of a supernatural creature or object within a religion. 
Discussing the theory of dehumanisation and the different categories of supernatural among different religions, we hypothesise the possible advantages of the theomorphic design for different applications.", "title": "" }, { "docid": "07e93064b1971a32b5c85b251f207348", "text": "With the growing demand on automotive electronics for the advanced driver assistance systems and autonomous driving, the functional safety becomes one of the most important issues in the hardware development. Thus, the safety standard for automotive E/E system, ISO-26262, becomes state-of-the-art guideline to ensure that the required safety level can be achieved. In this study, we base on ISO-26262 to develop a FMEDA-based fault injection and data analysis framework. The main contribution of this study is to effectively reduce the effort for generating FMEDA report which is used to evaluate hardware's safety level based on ISO-26262 standard.", "title": "" }, { "docid": "1c603902fb684005869d19be91970dd4", "text": "Topic: A study to assess the knowledge of Cardiac Nurses about commonly administered drugs in Cardiac Surgical ICU. Nurses are responsible for preparing and administering potent drugs that affects the patient's cardiovascular functions. Nurses should be competent enough in medicine administration to prevent medication errors. Each nurse should be aware of indication, action, contraindications, adverse reactions and interactions of drugs. OBJECTIVES: -1. To identify knowledge about commonly administered drugs in Cardiac Surgical ICU among Cardiac Nurses. 2. To identify the relationship between knowledge level about commonly administered drugs in Cardiac Surgical ICU and selected variables. METHODS: -Pilot study was done in 5 cardiac speciality nursing students, then 25 cardiac nurses were selected randomly from the CSICU including permanent @ temporary registered nurses for the study; Convenient sampling technique was used for selecting the sample. Total period of study was from August 2011 to October 2011. A self-administered questionnaire was used in the form of multiple choices. RESULTS: Study shows that 3% of the sample had poor knowledge, 23% had average knowledge, 57% had fair knowledge and 17% had good knowledge about commonly administered drugs in CSICU. There was no statistically significant difference when comparing the mean knowledge score with age, professional qualification, year of experience and CPCR training programme attended. There was statistically significant higher knowledge score in nurses with increase in ICU experience. CONCLUSION: -Majority of cardiac nurses have above average knowledge about commonly administered drugs in CSICU.", "title": "" }, { "docid": "0869a75f158b04513c848bc7bfb10e37", "text": "Tracking of multiple objects is an important application in AI City geared towards solving salient problems related to safety and congestion in an urban environment. Frequent occlusion in traffic surveillance has been a major problem in this research field. In this challenge, we propose a model-based vehicle localization method, which builds a kernel at each patch of the 3D deformable vehicle model and associates them with constraints in 3D space. The proposed method utilizes shape fitness evaluation besides color information to track vehicle objects robustly and efficiently. To build 3D car models in a fully unsupervised manner, we also implement evolutionary camera self-calibration from tracking of walking humans to automatically compute camera parameters. 
Additionally, the segmented foreground masks which are crucial to 3D modeling and camera self-calibration are adaptively refined by multiple-kernel feedback from tracking. For object detection/ classification, the state-of-theart single shot multibox detector (SSD) is adopted to train and test on the NVIDIA AI City Dataset. To improve the accuracy on categories with only few objects, like bus, bicycle and motorcycle, we also employ the pretrained model from YOLO9000 with multiscale testing. We combine the results from SSD and YOLO9000 based on ensemble learning. Experiments show that our proposed tracking system outperforms both state-of-the-art of tracking by segmentation and tracking by detection. Keywords—multiple object tracking, constrained multiple kernels, 3D deformable model, camera self-calibration, adaptive segmentation, object detection, object classification", "title": "" }, { "docid": "1c177a7fdbd15e04a6b122a284a9014a", "text": "Malicious software installed on infected computers is a fundamental component of online crime. Malware development thus plays an essential role in the underground economy of cyber-crime. Malware authors regularly update their software to defeat defenses or to support new or improved criminal business models. A large body of research has focused on detecting malware, defending against it and identifying its functionality. In addition to these goals, however, the analysis of malware can provide a glimpse into the software development industry that develops malicious code.\n In this work, we present techniques to observe the evolution of a malware family over time. First, we develop techniques to compare versions of malicious code and quantify their differences. Furthermore, we use behavior observed from dynamic analysis to assign semantics to binary code and to identify functional components within a malware binary. By combining these techniques, we are able to monitor the evolution of a malware's functional components. We implement these techniques in a system we call Beagle, and apply it to the observation of 16 malware strains over several months. The results of these experiments provide insight into the effort involved in updating malware code, and show that Beagle can identify changes to individual malware components.", "title": "" } ]
scidocsrr
8c2d04404d828edaeeb8eee42312bf41
Authentication Protocols for Internet of Things: A Comprehensive Survey
[ { "docid": "4eaa8c1af7a4f6f6c9de1e6de3f2495f", "text": "Technologies to support the Internet of Things are becoming more important as the need to better understand our environments and make them smart increases. As a result it is predicted that intelligent devices and networks, such as WSNs, will not be isolated, but connected and integrated, composing computer networks. So far, the IP-based Internet is the largest network in the world; therefore, there are great strides to connect WSNs with the Internet. To this end, the IETF has developed a suite of protocols and open standards for accessing applications and services for wireless resource constrained networks. However, many open challenges remain, mostly due to the complex deployment characteristics of such systems and the stringent requirements imposed by various services wishing to make use of such complex systems. Thus, it becomes critically important to study how the current approaches to standardization in this area can be improved, and at the same time better understand the opportunities for the research community to contribute to the IoT field. To this end, this article presents an overview of current standards and research activities in both industry and academia.", "title": "" }, { "docid": "955882547c8d7d455f3d0a6c2bccd2b4", "text": "Recently there has been quite a number of independent research activities that investigate the potentialities of integrating social networking concepts into Internet of Things (IoT) solutions. The resulting paradigm, named Social Internet of Things (SIoT), has the potential to support novel applications and networking services for the IoT in more effective and efficient ways. In this context, the main contributions of this paper are the following: i) we identify appropriate policies for the establishment and the management of social relationships between objects in such a way that the resulting social network is navigable; ii) we describe a possible architecture for the IoT that includes the functionalities required to integrate things into a social network; iii) we analyze the characteristics of the SIoT network structure by means of simulations.", "title": "" } ]
[ { "docid": "6c12755ba2580d5d9b794b9a33c0304a", "text": "A fundamental part of conducting cross-disciplinary web science research is having useful, high-quality datasets that provide value to studies across disciplines. In this paper, we introduce a large, hand-coded corpus of online harassment data. A team of researchers collaboratively developed a codebook using grounded theory and labeled 35,000 tweets. Our resulting dataset has roughly 15% positive harassment examples and 85% negative examples. This data is useful for training machine learning models, identifying textual and linguistic features of online harassment, and for studying the nature of harassing comments and the culture of trolling.", "title": "" }, { "docid": "6eb8e1a391398788d9b4be294b8a70d1", "text": "To improve software quality, researchers and practitioners have proposed static analysis tools for various purposes (e.g., detecting bugs, anomalies, and vulnerabilities). Although many such tools are powerful, they typically need complete programs where all the code names (e.g., class names, method names) are resolved. In many scenarios, researchers have to analyze partial programs in bug fixes (the revised source files can be viewed as a partial program), tutorials, and code search results. As a partial program is a subset of a complete program, many code names in partial programs are unknown. As a result, despite their syntactical correctness, existing complete-code tools cannot analyze partial programs, and existing partial-code tools are limited in both their number and analysis capability. Instead of proposing another tool for analyzing partial programs, we propose a general approach, called GRAPA, that boosts existing tools for complete programs to analyze partial programs. Our major insight is that after unknown code names are resolved, tools for complete programs can analyze partial programs with minor modifications. In particular, GRAPA locates Java archive files to resolve unknown code names, and resolves the remaining unknown code names from resolved code names. To illustrate GRAPA, we implement a tool that leverages the state-of-the-art tool, WALA, to analyze Java partial programs. We thus implemented the first tool that is able to build system dependency graphs for partial programs, complementing existing tools. We conduct an evaluation on 8,198 partial-code commits from four popular open source projects. Our results show that GRAPA fully resolved unknown code names for 98.5% bug fixes, with an accuracy of 96.1% in total. Furthermore, our results show the significance of GRAPA's internal techniques, which provides insights on how to integrate with more complete-code tools to analyze partial programs.", "title": "" }, { "docid": "455a71e5358d03d5d4f3e7634db85eb2", "text": "Part of Speech (POS) Tagging can be applied by several tools and several programming languages. This work focuses on the Natural Language Toolkit (NLTK) library in the Python environment and the gold standard corpora installable. The corpora and tagging methods are analyzed and compared by using the Python language. Different taggers are analyzed according to their tagging accuracies with data from three different corpora. In this study, we have analyzed Brown, Penn Treebank and NPS Chat corpuses. The taggers we have used for the analysis are; default tagger, regex tagger, n-gram taggers. 
We have applied all taggers to these three corpuses, resultantly we have shown that whereas Unigram tagger does the best tagging in all corpora, the combination of taggers does better if it is correctly ordered. Additionally, we have seen that NPS Chat Corpus gives different accuracy results than the other two corpuses.", "title": "" }, { "docid": "d62469c5c49269cb7eb1dc379a674c4f", "text": "Augmented Reality (AR) and Mobile Augmented Reality (MAR) applications have gained much research and industry attention these days. The mobile nature of MAR applications limits users’ interaction capabilities such as inputs, and haptic feedbacks. This survey reviews current research issues in the area of human computer interaction for MAR and haptic devices. The survey first presents human sensing capabilities and their applicability in AR applications. We classify haptic devices into two groups according to the triggered sense: cutaneous/tactile: touch, active surfaces, and mid-air; kinesthetic: manipulandum, grasp, and exoskeleton. Due to the mobile capabilities of MAR applications, we mainly focus our study on wearable haptic devices for each category and their AR possibilities. To conclude, we discuss the future paths that haptic feedbacks should follow for MAR applications and their challenges.", "title": "" }, { "docid": "c8c82af8fc9ca5e0adac5b8b6a14031d", "text": "PURPOSE\nTo systematically review the results of arthroscopic transtibial pullout repair (ATPR) for posterior medial meniscus root tears.\n\n\nMETHODS\nA systematic electronic search of the PubMed database and the Cochrane Library was performed in September 2014 to identify studies that reported clinical, radiographic, or second-look arthroscopic outcomes of ATPR for posterior medial meniscus root tears. Included studies were abstracted regarding study characteristics, patient demographic characteristics, surgical technique, rehabilitation, and outcome measures. The methodologic quality of the included studies was assessed with the modified Coleman Methodology Score.\n\n\nRESULTS\nSeven studies with a total of 172 patients met the inclusion criteria. The mean patient age was 55.3 years, and 83% of patients were female patients. Preoperative and postoperative Lysholm scores were reported for all patients. After a mean follow-up period of 30.2 months, the Lysholm score increased from 52.4 preoperatively to 85.9 postoperatively. On conventional radiographs, 64 of 76 patients (84%) showed no progression of Kellgren-Lawrence grading. Magnetic resonance imaging showed no progression of cartilage degeneration in 84 of 103 patients (82%) and showed reduced medial meniscal extrusion in 34 of 61 patients (56%). On the basis of second-look arthroscopy and magnetic resonance imaging in 137 patients, the healing status was rated as complete in 62%, partial in 34%, and failed in 3%. Overall, the methodologic quality of the included studies was fair, with a mean modified Coleman Methodology Score of 63.\n\n\nCONCLUSIONS\nATPR significantly improves functional outcome scores and seems to prevent the progression of osteoarthritis in most patients, at least during a short-term follow-up. Complete healing of the repaired root and reduction of meniscal extrusion seem to be less predictable, being observed in only about 60% of patients. 
Conclusions about the progression of osteoarthritis and reduction of meniscal extrusion are limited by the small portion of patients undergoing specific evaluation (44% and 35% of the study group, respectively).\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of Level III and IV studies.", "title": "" }, { "docid": "7c611108aa760808e6558b86394a5318", "text": "Single-cell RNA sequencing (scRNA-seq) is a fast growing approach to measure the genome-wide transcriptome of many individual cells in parallel, but results in noisy data with many dropout events. Existing methods to learn molecular signatures from bulk transcriptomic data may therefore not be adapted to scRNA-seq data, in order to automatically classify individual cells into predefined classes. We propose a new method called DropLasso to learn a molecular signature from scRNA-seq data. DropLasso extends the dropout regularisation technique, popular in neural network training, to estimate sparse linear models. It is well adapted to data corrupted by dropout noise, such as scRNA-seq data, and we clarify how it relates to elastic net regularisation. We provide promising results on simulated and real scRNA-seq data, suggesting that DropLasso may be better adapted than standard regularisations to infer molecular signatures from scRNA-seq data. DropLasso is freely available as an R package at https://github.com/jpvert/droplasso", "title": "" }, { "docid": "acc526dd0d86c5bf83034b3cd4c1ea38", "text": "We describe a learning-based approach to handeye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.", "title": "" }, { "docid": "da287113f7cdcb8abb709f1611c8d457", "text": "The paper describes a completely new topology for a low-speed, high-torque permanent brushless magnet machine. Despite being naturally air-cooled, it has a significantly higher torque density than a liquid-cooled transverse-flux machine, whilst its power factor is similar to that of a conventional permanent magnet brushless machine. The high torque capability and low loss density are achieved by combining the actions of a speed reducing magnetic gear and a high speed PM brushless machine within a highly integrated magnetic circuit. In this way, the magnetic limit of the machine is reached before its thermal limit. The principle of operation of such a dasiapseudopsila direct-drive machine is described, and measured results from a prototype machine are presented.", "title": "" }, { "docid": "27fa3f76bd1e097afd389582ee929837", "text": "Prevalence of morbid obesity is rising. 
Along with it, the adipose associated co-morbidities increase - included panniculus morbidus, the end stage of obesity of the abdominal wall. In the course of time panniculus often develop a herniation of bowel. An incarcerated hernia and acute exacerbation of a chronic inflammation of the panniculus must be treated immediately and presents a surgical challenge. The resection of such massive abdominal panniculus presents several technical problems to the surgeon. Preparation of long standing or fixed hernias may require demanding adhesiolysis. The wound created is huge and difficult to manage, and accompanied by considerable complications at the outset. We provide a comprehensive overview of a possible approach for panniculectomy and hernia repair and overlook of the existing literature.", "title": "" }, { "docid": "b24babd50bd6c7592e272f387e89953a", "text": "Distant-supervised relation extraction inevitably suffers from wrong labeling problems because it heuristically labels relational facts with knowledge bases. Previous sentence level denoise models don’t achieve satisfying performances because they use hard labels which are determined by distant supervision and immutable during training. To this end, we introduce an entity-pair level denoise method which exploits semantic information from correctly labeled entity pairs to correct wrong labels dynamically during training. We propose a joint score function which combines the relational scores based on the entity-pair representation and the confidence of the hard label to obtain a new label, namely a soft label, for certain entity pair. During training, soft labels instead of hard labels serve as gold labels. Experiments on the benchmark dataset show that our method dramatically reduces noisy instances and outperforms the state-of-the-art systems.", "title": "" }, { "docid": "45c13af41bc3d1b5ba5ea678f9b2eb6f", "text": "A new type of mobile robots with the inch worm mechanism is presented in this paper for inspecting pipelines from the outside of pipe surfaces under hostile environments. This robot, Mark 111, is made after the successful investigation of the prototypes, Mark I and 11, which can pass over obstacles on pipelines, such as flanges and T-joints and others. Newly developed robot, Mark 111, can move vertically along the pipeline and move to the adjacent pipeline for the inspection. The sensors, infra ray proximity sensor and ultra sonic sensors and others, are installed to detect these obstacles and can move autonomously controlled by the microprocessor. The control method of this robot can be carried out by the dual control mode proposed in this paper.", "title": "" }, { "docid": "2b4caf3ecdcd78ac57d8acd5788084d2", "text": "In the age of information network explosion, Along with the popularity of the Internet, users can link to all kinds of social networking sites anytime and anywhere to interact and discuss with others. This phenomenon indicates that social networking sites have become a platform for interactions between companies and customers so far. Therefore, with the above through social science and technology development trend arising from current social phenomenon, research of this paper, mainly expectations for analysis by the information of interaction between people on the social network, such as: user clicked fans pages, user's graffiti wall message information, friend clicked fans pages etc. 
Three kinds of personal information for personal preference analysis, and from this huge amount of personal data to find out corresponding diverse group for personal preference category. We can by personal preference information for diversify personal advertising, product recommendation and other services. The paper at last through the actual business verification, the research can improve website browsing pages growth 11%, time on site growth 15%, site bounce rate dropped 13.8%, product click through rate growth 43%, more fully represents the results of this research fit the use's preference.", "title": "" }, { "docid": "0f39f88747145f730731bc8dd108b3ac", "text": "To cope with increasing amount of cyber threats, organizations need to share cybersecurity information beyond the borders of organizations, countries, and even languages. Assorted organizations built repositories that store and provide XML-based cybersecurity information on the Internet. Among them are NVD [1], OSVDB [2], and JVN [3], and more cybersecurity information from various organizations from various countries will be available in the Internet. However, users are unaware of all of them. To advance information sharing, users need to be aware of them and be capable of identifying and locating cybersecurity information across such repositories by the parties who need that, and then obtaining the information over networks. This paper proposes a discovery mechanism, which identifies and locates sources and types of cybersecurity information and exchanges the information over networks. The mechanism uses the ontology of cybersecurity information [4] to incorporate assorted format of such information so that it can maintain future extensibility. It generates RDF-based metadata from XML-based cybersecurity information through the use of XSLT. This paper also introduces an implementation of the proposed mechanism and discusses extensibility and usability of the proposed mechanism.", "title": "" }, { "docid": "12f79714c374fd7eb90e6a26af1ecbc1", "text": "To contribute to a better understanding of L2 sentence processing, the present study examines how second language (L2) learners parse temporary ambiguous sentences containing relative clauses. Results are reported from both off-line and on-line experiments with three groups of advanced learners of Greek, with Spanish, German or Russian as native language (L1), as well as results from corresponding experiments with a control group of adult native speakers of Greek. We found that despite their native-like mastery of the construction under investigation, the L2 learners showed different relative clause attachment preferences than the native speakers. Moreover, the L2 learners did not exhibit L1-based preferences in L2 Greek, as might be expected if they were directly influenced by attachment preferences from their native language. We suggest that L2 learners integrate information relevant for parsing differently from native speakers, with the L2 learners relying more on lexical cues than the native speakers and less on purely structurally-based parsing strategies. L1 and L2 Sentence Processing 3", "title": "" }, { "docid": "815e0ad06fdc450aa9ba3f56ab19ab05", "text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. 
Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.", "title": "" }, { "docid": "cd9382ca95b7584695a5ca41f436209f", "text": "This paper presents a novel method for automated extraction of road markings directly from three dimensional (3-D) point clouds acquired by a mobile light detection and ranging (LiDAR) system. First, road surface points are segmented from a raw point cloud using a curb-based approach. Then, road markings are directly extracted from road surface points through multisegment thresholding and spatial density filtering. Finally, seven specific types of road markings are further accurately delineated through a combination of Euclidean distance clustering, voxel-based normalized cut segmentation, large-size marking classification based on trajectory and curb-lines, and small-size marking classification based on deep learning, and principal component analysis (PCA). Quantitative evaluations indicate that the proposed method achieves an average completeness, correctness, and F-measure of 0.93, 0.92, and 0.93, respectively. Comparative studies also demonstrate that the proposed method achieves better performance and accuracy than those of the two existing methods.", "title": "" }, { "docid": "0d1193978e4f8be0b78c6184d7ece3fe", "text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. 
In the first approach , network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classi-fiers that, when presented with a network as an input, classify its category or class as an output To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …", "title": "" }, { "docid": "f519e878b3aae2f0024978489db77425", "text": "In this paper, we propose a new halftoning scheme that preserves the structure and tone similarities of images while maintaining the simplicity of Floyd-Steinberg error diffusion. Our algorithm is based on the Floyd-Steinberg error diffusion algorithm, but the threshold modulation part is modified to improve the over-blurring issue of the Floyd-Steinberg error diffusion algorithm. By adding some structural information on images obtained using the Laplacian operator to the quantizer thresholds, the structural details in the textured region can be preserved. The visual artifacts of the original error diffusion that is usually visible in the uniform region is greatly reduced by adding noise to the thresholds. This is especially true for the low contrast region because most existing error diffusion algorithms cannot preserve structural details but our algorithm preserves them clearly using threshold modulation. Our algorithm has been evaluated using various types of images including some with the low contrast region and assessed numerically using the MSSIM measure with other existing state-of-art halftoning algorithms. The results show that our method performs better than existing approaches both in the textured region and in the uniform region with the faster computation speed.", "title": "" }, { "docid": "7cbaa0e8549373e3106ab01d5e3b9e71", "text": "A compact asymmetric orthomode transducer (OMT) with high isolation between the vertical and horizontal Ports is developed for the X-band synthetic aperture radar application. The basic idea of the design is to deploy the combined E- and H-plane bends within the common arm. Moreover, an offset between each polarization axis is introduced to enhance the isolation and decrease the size to be around one-third of most of the existing asymmetric OMTs. The OMT achieves better than 22.5-dB matching level and 65-dB isolation level between the two modes. Good agreement is obtained between measurements and full-wave simulations.", "title": "" } ]
scidocsrr
b38823867dfccc34ea52b6018507bd2f
Anatomy of the Third-Party Web Tracking Ecosystem
[ { "docid": "103d6713dd613bfe5a768c60d349bb4a", "text": "Mobile phones and tablets can be considered as the first incarnation of the post-PC era. Their explosive adoption rate has been driven by a number of factors, with the most signifcant influence being applications (apps) and app markets. Individuals and organizations are able to develop and publish apps, and the most popular form of monetization is mobile advertising.\n The mobile advertisement (ad) ecosystem has been the target of prior research, but these works typically focused on a small set of apps or are from a user privacy perspective. In this work we make use of a unique, anonymized data set corresponding to one day of traffic for a major European mobile carrier with more than 3 million subscribers. We further take a principled approach to characterize mobile ad traffic along a number of dimensions, such as overall traffic, frequency, as well as possible implications in terms of energy on a mobile device.\n Our analysis demonstrates a number of inefficiencies in today's ad delivery. We discuss the benefits of well-known techniques, such as pre-fetching and caching, to limit the energy and network signalling overhead caused by current systems. A prototype implementation on Android devices demonstrates an improvement of 50 % in terms of energy consumption for offline ad-sponsored apps while limiting the amount of ad related traffic.", "title": "" } ]
[ { "docid": "68d6d818596518114dc829bb9ecc570f", "text": "Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learningfocused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges.", "title": "" }, { "docid": "74e44b88e3bb92b1319a0a08afcc2ae7", "text": "Discriminative learning of the parameters in the naive Bayes model is known to be equivalent to a logistic regression problem. Here we show that the same fact holds for much more general Bayesian network models, as long as the corresponding network structure satisfies a certain graph-theoretic property. The property holds for naive Bayes but also for more complex structures such as tree-augmented naive Bayes (TAN) as well as for mixed diagnostic-discriminative structures. Our results imply that for networks satisfying our property, the conditional likelihood cannot have local maxima so that the global maximum can be found by simple local optimization methods. We also show that if this property does not hold, then in general the conditional likelihood can have local, non-global maxima. We illustrate our theoretical results by empirical experiments with local optimization in a conditional naive Bayes model. Furthermore, we provide a heuristic strategy for pruning the number of parameters and relevant features in such models. For many data sets, we obtain good results with heavily pruned submodels containing many fewer parameters than the original naive Bayes model.", "title": "" }, { "docid": "faa8bb95a4b05bed78dbdfaec1cd147c", "text": "This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix in the classical cosine similarity between bag-of-words, so as to get a softcosine that takes into account relations between words. According to the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. Our system ranked first among the official submissions of subtask B.", "title": "" }, { "docid": "9af5406d0148eea660ae4d838c0beb38", "text": "We give a detailed description of the embedding phase of the Hopcroft and Tarjan planarity testing algorithm. The embedding phase runs in linear time. An implementation based on this paper can be found in [MMN].", "title": "" }, { "docid": "2c2bdd7dad5f939e5fa27d925b741efd", "text": "We describe a new approach that improves the training of generative adversarial nets (GANs) for synthesizing diverse images from a text input. Our approach is based on the conditional version of GANs and expands on previous work leveraging an auxiliary task in the discriminator. Our generated images are not limited to certain classes and do not suffer from mode collapse while semantically matching the text input. 
A key to our training methods is how to form positive and negative training examples with respect to the class label of a given image. Instead of selecting random training examples, we perform negative sampling based on the semantic distance from a positive example in the class. We evaluate our approach using the Oxford-102 flower dataset, adopting the inception score and multi-scale structural similarity index (MS-SSIM) metrics to assess discriminability and diversity of the generated images. The empirical results indicate greater diversity in the generated images, especially when we gradually select more negative training examples closer to a positive example in the semantic space.", "title": "" }, { "docid": "3c778c71f621b2c887dc81e7a919058e", "text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.", "title": "" }, { "docid": "c0f11031f78044075e6e798f8f10e43f", "text": "We investigate the problem of personalized reviewbased rating prediction which aims at predicting users’ ratings for items that they have not evaluated by using their historical reviews and ratings. Most of existing methods solve this problem by integrating topic model and latent factor model to learn interpretable user and items factors. However, these methods cannot utilize word local context information of reviews. Moreover, it simply restricts user and item representations equivalent to their review representations, which may bring some irrelevant information in review text and harm the accuracy of rating prediction. In this paper, we propose a novel Collaborative Multi-Level Embedding (CMLE) model to address these limitations. 
The main technical contribution of CMLE is to integrate word embedding model with standard matrix factorization model through a projection level. This allows CMLE to inherit the ability of capturing word local context information from word embedding model and relax the strict equivalence requirement by projecting review embedding to user and item embeddings. A joint optimization problem is formulated and solved through an efficient stochastic gradient ascent algorithm. Empirical evaluations on real datasets show CMLE outperforms several competitive methods and can solve the two limitations well.", "title": "" }, { "docid": "f61ea212d71eebf43fd677016ce9770a", "text": "Learning to drive faithfully in highly stochastic urban settings remains an open problem. To that end, we propose a Multi-task Learning from Demonstration (MTLfD) framework which uses supervised auxiliary task prediction to guide the main task of predicting the driving commands. Our framework involves an end-to-end trainable network for imitating the expert demonstrator’s driving commands. The network intermediately predicts visual affordances and action primitives through direct supervision which provide the aforementioned auxiliary supervised guidance. We demonstrate that such joint learning and supervised guidance facilitates hierarchical task decomposition, assisting the agent to learn faster, achieve better driving performance and increases transparency of the otherwise black-box end-to-end network. We run our experiments to validate the MT-LfD framework in CARLA, an open-source urban driving simulator. We introduce multiple non-player agents in CARLA and induce temporal noise in them for realistic stochasticity.", "title": "" }, { "docid": "fff85feeef18f7fa99819711e47e2d39", "text": "This paper presents a robotic vehicle that can be operated by the voice commands given from the user. Here, we use the speech recognition system for giving &processing voice commands. The speech recognition system use an I.C called HM2007, which can store and recognize up to 20 voice commands. The R.F transmitter and receiver are used here, for the wireless transmission purpose. The micro controller used is AT89S52, to give the instructions to the robot for its operation. This robotic car can be able to avoid vehicle collision , obstacle collision and it is very secure and more accurate. Physically disabled persons can use these robotic cars and they can be used in many industries and for many applications Keywords—SpeechRecognitionSystem,AT89S52 micro controller, R. F. Transmitter and Receiver.", "title": "" }, { "docid": "7df95a3da7a000dd72547c99480940b4", "text": "What is it like to have a body? The present study takes a psychometric approach to this question. We collected structured introspective reports of the rubber hand illusion, to systematically investigate the structure of bodily self-consciousness. Participants observed a rubber hand that was stroked either synchronously or asynchronously with their own hand and then made proprioceptive judgments of the location of their own hand and used Likert scales to rate their agreement or disagreement with 27 statements relating to their subjective experience of the illusion. Principal components analysis of this data revealed four major components of the experience across conditions, which we interpret as: embodiment of rubber hand, loss of own hand, movement, and affect. In the asynchronous condition, an additional fifth component, deafference, was found. 
Secondary analysis of the embodiment of rubber hand component revealed three subcomponents in both conditions: ownership, location, and agency. The ownership and location components were independent significant predictors of proprioceptive biases induced by the illusion. These results suggest that psychometric tools may provide a rich method for studying the structure of conscious experience, and point the way towards an empirically rigorous phenomenology.", "title": "" }, { "docid": "57a48d8c45b7ed6bbcde11586140f8b6", "text": "We want to build robots that are useful in unstructured real world applications, such as doing work in the household. Grasping in particular is an important skill in this domain, yet it remains a challenge. One of the key hurdles is handling unexpected changes or motion in the objects being grasped and kinematic noise or other errors in the robot. This paper proposes an approach to learning a closed-loop controller for robotic grasping that dynamically guides the gripper to the object. We use a wrist-mounted sensor to acquire depth images in front of the gripper and train a convolutional neural network to learn a distance function to true grasps for grasp configurations over an image. The training sensor data is generated in simulation, a major advantage over previous work that uses real robot experience, which is costly to obtain. Despite being trained in simulation, our approach works well on real noisy sensor images. We compare our controller in simulated and real robot experiments to a strong baseline for grasp pose detection, and find that our approach significantly outperforms the baseline in the presence of kinematic noise, perceptual errors and disturbances of the object during grasping.", "title": "" }, { "docid": "ca9f1a955ad033e43d25533d37f50b88", "text": "Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte collected during the Russia-Ukraine crisis in 2014 – 2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.", "title": "" }, { "docid": "9dab240226eee04ae78dc3e2b98cd00d", "text": "The use of whole plants for the synthesis of recombinant proteins has received a great deal of attention recently because of advantages in economy, scalability and safety compared with traditional microbial and mammalian production systems. However, production systems that use whole plants lack several of the intrinsic benefits of cultured cells, including the precise control over growth conditions, batch-to-batch product consistency, a high level of containment and the ability to produce recombinant proteins in compliance with good manufacturing practice.
Plant cell cultures combine the merits of whole-plant systems with those of microbial and animal cell cultures, and already have an established track record for the production of valuable therapeutic secondary metabolites. Although no recombinant proteins have yet been produced commercially using plant cell cultures, there have been many proof-of-principle studies and several companies are investigating the commercial feasibility of such production systems.", "title": "" }, { "docid": "9b1643284b783f2947be11f16ae8d942", "text": "We investigate the task of modeling opendomain, multi-turn, unstructured, multiparticipant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant’s history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.", "title": "" }, { "docid": "5e0921d158f0fa7b299fffba52f724d5", "text": "Space syntax derives from a set of analytic measures of configuration that have been shown to correlate well with how people move through and use buildings and urban environments. Space syntax represents the open space of an environment in terms of the intervisibility of points in space. The measures are thus purely configurational, and take no account of attractors, nor do they make any assumptions about origins and destinations or path planning. Space syntax has found that, despite many proposed higher-level cognitive models, there appears to be a fundamental process that informs human and social usage of an environment. In this paper we describe an exosomatic visual architecture, based on space syntax visibility graphs, giving many agents simultaneous access to the same pre-processed information about the configuration of a space layout. Results of experiments in a simulated retail environment show that a surprisingly simple ‘random next step’ based rule outperforms a more complex ‘destination based’ rule in reproducing observed human movement behaviour. We conclude that the effects of spatial configuration on movement patterns that space syntax studies have found are consistent with a model of individual decision behaviour based on the spatial affordances offered by the morphology of the local visual field.", "title": "" }, { "docid": "c1f907a8dc5308e07df76c69fd0deb45", "text": "Emotion regulation has been conceptualized as a process by which individuals modify their emotional experiences, expressions, and physiology and the situations eliciting such emotions in order to produce appropriate responses to the ever-changing demands posed by the environment. Thus, context plays a central role in emotion regulation. This is particularly relevant to the work on emotion regulation in psychopathology, because psychological disorders are characterized by rigid responses to the environment. However, this recognition of the importance of context has appeared primarily in the theoretical realm, with the empirical work lagging behind. 
In this review, the author proposes an approach to systematically evaluate the contextual factors shaping emotion regulation. Such an approach consists of specifying the components that characterize emotion regulation and then systematically evaluating deviations within each of these components and their underlying dimensions. Initial guidelines for how to combine such dimensions and components in order to capture substantial and meaningful contextual influences are presented. This approach is offered to inspire theoretical and empirical work that it is hoped will result in the development of a more nuanced and sophisticated understanding of the relationship between context and emotion regulation.", "title": "" }, { "docid": "7bdc8d864e370f96475dc7d5078b053c", "text": "Nowadays, there is a trend to design complex, yet secure systems. In this context, the Trusted Execution Environment (TEE) was designed to enrich the previously defined trusted platforms. TEE is commonly known as an isolated processing environment in which applications can be securely executed irrespective of the rest of the system. However, TEE still lacks a precise definition as well as representative building blocks that systematize its design. Existing definitions of TEE are largely inconsistent and unspecific, which leads to confusion in the use of the term and its differentiation from related concepts, such as secure execution environment (SEE). In this paper, we propose a precise definition of TEE and analyze its core properties. Furthermore, we discuss important concepts related to TEE, such as trust and formal verification. We give a short survey on the existing academic and industrial ARM TrustZone-based TEE, and compare them using our proposed definition. Finally, we discuss some known attacks on deployed TEE as well as its wide use to guarantee security in diverse applications.", "title": "" }, { "docid": "425eea5a508dcdd63e0e1ea8e6527a3d", "text": "This technical report describes the multi-label classification (MLC) search space in the MEKA software, including the traditional/meta MLC algorithms, and the traditional/meta/preprocessing single-label classification (SLC) algorithms. The SLC search space is also studied because is part of MLC search space as several methods use problem transformation methods to create a solution (i.e., a classifier) for a MLC problem. This was done in order to understand better the MLC algorithms. Finally, we propose a grammar that formally expresses this understatement.", "title": "" }, { "docid": "d9b7636d566d82f9714272f1c9f83f2f", "text": "OBJECTIVE\nFew studies have investigated the association between religion and suicide either in terms of Durkheim's social integration hypothesis or the hypothesis of the regulative benefits of religion. The relationship between religion and suicide attempts has received even less attention.\n\n\nMETHOD\nDepressed inpatients (N=371) who reported belonging to one specific religion or described themselves as having no religious affiliation were compared in terms of their demographic and clinical characteristics.\n\n\nRESULTS\nReligiously unaffiliated subjects had significantly more lifetime suicide attempts and more first-degree relatives who committed suicide than subjects who endorsed a religious affiliation. Unaffiliated subjects were younger, less often married, less often had children, and had less contact with family members. 
Furthermore, subjects with no religious affiliation perceived fewer reasons for living, particularly fewer moral objections to suicide. In terms of clinical characteristics, religiously unaffiliated subjects had more lifetime impulsivity, aggression, and past substance use disorder. No differences in the level of subjective and objective depression, hopelessness, or stressful life events were found.\n\n\nCONCLUSIONS\nReligious affiliation is associated with less suicidal behavior in depressed inpatients. After other factors were controlled, it was found that greater moral objections to suicide and lower aggression level in religiously affiliated subjects may function as protective factors against suicide attempts. Further study about the influence of religious affiliation on aggressive behavior and how moral objections can reduce the probability of acting on suicidal thoughts may offer new therapeutic strategies in suicide prevention.", "title": "" } ]
scidocsrr
ff1ac8eb6e6fe1a5c5b4060242cf1ccb
The Effect of the Agency and Anthropomorphism on Users' Sense of Telepresence, Copresence, and Social Presence in Virtual Environments
[ { "docid": "ffef173f4e0c757c6d780d0af5d9c00b", "text": "Minding the Body, the Primordial Communication Medium Embodiment: The Teleology of Interface Design Embodiment: Thinking through our Technologically Extended Bodies User Embodiment and Three Forms in Which the Body \"Feels\" Present in the Virtual Environment Presence: Emergence of a Design Goal and Theoretical Problem Being There: The Sense of Physical Presence in Cyberspace Being with another Body: Designing the Illusion of Social Presence Is This Body Really \"Me\"? Self Presence, Body Schema, Self-consciousness, and Identity The Cyborg's Dilemma Footnotes References About the Author The intrinsic relationship that arises between tools and organs, and one that is to be revealed and emphasized – although it is more one of unconscious discovery than of conscious invention – is that in the tool the human continually produces itself. Since the organ whose utility and power is to be increased is the controlling factor, the appropriate form of a tool can be derived only from that organ. Ernst Kapp, 1877, quoted in [Mitcham, 1994, p. 23] Abstract StudyW Academ Excellen Award Collab-U CMC Play E-Commerce Symposium Net Law InfoSpaces Usenet NetStudy VEs Page 1 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR...StudyW Academ Excellen Award Collab-U CMC Play E-Commerce Symposium Net Law InfoSpaces Usenet NetStudy VEs Page 1 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR... 9/11/2005 http://jcmc.indiana.edu/vol3/issue2/biocca2.html How does the changing representation of the body in virtual environments affect the mind? This article considers how virtual reality interfaces are evolving to embody the user progressively. The effect of embodiment on the sensation of physical presence, social presence, and self presence in virtual environments is discussed. The effect of avatar representation on body image and body schema distortion is also considered. The paper ends with the introduction of the cyborg's dilemma, a paradoxical situation in which the development of increasingly \"natural\" and embodied interfaces leads to \"unnatural\" adaptations or changes in the user. In the progressively tighter coupling of user to interface, the user evolves as a cyborg. Minding the Body, the Primordial Communication Medium In the twentieth century we have made a successful transition from the sooty iron surfaces of the industrial revolution to the liquid smooth surfaces of computer graphics. On our computer monitors we may be just beginning to see a reflective surface that looks increasingly like a mirror. In the virtual world that exists on the other side of the mirror's surface we can just barely make out the form of a body that looks like us, like another self. Like Narcissus looking into the pond, we are captured by the experience of this reflection of our bodies. But that reflected body looks increasingly like a cyborg. [2] This article explores an interesting pattern in media interface development that I will call progressive embodiment. Each progressive step in the development of sensor and display technology moves telecommunication technology towards a tighter coupling of the body to the interface. The body is becoming present in both physical space and cyberspace. The interface is adapting to the body; the body is adapting to the interface [(Biocca & Rolland, in press)]. Why is this occurring? 
One argument is that attempts to optimize the communication bandwidth of distributed, multi-user virtual environments such as social VRML worlds and collaborative virtual environments drives this steady augmentation of the body and the mind [(see Biocca, 1995)]. It has become a key to future stages of interface development. On the other hand, progressive embodiment may be part of a larger pattern, the cultural evolution of humans and communication artifacts towards a mutual integration and greater \"somatic flexibility\" [(Bateson, 1972)]. The pattern of progressive embodiment raises some fundamental and interesting questions. In this article we pause to consider these developments. New media like distributed immersive virtual environments sometimes force us to take a closer look at what is fundamental about communication. Inevitably, theorists interested in the fundamentals of communication return in some way or another to a discussion of the body and the mind. At the birth of new media, theories dwell on human factors in communication [(Biocca, 1995)] and are often more psychological than sociological. For example when radio and film appeared, [Arnheim (1957)] and [Munsterberg (1916)] used the perceptual theories of Gestalt psychology to try to make sense of how each medium affected the senses. In the 1960s McLuhan [(1966; McLuhan & McLuhan, 1988)] refocused our attention on media technology when he assembled a controversial psychological theory to examine electronic media and make pronouncements about the consequences of imbalances in the \"sensorium.\" Before paper, wires, and silicon, the primordial communication medium is the body. At the center of all communication rests the body, the fleshy gateway to the mind. [Becker & Schoenbach (1989)] argue that \"a veritable 'new mass medium' for some experts, has to address new senses of new combinations of senses. It has to use new channels of information\" (p. 5). In other words, each new medium must somehow engage the body in a new way. But this leads us to ask, are all the media collectively addressing the body in some systematic way? Are media progressively embodying the user? 1.1 The senses as channels to the mind \"Each of us lives within ... the prison of his own brain. Projecting from it are millions of fragile sensory nerve fibers, in groups uniquely adapted to sample the energetic states of the world around us: heat, light, force, and chemical composition. That is all we ever know of it directly; all else is logical inference (1975, p. 131) [(see Sekuler & Blake, 1994 p. 2)]. The senses are the portals to the mind. Sekuler and Blake extend their observation to claim that the senses are \"communication channels to reality.\" Consider for a moment the body as an information acquisition system. As aliens from some distant planet we observe humans and see the body as an array of sensors propelled through space to scan, rub, and grab the environment. In some ways, that is how virtual reality designers see users [(Durlach & Mavor, 1994)]. Many immersive virtual reality designers tend to be implicitly or explicitly Gibsonian: they accept the perspective of the noted perceptual psychologist [J.J. Gibson (1966, 1979)]. Immersive virtual environments are places where vision and the other senses are meant to be active.
Users make use of the affordances in the environments from which they perceive the structure of the virtual world in ways similar to the manner they construct the physical world. Through motion and collisions with objects the senses pick up invariances in energy fields flowing over the body's receptors. When we walk or reach for an object in the virtual or physical world, we guide the senses in this exploration of the space in the same way that a blind man stretches out a white cane to explore the space while in motion. What we know about the world is embodied, it is constructed from patterns of energy detected by the body. The body is the surface on which all energy fields impinge, on which communication and telecommunication takes form. 1.2 The body as a display device for a mind The body is integrated with the mind as a representational system, or as the neuroscientist, Antonio Damasio, puts it, \"a most curious physiological arrangement ... has turned the brain into the body's captive audience\" [(Damasio, 1994, p. xv)]. In some ways, the body is a primordial display device, a kind of internal mental simulator. The body is a representational medium for the mind. Some would claim that thought is embodied or modeled by the body. Johnson and Lakoff [(Johnson, 1987; Lakoff & Johnson, 1980; Lakoff, 1987)] argue against a view of reasoning as manipulation of propositional representations (the \"objectives position\"), a tabulation and manipulation of abstract symbols. They might suggest a kind of sensory-based \"image schemata\" that are critical to instantiating mental transformations associated with metaphor and analogy. In a way virtual environments are objectified metaphors and analogies delivered as sensory patterns instantiating \"image schemata.\" In his book, Descartes' Error, the neuroscientist Damasio explains how the body is used as a means of embodying thought: \"...the body as represented in the brain, may constitute the indispensable frame of reference for the neural processes that we experience as the mind; that our very organism rather than some absolute experiential reality is used as the ground of reference for the constructions we make of the world around us and for the construction of the ever-present sense of subjectivity that is part and parcel of our experiences; that our most refined thoughts and best actions, our greatest joys and deepest sorrows, use the body as a yardstick\" [(Damasio, 1994, p. xvi)]. Damasio's title, Descartes' Error, warns against the misleading tendency to think of the body and mind, reason and emotion, as separate systems. Figure 1. Range of possible input (sensors) and output (effectors) devices for a virtual reality system. Illustrates the pattern of progressive embodiment in virtual reality systems. Source: Biocca & Delaney, 1995 1.3 The body as a communication device The body is also an expressive communication device [(Benthall & Polhemus, 1975)], a social semiotic vehicle for representing mental states (e.g., emotions, observations, plans, etc.)", "title": "" } ]
[ { "docid": "35463670bc80c009f811f97165db33e1", "text": "Framing is the process by which a communication source constructs and defines a social or political issue for its audience. While many observers of political communication and the mass media have discussed framing, few have explicitly described how framing affects public opinion. In this paper we offer a theory of framing effects, with a specific focus on the psychological mechanisms by which framing influences political attitudes. We discuss important conceptual differences between framing and traditional theories of persuasion that focus on belief change. We outline a set of hypotheses about the interaction between framing and audience sophistication, and test these in an experiment. The results support our argument that framing is not merely persuasion, as it is traditionally conceived. We close by reflecting on the various routes by which political communications can influence attitudes.", "title": "" }, { "docid": "4736ae77defc37f96b235b3c0c2e56ff", "text": "This review highlights progress over the past decade in research on the effects of mass trauma experiences on children and youth, focusing on natural disasters, war, and terrorism. Conceptual advances are reviewed in terms of prevailing risk and resilience frameworks that guide basic and translational research. Recent evidence on common components of these models is evaluated, including dose effects, mediators and moderators, and the individual or contextual differences that predict risk or resilience. New research horizons with profound implications for health and well-being are discussed, particularly in relation to plausible models for biological embedding of extreme stress. Strong consistencies are noted in this literature, suggesting guidelines for disaster preparedness and response. At the same time, there is a notable shortage of evidence on effective interventions for child and youth victims. Practical and theory-informative research on strategies to protect children and youth victims and promote their resilience is a global priority.", "title": "" }, { "docid": "f68a287156c2930f302c2ab7f5a2b2a5", "text": "Time series analysis and forecasting future values has been a major research focus since years ago. Time series analysis and forecasting in time series data finds it significance in many applications such as business, stock market and exchange, weather, electricity demand, cost and usage of products such as fuels, electricity, etc. and in any kind of place that has specific seasonal or trendy changes with time. The forecasting of time series data provides the organization with useful information that is necessary for making important decisions. In this paper, a detailed survey of the various techniques applied for forecasting different types of time series dataset is provided. This survey covers the overall forecasting models, the algorithms used within the model and other optimization techniques used for better performance and accuracy. The various performance evaluation parameters used for evaluating the forecasting models are also discussed in this paper. This study gives the reader an idea about the various researches that take place within forecasting using the time series data.", "title": "" }, { "docid": "9cd40ecccdadce54f46885466590303d", "text": "This paper considers the impact of uncertain wind forecasts on the value of stored energy (such as pumped hydro) in a future U.K. system, where wind supplies over 20% of the energy. 
Providing more of the increased requirement for reserves from standing reserve sources could increase system operation efficiency, enhance wind power absorption, achieve fuel cost savings, and reduce CO2 emissions. Generally, storage-based standing reserve's value is driven by the amount of installed wind and by generation system flexibility. Benefits are more significant in systems with low generation flexibility and with large installed wind capacity. Storage is uniquely able to stock up generated excesses during high-wind/low-demand periods, and subsequently discharge this energy as needed. When storage is combined with standing reserve provided from conventional generation (e.g., open-cycle gas turbines), it is valuable in servicing the highly frequent smaller imbalances", "title": "" }, { "docid": "53be2c41da023d9e2380e362bfbe7cce", "text": "A rich and flexible class of random probability measures, which we call stick-breaking priors, can be constructed using a sequence of independent beta random variables. Examples of random measures that have this characterization include the Dirichlet process, its two-parameter extension, the two-parameter Poisson–Dirichlet process, finite dimensional Dirichlet priors, and beta two-parameter processes. The rich nature of stick-breaking priors offers Bayesians a useful class of priors for nonparametric problems, while the similar construction used in each prior can be exploited to develop a general computational procedure for fitting them. In this article we present two general types of Gibbs samplers that can be used to fit posteriors of Bayesian hierarchical models based on stick-breaking priors. The first type of Gibbs sampler, referred to as a Pólya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling method currently employed for Dirichlet process computing. This method applies to stick-breaking priors with a known Pólya urn characterization, that is, priors with an explicit and simple prediction rule. Our second method, the blocked Gibbs sampler, is based on an entirely different approach that works by directly sampling values from the posterior of the random measure. The blocked Gibbs sampler can be viewed as a more general approach because it works without requiring an explicit prediction rule. We find that the blocked Gibbs avoids some of the limitations seen with the Pólya urn approach and should be simpler for nonexperts to use.", "title": "" }, { "docid": "a09248f7c017c532a3a0a580be14ba20", "text": "In the past ten years, the software aging phenomenon has been systematically researched, and recognized by both academic, and industry communities as an important obstacle to achieving dependable software systems. One of its main effects is the depletion of operating system resources, causing system performance degradation or crash/hang failures in running applications. When conducting experimental studies to evaluate the operational reliability of systems suffering from software aging, long periods of runtime are required to observe system failures. Focusing on this problem, we present a systematic approach to accelerate the software aging manifestation to reduce the experimentation time, and to estimate the lifetime distribution of the investigated system. First, we introduce the concept of \"aging factor\" that offers a fine control of the aging effects at the experimental level. The aging factors are estimated via sensitivity analyses based on the statistical design of experiments.
Aging factors are then used together with the method of accelerated degradation test to estimate the lifetime distribution of the system under test at various stress levels. This approach requires us to estimate a relationship model between stress levels and aging degradation. Such models are called stress-accelerated aging relationships. Finally, the estimated relationship models enable us to estimate the lifetime distribution under use condition. The proposed approach is used in estimating the lifetime distribution of a web server with software aging symptoms. The main result is the reduction of the experimental time by a factor close to 685 in comparison with experiments executed without the use of our technique.", "title": "" }, { "docid": "19ab044ed5154b4051cae54387767c9b", "text": "An approach is presented for minimizing power consumption for digital systems implemented in CMOS which involves optimization at all levels of the design. This optimization includes the technology used to implement the digital circuits, the circuit style and topology, the architecture for implementing the circuits and at the highest level the algorithms that are being implemented. The most important technology consideration is the threshold voltage and its control which allows the reduction of supply voltage without significant impact on logic speed. Even further supply reductions can be made by the use of an architecture-based voltage scaling strategy, which uses parallelism and pipelining, to tradeoff silicon area and power reduction. Since energy is only consumed when capacitance is being switched, power can be reduced by minimizing this capacitance through operation reduction, choice of number representation, exploitation of signal correlations, resynchronization to minimize glitching, logic design, circuit design, and physical design. The low-power techniques that are presented have been applied to the design of a chipset for a portable multimedia terminal that supports pen input, speech I/O and full-motion video. The entire chipset that performs protocol conversion, synchronization, error correction, packetization, buffering, video decompression and D/A conversion operates from a 1.1 V supply and consumes less than 5 mW.", "title": "" }, { "docid": "2af3d0d849d50e977864f4085062fdac", "text": "Personal space (PS), the flexible protective zone maintained around oneself, is a key element of everyday social interactions. It, e.g., affects people's interpersonal distance and is thus largely involved when navigating through social environments. However, the PS is regulated dynamically, its size depends on numerous social and personal characteristics and its violation evokes different levels of discomfort and physiological arousal. Thus, gaining more insight into this phenomenon is important. We contribute to the PS investigations by presenting the results of a controlled experiment in a CAVE, focusing on German males in the age of 18 to 30 years. The PS preferences of 27 participants have been sampled while they were approached by either a single embodied, computer-controlled virtual agent (VA) or by a group of three VAs. In order to investigate the influence of a VA's emotions, we altered their facial expression between angry and happy. Our results indicate that the emotion as well as the number of VAs approaching influence the PS: larger distances are chosen to angry VAs compared to happy ones; single VAs are allowed closer compared to the group.
Thus, our study is a foundation for social and behavioral studies investigating PS preferences.", "title": "" }, { "docid": "4d5e72046bfd44b9dc06dfd02812f2d6", "text": "Recommender systems in the last decade opened new interactive channels between buyers and sellers leading to new concepts involved in the marketing strategies and remarkable positive gains in online sales. Businesses intensively aim to maintain customer loyalty, satisfaction and retention; such strategic longterm values need to be addressed by recommender systems in a more tangible and deeper manner. The reason behind the considerable growth of recommender systems is for tracking and analyzing the buyer behavior on the one to one basis to present items on the web that meet his preference, which is the core concept of personalization. Personalization is always related to the relationship between item and user leaving out the contextual information about this relationship. User's buying decision is not only affected by the presented item, but also influenced by its price and the context in which the item is presented, such as time or place. Recently, new system has been designed based on the concept of utilizing price personalization in the recommendation process. This system is newly coined as personalized pricing recommender system (PPRS). We propose personalized pricing recommender system with a novel approach of calculating consumer online real value to determine dynamically his personalized discount, which can be generically applied on the normal price of any recommend item through its predefined discount rules.", "title": "" }, { "docid": "784b59ad8529f62004d28ce2473368cb", "text": "In layer-based additive manufacturing (AM), supporting structures need to be inserted to support the overhanging regions. The adding of supporting structures slows down the speed of fabrication and introduces artifacts onto the finished surface. We present an orientation-driven shape optimizer to slim down the supporting structures used in single material-based AM. The optimizer can be employed as a tool to help designers to optimize the original model to achieve a more self-supported shape, which can be used as a reference for their further design. The model to be optimized is first enclosed in a volumetric mesh, which is employed as the domain of computation. The optimizer is driven by the operations of reorientation taken on tetrahedra with ‘facing-down’ surface facets. We formulate the demand on minimizing shape variation as global rigidity energy. The local optimization problem for determining a minimal rotation is analyzed on the Gauss sphere, which leads to a closed-form solution. Moreover, we also extend our approach to create the functions of controlling the deformation and searching for optimal printing directions.", "title": "" }, { "docid": "30ef95dffecc369aabdd0ea00b0ce299", "text": "The cloud seems to be an excellent companion of mobile systems, to alleviate battery consumption on smartphones and to backup user's data on-the-fly. Indeed, many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud and on designing cloud-based backup systems for the data stored in our devices. Both mobile computation offloading and data backup involve communication between the real devices and the cloud. This communication does certainly not come for free. 
It costs in terms of bandwidth (the traffic overhead to communicate with the cloud) and in terms of energy (computation and use of network interfaces on the device). In this work we study the feasibility of both mobile computation offloading and mobile software/data backups in real-life scenarios. In our study we assume an architecture where each real device is associated to a software clone on the cloud. We consider two types of clones: The off-clone, whose purpose is to support computation offloading, and the back-clone, which comes to use when a restore of user's data and apps is needed. We give a precise evaluation of the feasibility and costs of both off-clones and back-clones in terms of bandwidth and energy consumption on the real device. We achieve this through measurements done on a real testbed of 11 Android smartphones and an equal number of software clones running on the Amazon EC2 public cloud. The smartphones have been used as the primary mobile by the participants for the whole experiment duration.", "title": "" }, { "docid": "11a1c92620d58100194b735bfc18c695", "text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the ε-pseudospectral abscissa of A+BKC, for a fixed ε ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.", "title": "" }, { "docid": "3bb48e5bf7cc87d635ab4958553ef153", "text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online.
The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, under and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers' recurrent feelings is identified through the impulse buying process; escapism, pleasure, reward, scarcity, security and anticipation. The escapism is particularly occurring since the study revealed that the consumers often carried out impulse purchases when they initially were bored, as opposed to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. Phone: +46732305934 Mail: [email protected]", "title": "" }, { "docid": "7d8b256565f44be75e5d23130573580c", "text": "Even though the support vector machine (SVM) has been proposed to provide a good generalization performance, the classification result of the practically implemented SVM is often far from the theoretically expected level because their implementations are based on the approximated algorithms due to the high complexity of time and space. To improve the limited classification performance of the real SVM, we propose to use the SVM ensembles with bagging (bootstrap aggregating). Each individual SVM is trained independently using the randomly chosen training samples via a bootstrap technique. Then, they are aggregated to make a collective decision in several ways such as the majority voting, the LSE (least squares estimation)-based weighting, and the double-layer hierarchical combining. Various simulation results for the IRIS data classification and the hand-written digit recognition show that the proposed SVM ensembles with bagging outperforms a single SVM in terms of classification accuracy greatly.", "title": "" }, { "docid": "06ac34a4909ab44872ee8dc4656b22e7", "text": "Moringa oleifera is an interesting plant for its use in bioactive compounds. In this manuscript, we review studies concerning the cultivation and production of moringa along with genetic diversity among different accessions and populations. Different methods of propagation, establishment and cultivation are discussed. Moringa oleifera shows diversity in many characters and extensive morphological variability, which may provide a resource for its improvement. Great genetic variability is present in the natural and cultivated accessions, but no collection of cultivated and wild accessions currently exists. A germplasm bank encompassing the genetic variability present in Moringa is needed to perform breeding programmes and develop elite varieties adapted to local conditions. Alimentary and medicinal uses of moringa are reviewed, alongside the production of biodiesel. Finally, being that the leaves are the most used part of the plant, their contents in terms of bioactive compounds and their pharmacological properties are discussed. Many studies conducted on cell lines and animals seem concordant in their support for these properties. However, there are still too few studies on humans to recommend Moringa leaves as medication in the prevention or treatment of diseases. Therefore, further studies on humans are recommended.", "title": "" }, { "docid": "021bc2449ca5e4d4e2d836f9872b5e46", "text": "We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo.
Particularly, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where front and back sides of some curvy object parts are revealed simultaneously on image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark-up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with human, animals, flowers, etc.", "title": "" }, { "docid": "0f03c9bc5ff7e6f2a0fccff1f847aa51", "text": "OBJECTIVE\nWe sought to determine the long-term risk of type 2 diabetes following a pregnancy complicated by gestational diabetes mellitus (GDM) and assess what maternal antepartum, postpartum, and neonatal factors are predictive of later development of type 2 diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nThis was a retrospective cohort study using survival analysis on 5,470 GDM patients and 783 control subjects who presented for postnatal follow-up at the Mercy Hospital for Women between 1971 and 2003.\n\n\nRESULTS\nRisk of developing diabetes increased with time of follow-up for both groups and was 9.6 times greater for patients with GDM. The cumulative risk of developing type 2 diabetes for the GDM patients was 25.8% at 15 years postdiagnosis. Predictive factors for the development of type 2 diabetes were use of insulin (hazard ratio 3.5), Asian origin compared with Caucasian (2.1), and 1-h blood glucose (1.3 for every 1 mmol increase above 10.1 mmol). BMI was associated with an increased risk of developing type 2 diabetes but did not meet the assumption of proportional hazards required for valid inference when using Cox proportional hazards.\n\n\nCONCLUSIONS\nWhile specific predictive factors for the later development of type 2 diabetes can be identified in the index pregnancy, women with a history of GDM, as a group, are worthy of long-term follow-up to ameliorate their excess cardiovascular risk.", "title": "" }, { "docid": "b83a061d5c4bbd7c38584f2fbf1060e0", "text": "Novelty detection in text streams is a challenging task that emerges in quite a few different scenarii, ranging from email threads to RSS news feeds on a cell phone. An efficient novelty detection algorithm can save the user a great deal of time when accessing interesting information. Most of the recent research for the detection of novel documents in text streams uses either geometric distances or distributional similarities with the former typically performing better but being slower as we need to compare an incoming document with all the previously seen ones. In this paper, we propose a new novelty detection algorithm based on the Inverse Document Frequency (IDF) scoring function. 
Computing novelty based on IDF enables us to avoid similarity comparisons with previous documents in the text stream, thus leading to faster execution times. At the same time, our proposed approach outperforms several commonly used baselines when applied on a real-world news articles dataset.", "title": "" }, { "docid": "98110985cd175f088204db452a152853", "text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.", "title": "" }, { "docid": "67e008db2a218b4e307003c919a32a8a", "text": "Relay deployment in Orthogonal Frequency Division Multiple Access (OFDMA) based cellular networks helps in coverage extension and/or capacity improvement. To quantify capacity improvement, blocking probability of voice traffic is typically calculated using Erlang B formula. This calculation is based on the assumption that all users require same amount of resources to satisfy their rate requirement. However, in an OFDMA system, each user requires different number of subcarriers to meet its rate requirement. This resource requirement depends on the Signal to Interference Ratio (SIR) experienced by a user. Therefore, the Erlang B formula can not be employed to compute blocking probability in an OFDMA network. In this paper, we determine an analytical expression to compute the blocking probability of relay based cellular OFDMA network. We determine an expression of the probability distribution of the user's resource requirement based on its experienced SIR. Then, we classify the users into various classes depending upon their subcarrier requirement. We consider the system to be a multi-dimensional system with different classes and evaluate the blocking probability of system using the multi-dimensional Erlang loss formulas. This model is useful in the performance evaluation, design, planning of resources and call admission control of relay based cellular OFDMA networks like LTE.", "title": "" } ]
scidocsrr
5aaaebef8187df53458142b3984c9290
GROUP SPARSE OPTIMIZATION BY ALTERNATING DIRECTION METHOD
[ { "docid": "0e14888a2399bba26ba794e241c5cc5c", "text": "This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M. This problem is closely related to the two existing problems: nonnegative matrix factorization and low-rank matrix completion, in the sense that it kills two birds with one stone. As it takes advantage of both nonnegativity and low rank, its results can be superior to those of the two problems alone. Our algorithm is applied to minimizing a non-convex constrained least-squares formulation and is based on the classic alternating direction augmented Lagrangian method. Preliminary convergence properties and numerical simulation results are presented. Compared to a recent algorithm for nonnegative random matrix factorization, the proposed algorithm yields comparable factorization through accessing only half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, the results of the proposed algorithm have overall better qualities than those of two recent algorithms for matrix completion.", "title": "" }, { "docid": "2d34d9e9c33626727734766a9951a161", "text": "In this paper, we propose and study the use of alternating direction algorithms for several ℓ1-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, as well as others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the ℓ1-problems. The construction of the algorithms consists of two main steps: (1) to reformulate an ℓ1-problem into one having partially separable objective functions by adding new variables and constraints; and (2) to apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at each and every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points. One point is that algorithm speed should always be evaluated relative to appropriate solution accuracy; another is that whenever erroneous measurements possibly exist, the ℓ1-norm fidelity should be the fidelity of choice in compressive sensing.", "title": "" }, { "docid": "01835769f2dc9391051869374e200a6a", "text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer.
We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.", "title": "" }, { "docid": "9f21af3bc0955dcd9a05898f943f54ad", "text": "Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.", "title": "" } ]
[ { "docid": "d48a5e5005e757af878e97c0d63a50da", "text": "Measures of Semantic Relatedness determine the degree of relatedness between two words. Most of these measures work only between pairs of words in a single language. We propose a novel method of measuring semantic relatedness between pairs of words in two different languages. This method does not use a parallel corpus but is rather seeded with a set of known translations. For evaluation we construct a cross-language dataset of French-English word pairs with similarity scores. Our new cross-language measure correlates more closely with averaged human scores than our unilingual baselines. 1. Distributional Semantics “You shall know a word by the company it keeps” – Firth (1957) •Construct a word-context matrix • Corpora: French and English Wikipedias • Used POS-tagged words as contexts • Re-weight matrix – Pointwise Mutual Information (PMI) •Cosine similarity •Evaluate correlation on Rubenstein and Goodenough (1965) style dataset", "title": "" }, { "docid": "ee60e645f38ba52fc38ae085214c4562", "text": "As e-commerce sales continue to grow, the associated online fraud remains an attractive source of revenue for fraudsters. These fraudulent activities impose a considerable financial loss to merchants, making online fraud detection a necessity. The problem of fraud detection is concerned with not only capturing the fraudulent activities, but also capturing them as quickly as possible. This timeliness is crucial to decrease financial losses. In this research, a profiling method has been proposed for credit card fraud detection. The focus is on fraud cases which cannot be detected at the transaction level. In the proposed method the patterns inherent in the time series of aggregated daily amounts spent on an individual credit card account has been extracted. These patterns have been used to shorten the time between when a fraud occurs and when it is finally detected, which resulted in timelier fraud detection, improved detection rate and less financial loss.", "title": "" }, { "docid": "153ae44f23ae9ddce7070d5e2a07070e", "text": "Learning a matching function between two text sequences is a long standing problem in NLP research. This task enables many potential applications such as question answering and paraphrase identification. This paper proposes Co-Stack Residual Affinity Networks (CSRAN), a new and universal neural architecture for this problem. CSRAN is a deep architecture, involving stacked (multi-layered) recurrent encoders. Stacked/Deep architectures are traditionally difficult to train, due to the inherent weaknesses such as difficulty with feature propagation and vanishing gradients. CSRAN incorporates two novel components to take advantage of the stacked architecture. Firstly, it introduces a new bidirectional alignment mechanism that learns affinity weights by fusing sequence pairs across stacked hierarchies. Secondly, it leverages a multi-level attention refinement component between stacked recurrent layers. The key intuition is that, by leveraging information across all network hierarchies, we can not only improve gradient flow but also improve overall performance. 
We conduct extensive experiments on six well-studied text sequence matching datasets, achieving state-of-the-art performance on all.", "title": "" }, { "docid": "d2e19aeb2969991ec18a71c877775c44", "text": "OBJECTIVES\nTo evaluate persistence and adherence to mirabegron and antimuscarinics in Japan using data from two administrative databases.\n\n\nMETHODS\nThe present retrospective study evaluated insurance claims for employees and dependents aged ≤75 years, and pharmacy claims for outpatients. From October 2012 to September 2014, new users of mirabegron or five individual antimuscarinics indicated for overactive bladder in Japan (fesoterodine, imidafenacin, propiverine, solifenacin and tolterodine) were identified and followed for 1 year. Persistence with mirabegron and antimuscarinics were evaluated using Kaplan-Meier methods. Any associations between baseline characteristics (age, sex and previous medication use) and persistence were explored. Adherence was assessed using the medication possession ratio.\n\n\nRESULTS\nIn total, 3970 and 16 648 patients were included from the insurance and pharmacy claims databases, respectively. Mirabegron treatment was associated with longer median persistence compared with antimuscarinics (insurance claims: 44 [95% confidence intervals 37-56] vs 21 [14-28] to 30 [30-33] days, pharmacy claims: 105 [96-113] vs 62 [56-77] to 84 [77-86] days). The results were consistent when patients were stratified by age, sex and previous medication. Persistence rate at 1 year was higher for mirabegron (insurance claims: 14.0% [11.5-16.8%] vs 5.4% [4.1-7.0%] to 9.1% [5.3-14.2%], pharmacy claims: 25.9% [24.6-27.3%] vs 16.3% [14.0-18.6%] to 21.3% [20.2-22.4%]). Compared with each antimuscarinic, a higher proportion of mirabegron-treated patients had medication possession ratios ≥0.8.\n\n\nCONCLUSIONS\nThis large nationwide Japanese study shows that persistence and adherence are greater with mirabegron compared with five antimuscarinics.", "title": "" }, { "docid": "2a54b353758963273464525999c45960", "text": "This work studies comparatively two typical sentence matching tasks: textual entailment (TE) and answer selection (AS), observing that weaker phrase alignments are more critical in TE, while stronger phrase alignments deserve more attention in AS. The key to reach this observation lies in phrase detection, phrase representation, phrase alignment, and more importantly how to connect those aligned phrases of different matching degrees with the final classifier. Prior work (i) has limitations in phrase generation and representation, or (ii) conducts alignment at word and phrase levels by handcrafted features or (iii) utilizes a single framework of alignment without considering the characteristics of specific tasks, which limits the framework’s effectiveness across tasks. We propose an architecture based on Gated Recurrent Unit that supports (i) representation learning of phrases of arbitrary granularity and (ii) task-specific attentive pooling of phrase alignments between two sentences. Experimental results on TE and AS match our observation and show the effectiveness of our approach.", "title": "" }, { "docid": "95afd1d83b5641a7dff782588348d2ec", "text": "Intensive repetitive therapy improves function and quality of life for stroke patients. Intense therapies to overcome upper extremity impairment are beneficial, however, they are expensive because, in part, they rely on individualized interaction between the patient and rehabilitation specialist. 
The development of a pneumatic muscle driven hand therapy device, the Mentor/spl trade/, reinforces the need for volitional activation of joint movement while concurrently offering knowledge of results about range of motion, muscle activity or resistance to movement. The device is well tolerated and has received favorable comments from stroke survivors, their caregivers, and therapists.", "title": "" }, { "docid": "07a4f79dbe16be70877724b142013072", "text": "Safety planning in the construction industry is generally done separately from the project execution planning. This separation creates difficulties for safety engineers to analyze what, when, why and where safety measures are needed for preventing accidents. Lack of information and integration of available data (safety plan, project schedule, 2D project drawings) during the planning stage often results in scheduling work activities with overlapping space needs that then can create hazardous conditions, for example, work above other crew. These space requirements are time dependent and often neglected due to the manual effort that is required to handle the data. Representation of project-specific activity space requirements in 4D models hardly happen along with schedule and work break-down structure. Even with full cooperation of all related stakeholders, current safety planning and execution still largely depends on manual observation and past experiences. The traditional manual observation is inefficient, error-prone, and the observed result can be easily effected by subjective judgments. This paper will demonstrate the development of an automated safety code checking tool for Building Information Modeling (BIM), work breakdown structure, and project schedules in conjunction with safety criteria to reduce the potential for accidents on construction projects. The automated safety compliance rule checker code builds on existing applications for building code compliance checking, structural analysis, and constructability analysis etc. and also the advances in 4D simulations for scheduling. Preliminary results demonstrate a computer-based automated tool can assist in safety planning and execution of projects on a day to day basis.", "title": "" }, { "docid": "b555bb25c809e47f0f9fc8cec483d794", "text": "The assessment of oxygen saturation in arterial blood by pulse oximetry (SpO₂) is based on the different light absorption spectra for oxygenated and deoxygenated hemoglobin and the analysis of photoplethysmographic (PPG) signals acquired at two wavelengths. Commercial pulse oximeters use two wavelengths in the red and infrared regions which have different pathlengths and the relationship between the PPG-derived parameters and oxygen saturation in arterial blood is determined by means of an empirical calibration. This calibration results in an inherent error, and pulse oximetry thus has an error of about 4%, which is too high for some clinical problems. We present calibration-free pulse oximetry for measurement of SpO₂, based on PPG pulses of two nearby wavelengths in the infrared. By neglecting the difference between the path-lengths of the two nearby wavelengths, SpO₂ can be derived from the PPG parameters with no need for calibration. In the current study we used three laser diodes of wavelengths 780, 785 and 808 nm, with narrow spectral line-width. SaO₂ was calculated by using each pair of PPG signals selected from the three wavelengths. 
In measurements on healthy subjects, SpO₂ values, obtained by the 780-808 nm wavelength pair, were found to be in the normal range. The measurement of SpO₂ by two nearby wavelengths in the infrared with narrow line-width enables the assessment of SpO₂ without calibration.", "title": "" }, { "docid": "f35007fdca9c35b4c243cb58bd6ede7a", "text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).", "title": "" }, { "docid": "9ff2e30bbd34906f6a57f48b1e63c3f1", "text": "In this paper, we extend hidden Markov modeling to speaker-independent phone recognition. Using multiple codebooks of various LPC parameters and discrete HMMs, we obtain a speaker-independent phone recognition accuracy of 58.8% to 73.8% on the TIMIT database, depending on the type of acoustic and language models used. In comparison, the performance of expert spectrogram readers is only 69% without use of higher level knowledge. We also introduce the co-occurrence smoothing algorithm which enables accurate recognition even with very limited training data. Since our results were evaluated on a standard database, they can be used as benchmarks to evaluate future systems. This research was partly sponsored by a National Science Foundation Graduate Fellowship, and by Defense Advanced Research Projects Agency Contract N00039-85-C-0163. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Defense Advanced Research Projects Agency, or the US Government.", "title": "" }, { "docid": "93f8a345f2778f9342474b85f6adce2a", "text": "The Multi-source Integrated Platform for Answering Clinical Questions (MiPACQ) is a QA pipeline that integrates a variety of information retrieval and natural language processing systems into an extensible question answering system. We present the system's architecture and an evaluation of MiPACQ on a human-annotated evaluation dataset based on the Medpedia health and medical encyclopedia. Compared with our baseline information retrieval system, the MiPACQ rule-based system demonstrates 84% improvement in Precision at One and the MiPACQ machine-learning-based system demonstrates 134% improvement. Other performance metrics including mean reciprocal rank and area under the precision/recall curves also showed significant improvement, validating the effectiveness of the MiPACQ design and implementation.", "title": "" }, { "docid": "5b50e84437dc27f5b38b53d8613ae2c7", "text": "We present a practical vision-based robotic bin-picking system that performs detection and 3D pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation.
First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses. FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a 3D distance transform, and directional integral images. We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sublinear computational complexity. We also apply our FDCM method to other applications in the context of deformable and articulated shape matching. In addition to significantly improving upon the accuracy of previous chamfer matching methods in all of the evaluated applications, FDCM is up to two orders of magnitude faster than the previous methods.", "title": "" }, { "docid": "0a23995317063e773c3ac69cfd6b8e70", "text": "This paper proposes a temporal tracking algorithm based on Random Forest that uses depth images to estimate and track the 3D pose of a rigid object in real-time. Compared to the state of the art aimed at the same goal, our algorithm holds important attributes such as high robustness against holes and occlusion, low computational cost of both learning and tracking stages, and low memory consumption. These are obtained (a) by a novel formulation of the learning strategy, based on a dense sampling of the camera viewpoints and learning independent trees from a single image for each camera view, as well as, (b) by an insightful occlusion handling strategy that enforces the forest to recognize the object's local and global structures. Due to these attributes, we report state-of-the-art tracking accuracy on benchmark datasets, and accomplish remarkable scalability with the number of targets, being able to simultaneously track the pose of over a hundred objects at 30 fps with an off-the-shelf CPU. In addition, the fast learning time enables us to extend our algorithm as a robust online tracker for model-free 3D objects under different viewpoints and appearance changes as demonstrated by the experiments.", "title": "" }, { "docid": "1d0ca28334542ed2978f986cd3550150", "text": "Recent success of deep learning models for the task of extractive Question Answering (QA) is hinged on the availability of large annotated corpora. However, large domain specific annotated corpora are limited and expensive to construct. In this work, we envision a system where the end user specifies a set of base documents and only a few labelled examples. Our system exploits the document structure to create cloze-style questions from these base documents; pre-trains a powerful neural network on the cloze style questions; and further finetunes the model on the labeled examples. We evaluate our proposed system across three diverse datasets from different domains, and find it to be highly effective with very little labeled data. We attain more than 50% F1 score on SQuAD and TriviaQA with less than a thousand labelled examples. We are also releasing a set of 3.2M cloze-style questions for practitioners to use while building QA systems.", "title": "" }, { "docid": "91e2dadb338fbe97b009efe9e8f60446", "text": "An efficient smoke detection algorithm on color video sequences obtained from a stationary camera is proposed.
Our algorithm considers dynamic and static features of smoke and is composed of basic steps: preprocessing; slowly moving areas and pixels segmentation in a current input frame based on adaptive background subtraction; merge slowly moving areas with pixels into blobs; classification of the blobs obtained before. We use adaptive background subtraction at a stage of moving detection. Moving blobs classification is based on optical flow calculation, Weber contrast analysis and takes into account primary direction of smoke propagation. Real video surveillance sequences were used for smoke detection with utilization our algorithm. A set of experimental results is presented in the paper.", "title": "" }, { "docid": "ac0dba7ea5465cf3827d04a15f54a01c", "text": "As humans we live and interact across a wildly diverse set of physical spaces. We each formulate our own personal meaning of place using a myriad of observable cues such as public-private, large-small, daytime-nighttime, loud-quiet, and crowded-empty. Not surprisingly, it is the people with which we share such spaces that dominate our perception of place. Sometimes these people are friends, family and colleagues. More often, and particularly in public urban spaces we inhabit, the individuals who affect us are ones that we repeatedly observe and yet do not directly interact with - our Familiar Strangers. This paper explores our often ignored yet real relationships with Familiar Strangers. We describe several experiments and studies that led to designs for both a personal, body-worn, wireless device and a mobile phone based application that extend the Familiar Stranger relationship while respecting the delicate, yet important, constraints of our feelings and affinities with strangers in pubic places.", "title": "" }, { "docid": "4e182b30dcbc156e2237e7d1d22d5c93", "text": "A brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI) is presented which allows human subjects to observe and control changes of their own blood oxygen level-dependent (BOLD) response. This BCI performs data preprocessing (including linear trend removal, 3D motion correction) and statistical analysis on-line. Local BOLD signals are continuously fed back to the subject in the magnetic resonance scanner with a delay of less than 2 s from image acquisition. The mean signal of a region of interest is plotted as a time-series superimposed on color-coded stripes which indicate the task, i.e., to increase or decrease the BOLD signal. We exemplify the presented BCI with one volunteer intending to control the signal of the rostral-ventral and dorsal part of the anterior cingulate cortex (ACC). The subject achieved significant changes of local BOLD responses as revealed by region of interest analysis and statistical parametric maps. The percent signal change increased across fMRI-feedback sessions suggesting a learning effect with training. This methodology of fMRI-feedback can assess voluntary control of circumscribed brain areas. As a further extension, behavioral effects of local self-regulation become accessible as a new field of research.", "title": "" }, { "docid": "815b641e0d579e30eddd9023b144ef8b", "text": "Much like unpalatable foods, filthy restrooms, and bloody wounds, moral transgressions are often described as \"disgusting.\" This linguistic similarity suggests that there is a link between moral disgust and more rudimentary forms of disgust associated with toxicity and disease. 
Critics have argued, however, that such references are purely metaphorical, or that moral disgust may be limited to transgressions that remind us of more basic disgust stimuli. Here we review the evidence that moral transgressions do genuinely evoke disgust, even when they do not reference physical disgust stimuli such as unusual sexual behaviors or the violation of purity norms. Moral transgressions presented verbally or visually and those presented as social transactions reliably elicit disgust, as assessed by implicit measures, explicit self-report, and facial behavior. Evoking physical disgust experimentally renders moral judgments more severe, and physical cleansing renders them more permissive or more stringent, depending on the object of the cleansing. Last, individual differences in the tendency to experience disgust toward physical stimuli are associated with variation in moral judgments and morally relevant sociopolitical attitudes. Taken together, these findings converge to support the conclusion that moral transgressions can in fact elicit disgust, suggesting that moral cognition may draw upon a primitive rejection response. We highlight a number of outstanding issues and conclude by describing 3 models of moral disgust, each of which aims to provide an account of the relationship between moral and physical disgust.", "title": "" }, { "docid": "74521ba863fa5c12bcc945c21cc1bd81", "text": "“Chitin is highly biodegradable, breaking down into simple organic acids like acetate and propionate. As shown by its molecular formula (C8H13NO5), it contains 6-7% nitrogen, giving it a carbon:nitrogen ratio ideally suited for bacterial growth. In addition, as a porous solid, chitin provides both a support for bacterial colonization and a long-term source of organic acids (and ultimately hydrogen) that can be utilized by halorespiring bacteria. Therefore it has the potential to fill an important niche as a low-cost slow-release source of hydrogen in bioremediation applications for chlorinated aliphatics”.", "title": "" }, { "docid": "a7edb7ffaee9807d143d0ffb6786fb40", "text": "Recent evidence for the fractionation of the default mode network (DMN) into functionally distinguishable subdivisions with unique patterns of connectivity calls for a reconceptualization of the relationship between this network and self-referential processing. Advances in resting-state functional connectivity analyses are beginning to reveal increasingly complex patterns of organization within the key nodes of the DMN - medial prefrontal cortex and posterior cingulate cortex - as well as between these nodes and other brain systems. Here we review recent examinations of the relationships between the DMN and various aspects of self-relevant and social-cognitive processing in light of emerging evidence for heterogeneity within this network. Drawing from a rapidly evolving social-cognitive neuroscience literature, we propose that embodied simulation and mentalizing are processes which allow us to gain insight into another's physical and mental state by providing privileged access to our own physical and mental states. Embodiment implies that the same neural systems are engaged for self- and other-understanding through a simulation mechanism, while mentalizing refers to the use of high-level conceptual information to make inferences about the mental states of self and others. These mechanisms work together to provide a coherent representation of the self and by extension, of others.
Nodes of the DMN selectively interact with brain systems for embodiment and mentalizing, including the mirror neuron system, to produce appropriate mappings in the service of social-cognitive demands.", "title": "" } ]
scidocsrr
c61400fa47baec994f4daa576d1d05af
A Framework for Blockchain-Based Applications
[ { "docid": "5eb65797b9b5e90d5aa3968d5274ae72", "text": "Blockchains enable tamper-proof, ordered logging for transactional data in a decentralized manner over open-access, overlay peer-to-peer networks. In this paper, we propose a decentralized framework of proactive caching in a hierarchical wireless network based on blockchains. We employ the blockchain-based smart contracts to construct an autonomous content caching market. In the market, the cache helpers are able to autonomously adapt their caching strategies according to the market statistics obtained from the blockchain, and the truthfulness of trustless nodes are financially enforced by smart contract terms. Further, we propose an incentive-compatible consensus mechanism based on proof-of-stake to financially encourage the cache helpers to stay active in service. We model the interaction between the cache helpers and the content providers as a Chinese restaurant game. Based on the theoretical analysis regarding the Nash equilibrium of the game, we propose a decentralized strategy-searching algorithm using sequential best response. The simulation results demonstrate both the efficiency and reliability of the proposed equilibrium searching algorithm.", "title": "" } ]
[ { "docid": "73270e8140d763510d97f7bd2fdd969e", "text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "title": "" }, { "docid": "bd5d84c9d699080b2d668809626e90fe", "text": "Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall because system output is not annotated. To overcome this problem, we introduce ERRANT, a grammatical ERRor ANnotation Toolkit designed to automatically extract edits from parallel original and corrected sentences and classify them according to a new, dataset-agnostic, rulebased framework. This not only facilitates error type evaluation at different levels of granularity, but can also be used to reduce annotator workload and standardise existing GEC datasets. Human experts rated the automatic edits as “Good” or “Acceptable” in at least 95% of cases, so we applied ERRANT to the system output of the CoNLL-2014 shared task to carry out a detailed error type analysis for the first time.", "title": "" }, { "docid": "5090070d6d928b83bd22d380f162b0a6", "text": "The Federal Aviation Administration (FAA) has been increasing the National Airspace System (NAS) capacity to accommodate the predicted rapid growth of air traffic. One method to increase the capacity is reducing air traffic controller workload so that they can handle more air traffic. It is crucial to measure the impact of the increasing future air traffic on controller workload. Our experimental data show a linear relationship between the number of aircraft in the en route center sector and controllers’ perceived workload. Based on the extensive range of aircraft count from 14 to 38 in the experiment, we can predict en route center controllers working as a team of Radar and Data controllers with the automation tools available in the our experiment could handle up to about 28 aircraft. This is 33% more than the 21 aircraft that en route center controllers typically handle in a busy sector.", "title": "" }, { "docid": "890038199db8a8391d25f1922d18cd62", "text": "In this paper we present a framework for learning a three layered model of human shape, pose and garment deformation. 
The proposed deformation model provides intuitive control over the three parameters independently, while producing aesthetically pleasing deformations of both the garment and the human body. The shape and pose deformation layers of the model are trained on a rich dataset of full body 3D scans of human subjects in a variety of poses. The garment deformation layer is trained on animated mesh sequences of dressed actors and relies on a novel technique for human shape and posture estimation under clothing. The key contribution of this paper is that we consider garment deformations as the residual transformations between a naked mesh and the dressed mesh of the same subject.", "title": "" }, { "docid": "6f1877d360251e601b3ce63e7b991052", "text": "In education research, there is a widely-cited result called \"Bloom's two sigma\" that characterizes the differences in learning outcomes between students who receive one-on-one tutoring and those who receive traditional classroom instruction. Tutored students scored in the 95th percentile, or two sigmas above the mean, on average, compared to students who received traditional classroom instruction. In human-robot interaction research, however, there is relatively little work exploring the potential benefits of personalizing a robot's actions to an individual's strengths and weaknesses. In this study, participants solved grid-based logic puzzles with the help of a personalized or non-personalized robot tutor. Participants' puzzle solving times were compared between two non-personalized control conditions and two personalized conditions (n=80). Although the robot's personalizations were less sophisticated than what a human tutor can do, we still witnessed a \"one-sigma\" improvement (68th percentile) in post-tests between treatment and control groups. We present these results as evidence that even relatively simple personalizations can yield significant benefits in educational or assistive human-robot interactions.", "title": "" }, { "docid": "05ce4be5b7d3c33ba1ebce575aca4fb9", "text": "In this competitive world, business is becoming highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore, it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. This paper explores the application of data mining techniques in predicting the likely churners and attribute selection on identifying the churn. It also compares the efficiency of several classifiers and lists their performances for two real telecom datasets.", "title": "" }, { "docid": "31e6da3635ec5f538f15a7b3e2d95e5b", "text": "Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. 
By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.", "title": "" }, { "docid": "2af4d946d00b37ec0f6d37372c85044b", "text": "Training of discrete latent variable models remains challenging because passing gradient information through discrete units is difficult. We propose a new class of smoothing transformations based on a mixture of two overlapping distributions, and show that the proposed transformation can be used for training binary latent models with either directed or undirected priors. We derive a new variational bound to efficiently train with Boltzmann machine priors. Using this bound, we develop DVAE++, a generative model with a global discrete prior and a hierarchy of convolutional continuous variables. Experiments on several benchmarks show that overlapping transformations outperform other recent continuous relaxations of discrete latent variables including Gumbel-Softmax (Maddison et al., 2016; Jang et al., 2016), and discrete variational autoencoders (Rolfe, 2016).", "title": "" }, { "docid": "4af5f2e9b12b4efa43c053fd13f640d0", "text": "The high level of heterogeneity between linguistic annotations usually complicates the interoperability of processing modules within an NLP pipeline. In this paper, a framework for the interoperation of NLP components, based on a data-driven architecture, is presented. Here, ontologies of linguistic annotation are employed to provide a conceptual basis for the tag-set neutral processing of linguistic annotations. The framework proposed here is based on a set of structured OWL ontologies: a reference ontology, a set of annotation models which formalize different annotation schemes, and a declarative linking between these, specified separately. This modular architecture is particularly scalable and flexible as it allows for the integration of different reference ontologies of linguistic annotations in order to overcome the absence of a consensus for an ontology of linguistic terminology. Our proposal originates from three lines of research from different fields: research on annotation type systems in UIMA; the ontological architecture OLiA, originally developed for sustainable documentation and annotation-independent corpus browsing, and the ontologies of the OntoTag model, targeted towards the processing of linguistic annotations in Semantic Web applications. We describe how UIMA annotations can be backed up by ontological specifications of annotation schemes as in the OLiA model, and how these are linked to the OntoTag ontologies, which allow for further ontological processing.", "title": "" }, { "docid": "21bb289fb932b23d95fee7d40401d70c", "text": "Mobile phone use is banned or regulated in some circumstances. Despite recognized safety concerns and legal regulations, some people do not refrain from using mobile phones.
Such problematic mobile phone use can be considered to be an addiction-like behavior. To find the potential predictors, we examined the correlation between problematic mobile phone use and personality traits reported in addiction literature, which indicated that problematic mobile phone use was a function of gender, self-monitoring, and approval motivation but not of loneliness. These findings suggest that the measurements of these addictive personality traits would be helpful in the screening and intervention of potential problematic users of mobile phones.", "title": "" }, { "docid": "095f4ea337421d6e1310acf73977fdaa", "text": "We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.", "title": "" }, { "docid": "fabd41342129ce739aec41bfa93629c4", "text": "This paper presents a new method for viewpoint invariant pedestrian recognition problem. We use a metric learning framework to obtain a robust metric for large margin nearest neighbor classification with rejection (i.e., classifier will return no matches if all neighbors are beyond a certain distance). The rejection condition necessitates the use of a uniform threshold for a maximum allowed distance for deeming a pair of images a match. In order to handle the rejection case, we propose a novel cost similar to the Large Margin Nearest Neighbor (LMNN) method and call our approach Large Margin Nearest Neighbor with Rejection (LMNN-R). Our method is able to achieve significant improvement over previously reported results on the standard Viewpoint Invariant Pedestrian Recognition (VIPeR [1]) dataset.", "title": "" }, { "docid": "1503fae33ae8609a2193e978218d1543", "text": "The construct of resilience has captured the imagination of researchers across various disciplines over the last five decades (Ungar, 2008a). Despite a growing body of research in the area of resilience, there is little consensus among researchers about the definition and meaning of this concept. Resilience has been used to describe eight kinds of phenomena across different disciplines. These eight phenomena can be divided into two clusters based on the disciplinary origin. 
The first cluster mainly involves definitions of resilience derived from the discipline of psychology and covers six themes including (i) personality traits, (ii) positive outcomes/forms of adaptation despite high-risk, (iii) factors associated with positive adaptation, (iv) processes, (v) sustained competent functioning/stress resistance, and (vi) recovery from trauma or adversity. The second cluster of definitions is rooted in the discipline of sociology and encompasses two themes including (i) human agency and resistance, and (ii) survival. This paper discusses the inconsistencies in the varied definitions used within the published literature and describes the differing conceptualizations of resilience as well as their limitations. The paper concludes by offering a unifying conceptualization of resilience and by discussing implications for future research on resilience.", "title": "" }, { "docid": "75e9b017838ccfdcac3b85030470a3bd", "text": "The new \"Direct Self-Control\" (DSC) is a simple method of signal processing, which gives converter fed three-phase machines an excellent dynamic performance. To control the torque e.g. of an induction motor it is sufficient to process the measured signals of the stator currents and the total flux linkages only. Optimal performance of drive systems is accomplished in steady state as well as under transient conditions by combination of several two limits controls. The expenses are less than in the case of proposed predictive control systems or FAM, if the converters switching frequency has to be kept minimal.", "title": "" }, { "docid": "79ca2676dab5da0c9f39a0996fcdcfd8", "text": "Estimation of human shape from images has numerous applications ranging from graphics to surveillance. A single image provides insufficient constraints (e.g. clothing), making human shape estimation more challenging. We propose a method to simultaneously estimate a person’s clothed and naked shapes from a single image of that person wearing clothing. The key component of our method is a deformable model of clothed human shape. We learn our deformable model, which spans variations in pose, body, and clothes, from a training dataset. These variations are derived by the non-rigid surface deformation, and encoded in various low-dimension parameters. Our deformable model can be used to produce clothed 3D meshes for different people in different poses, which neither appears in the training dataset. Afterward, given an input image, our deformable model is initialized with a few user-specified 2D joints and contours of the person. We optimize the parameters of the deformable model by pose fitting and body fitting in an iterative way. Then the clothed and naked 3D shapes of the person can be obtained simultaneously. We illustrate our method for texture mapping and animation. The experimental results on real images demonstrate the effectiveness of our method.", "title": "" }, { "docid": "cfa6b417658cfc1b25200a8ff578ed2c", "text": "The Learning Analytics (LA) discipline analyzes educational data obtained from student interaction with online resources. Most of the data is collected from Learning Management Systems deployed at established educational institutions. In addition, other learning platforms, most notably Massive Open Online Courses such as Udacity and Coursera or other educational initiatives such as Khan Academy, generate large amounts of data. However, there is no generally agreedupon data model for student interactions. 
Thus, analysis tools must be tailored to each system's particular data structure, reducing their interoperability and increasing development costs. Some e-Learning standards designed for content interoperability include data models for gathering student performance information. In this paper, we describe how well-known LA tools collect data, which we link to how two e-Learning standards - IEEE Standard for Learning Technology and Experience API - define their data models. From this analysis, we identify the advantages of using these e-Learning standards from the point of view of Learning Analytics.", "title": "" }, { "docid": "f5662b8a124ad973084088b64004f3f5", "text": "A metal-frame antenna for the long-term evolution/wireless wide area network (LTE/WWAN) operation in the metal-casing tablet computer is presented. The antenna is formed by using two inverted-F antenna (IFA) structures to provide a low band and a high band to, respectively, cover the LTE/WWAN operation in the 824-960 and 1710-2690 MHz bands. The larger IFA has a longer radiating metal strip for the low band, and the smaller IFA has a shorter radiating metal strip for the high band. The two radiating metal strips are configured to be a portion of the metal frame disposed around the edges of the metal back cover of the tablet computer. The projection of the metal frame lies on the edges of the metal back cover, such that there is no ground clearance between the projection and the metal back cover. Furthermore, the feeding and shorting strips with matching networks therein for the two IFAs are disposed on a small dielectric substrate (feed circuit board), which is separated from the system circuit board and the metal back cover. In this case, there is generally no planar space of the metal back cover and system circuit board occupied, and the antenna can cover the 824-960/1710-2690 MHz bands. Results of the proposed antenna are presented. An extended study is also presented to show that the antenna's low-band coverage can be widened from 824-960 to 698-960 MHz. The wider bandwidth coverage is obtained when a switchable inductor bank is applied in the larger IFA.", "title": "" }, { "docid": "d5c545781cc26242da97f5e75535cd6f", "text": "Kutato is a system that takes as input a database of cases and produces a belief network that captures many of the dependence relations represented by those data. This system incorporates a module for determining the entropy of a belief network and a module for constructing belief networks based on entropy calculations. Kutato constructs an initial belief network in which all variables in the database are assumed to be marginally independent. The entropy of this belief network is calculated, and that arc is added that minimizes the entropy of the resulting belief network. Conditional probabilities for an arc are obtained directly from the database. This process continues until an entropy-based threshold is reached. We have tested the system by generating databases from networks using the probabilistic logic-sampling method, and then using those databases as input to Kutato. The system consistently reproduces the original belief networks with high fidelity.", "title": "" }, { "docid": "d9c5bcd63b0f3d45aa037d7b3e80aad3", "text": "In recent years, type II diabetes has become a serious disease that threaten the health and mind of human. Efficient predictive modeling is required for medical researchers and practitioners. 
This study proposes a type II diabetes prediction model based on random forest which aims at analyzing some readily available indicators (age, weight, waist, hip, etc.) effects on diabetes and discovering some rules on given data. The method can significantly reduce the risk of disease through digging out a clear and understandable model for type II diabetes from a medical database. Random forest algorithm uses multiple decision trees to train the samples, and integrates weight of each tree to get the final results. The validation results at school of medicine, University of Virginia shows that the random forest algorithm can greatly reduce the problem of over-fitting of the single decision tree, and it can effectively predict the impact of these readily available indicators on the risk of diabetes. Additionally, we get a better prediction accuracy using random forest than using the naive Bayes algorithm, ID3 algorithm and AdaBoost algorithm.", "title": "" }, { "docid": "3e80dc7319f1241e96db42033c16f6b4", "text": "Automatic expert assignment is a common problem encountered in both industry and academia. For example, for conference program chairs and journal editors, in order to collect \"good\" judgments for a paper, it is necessary for them to assign the paper to the most appropriate reviewers. Choosing appropriate reviewers of course includes a number of considerations such as expertise and authority, but also diversity and avoiding conflicts. In this paper, we explore the expert retrieval problem and implement an automatic paper-reviewer recommendation system that considers aspects of expertise, authority, and diversity. In particular, a graph is first constructed on the possible reviewers and the query paper, incorporating expertise and authority information. Then a Random Walk with Restart (RWR) [1] model is employed on the graph with a sparsity constraint, incorporating diversity information. Extensive experiments on two reviewer recommendation benchmark datasets show that the proposed method obtains performance gains over state-of-the-art reviewer recommendation systems in terms of expertise, authority, diversity, and, most importantly, relevance as judged by human experts.", "title": "" } ]
scidocsrr
d76123f373e28fd6c4fde1108caf3825
A comparison of pilot-aided channel estimation methods for OFDM systems
[ { "docid": "822b3d69fd4c55f45a30ff866c78c2b1", "text": "Orthogonal frequency-division multiplexing (OFDM) modulation is a promising technique for achieving the high bit rates required for a wireless multimedia service. Without channel estimation and tracking, OFDM systems have to use differential phase-shift keying (DPSK), which has a 3-dB signalto-noise ratio (SNR) loss compared with coherent phase-shift keying (PSK). To improve the performance of OFDM systems by using coherent PSK, we investigate robust channel estimation for OFDM systems. We derive a minimum mean-square-error (MMSE) channel estimator, which makes full use of the timeand frequency-domain correlations of the frequency response of time-varying dispersive fading channels. Since the channel statistics are usually unknown, we also analyze the mismatch of the estimator-to-channel statistics and propose a robust channel estimator that is insensitive to the channel statistics. The robust channel estimator can significantly improve the performance of OFDM systems in a rapid dispersive fading channel.", "title": "" } ]
[ { "docid": "1803a9dbb7955862c8a4d046f807897a", "text": "Vertebrate animals exploit the elastic properties of their tendons in several different ways. Firstly, metabolic energy can be saved in locomotion if tendons stretch and then recoil, storing and returning elastic strain energy, as the animal loses and regains kinetic energy. Leg tendons save energy in this way when birds and mammals run, and an aponeurosis in the back is also important in galloping mammals. Tendons may have similar energy-saving roles in other modes of locomotion, for example in cetacean swimming. Secondly, tendons can recoil elastically much faster than muscles can shorten, enabling animals to jump further than they otherwise could. Thirdly, tendon elasticity affects the control of muscles, enhancing force control at the expense of position control.", "title": "" }, { "docid": "81fd4801b7dbe39573a44f2af0e94b9a", "text": "In this paper, we propose a conceptual framework for assessing the salience of landmarks for navigation. Landmark salience is derived as a result of the observer’s point of view, both physical and cognitive, the surrounding environment, and the objects contained therein. This is in contrast to the currently held view that salience is an inherent property of some spatial feature. Salience, in our approach, is expressed as a three-valued Saliency Vector. The components that determine this vector are Perceptual Salience, which defines the exogenous (or passive) potential of an object or region for acquisition of visual attention, Cognitive Salience, which is an endogenous (or active) mode of orienting attention, triggered by informative cues providing advance information about the target location, and Contextual Salience, which is tightly coupled to modality and task to be performed. This separation between voluntary and involuntary direction of visual attention in dependence of the context allows defining a framework that accounts for the interaction between observer, environment, and landmark. We identify the low-level factors that contribute to each type of salience and suggest a probabilistic approach for their integration. Finally, we discuss the implications, consider restrictions, and explore the scope of the framework.", "title": "" }, { "docid": "4bac03c1e5c5cad93595dd38954a8a94", "text": "This paper addresses the problem of path prediction for multiple interacting agents in a scene, which is a crucial step for many autonomous platforms such as self-driving cars and social robots. We present SoPhie; an interpretable framework based on Generative Adversarial Network (GAN), which leverages two sources of information, the path history of all the agents in a scene, and the scene context information, using images of the scene. To predict a future path for an agent, both physical and social information must be leveraged. Previous work has not been successful to jointly model physical and social interactions. Our approach blends a social attention mechanism with a physical attention that helps the model to learn where to look in a large scene and extract the most salient parts of the image relevant to the path. Whereas, the social attention component aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors. SoPhie also takes advantage of GAN to generates more realistic samples and to capture the uncertain nature of the future paths by modeling its distribution. 
All these mechanisms enable our approach to predict socially and physically plausible paths for the agents and to achieve state-of-the-art performance on several different trajectory forecasting benchmarks.", "title": "" }, { "docid": "a872ab9351dc645b5799d576f5f10eb6", "text": "A new framework for advanced manufacturing is being promoted in Germany, and is increasingly being adopted by other countries. The framework represents a coalescing of digital and physical technologies along the product value chain in an attempt to transform the production of goods and services1. It is an approach that focuses on combining technologies such as additive manufacturing, automation, digital services and the Internet of Things, and it is part of a growing movement towards exploiting the convergence between emerging technologies. This technological convergence is increasingly being referred to as the ‘fourth industrial revolution’, and like its predecessors, it promises to transform the ways we live and the environments we live in. (While there is no universal agreement on what constitutes an ‘industrial revolution’, proponents of the fourth industrial revolution suggest that the first involved harnessing steam power to mechanize production; the second, the use of electricity in mass production; and the third, the use of electronics and information technology to automate production.) Yet, without up-front efforts to ensure its beneficial, responsible and responsive development, there is a very real danger that this fourth industrial revolution will not only fail to deliver on its promise, but also ultimately increase the very challenges its advocates set out to solve. At its heart, the fourth industrial revolution represents an unprecedented fusion between and across digital, physical and biological technologies, and a resulting anticipated transformation in how products are made and used2. This is already being experienced with the growing Internet of Things, where dynamic information exchanges between networked devices are opening up new possibilities from manufacturing to lifestyle enhancement and risk management. Similarly, a rapid amplification of 3D printing capabilities is now emerging through the convergence of additive manufacturing technologies, online data sharing and processing, advanced materials, and ‘printable’ biological systems. And we are just beginning to see the commercial use of potentially transformative convergence between cloud-based artificial intelligence and open-source hardware and software, to create novel platforms for innovative human–machine interfaces. These and other areas of development only scratch the surface of how convergence is anticipated to massively extend the impacts of the individual technologies it draws on. This is a revolution that comes with the promise of transformative social, economic and environmental advances — from eliminating disease, protecting the environment, and providing plentiful energy, food and water, to reducing inequity and empowering individuals and communities. Yet, the path towards this utopia-esque future is fraught with pitfalls — perhaps more so than with any former industrial revolution. As more people get closer to gaining access to increasingly powerful converging technologies, a complex risk landscape is emerging that lies dangerously far beyond the ken of current regulations and governance frameworks. 
As a result, we are in danger of creating a global ‘wild west’ of technology innovation, where our good intentions may be among the first casualties. Within this emerging landscape, cyber security is becoming an increasingly important challenge, as global digital networks open up access to manufacturing processes and connected products across the world. The risks of cyber ‘insecurity’ increase by orders of magnitude as manufacturing becomes more distributed and less conventionally securable. Distributed manufacturing is another likely outcome of the fourth industrial revolution. A powerful fusion between online resources, modular and open-source tech, and point-of-source production devices, such as 3D printers, will increasingly enable entrepreneurs to set up shop almost anywhere. While this could be a boon for local economies, it magnifies the ease with which manufacturing can slip the net of conventional regulation, while still having the ability to have a global impact. These and other challenges reflect a blurring of the line between hardware and software systems that is characteristic of the fourth industrial revolution. We are heading rapidly towards a future where hardware manufacturers are able to grow, crash and evolve physical products with the same speed that we have become accustomed to with software products. Yet, manufacturing regulations remain based on product development cycles that span years, not hours. Anticipating this high-speed future, we are already seeing the emergence of hardware capabilities that can be updated at the push of a button. Tesla Motors, for instance, recently released a software update that added hardware-based ‘autopilot’ capabilities to the company’s existing fleet of model S vehicles3. This early demonstration of the convergence between hardware and software reflects a growing capacity to rapidly change the behaviour of hardware systems through software modifications that lies far beyond the capacity of current regulations to identify, monitor and control. This in turn increases the potential risks to health, safety and the environment, simply because well-intentioned technologies are at some point going to fall through the holes in an increasingly inadequate regulatory net. There are many other examples where converging technologies are increasing the gap between what we can do and our understanding of how to do it responsibly. The convergence between robotics, nanotechnology and cognitive augmentation, for instance, and that between artificial intelligence, gene editing and maker communities both push us into uncertain territory. Yet despite the vulnerabilities inherent with fast-evolving technological capabilities that are tightly coupled, complex and poorly regulated, we lack even the beginnings of national or international conceptual frameworks to think about responsible decisionmaking and responsive governance. How vulnerable we will be to unintended and unwanted consequences in this convergent technologies future is unclear. 
What is clear though is that, without new thinking on risk, resilience and governance, and without rapidly emerging abilities to identify early warnings and take corrective action, the chances of systems based around converging technologies failing fast and failing spectacularly will only increase.", "title": "" }, { "docid": "bd38c54756349c002962d0f25aed8d1b", "text": "Textbook and Color Atlas of Traumatic Injuries to the Teeth encompasses the full scope of acute dental trauma, including all aspects of inter-disciplinary treatment. This fourth edition captures the significant advances which have been made in the subject of dental traumatology, since the publication of the last edition more than a decade ago. The comprehensive nature of the book is designed to appeal to distinguished clinicians and scholars of dental traumatology, whether they be oral surgeons, pediatric dentists, endodontists, or from a related specialist community.", "title": "" }, { "docid": "b49925f5380f695ccc3f9a150030051c", "text": "Understanding the behaviour of algorithms is a key element of computer science. However, this learning objective is not always easy to achieve, as the behaviour of some algorithms is complicated or not readily observable, or affected by the values of their input parameters. To assist students in learning the multilevel feedback queue scheduling algorithm (MLFQ), we designed and developed an interactive visualization tool, Marble MLFQ, that illustrates how the algorithm works under various conditions. The tool is intended to supplement course material and instructions in an undergraduate operating systems course. The main features of Marble MLFQ are threefold: (1) It animates the steps of the scheduling algorithm graphically to allow users to observe its behaviour; (2) It provides a series of lessons to help users understand various aspects of the algorithm; and (3) It enables users to customize input values to the algorithm to support exploratory learning.", "title": "" }, { "docid": "74fb6f153fe8d6f8eac0f18c1040a659", "text": "The DAVID Gene Functional Classification Tool http://david.abcc.ncifcrf.gov uses a novel agglomeration algorithm to condense a list of genes or associated biological terms into organized classes of related genes or biology, called biological modules. This organization is accomplished by mining the complex biological co-occurrences found in multiple sources of functional annotation. It is a powerful method to group functionally related genes and terms into a manageable number of biological modules for efficient interpretation of gene lists in a network context.", "title": "" }, { "docid": "d82553a7bf94647aaf60eb36748e567f", "text": "We propose a novel image-based rendering algorithm for handling complex scenes that may include reflective surfaces. Our key contribution lies in treating the problem in the gradient domain. We use a standard technique to estimate scene depth, but assign depths to image gradients rather than pixels.
A novel view is obtained by rendering the horizontal and vertical gradients, from which the final result is reconstructed through Poisson integration using an approximate solution as a data term. Our algorithm is able to handle general scenes including reflections and similar effects without explicitly separating the scene into reflective and transmissive parts, as required by previous work. Our prototype renderer is fully implemented on the GPU and runs in real time on commodity hardware.", "title": "" }, { "docid": "d06f27b688f430acf5652fd4c67905b1", "text": "A comprehensive in vitro study involving antiglycation, antioxidant and anti-diabetic assays was carried out in mature fruits of strawberry. The effect of aqueous extract of mature strawberry fruits on glycation of guanosine with glucose and fructose with or without oxidizing entities like reactive oxygen species was analyzed. Spectral studies showed that glycation and/or fructation of guanosine was significantly inhibited by aqueous extract of strawberry. The UV absorbance of the glycation reactions was found to be maximum at 24 hrs. and decreased consecutively for 48, 72 and 96 hours. Inhibition of oxidative damage due to reactive oxygen species was also observed in presence of the plant extract. To our knowledge, antiglycation activity of strawberry fruit with reference to guanosine is being demonstrated for the first time. To determine the antioxidant activity of the plant extract, in vitro antioxidant enzymes assays (catalase, peroxidase, polyphenol oxidase and ascorbic acid oxidase) and antioxidant assays (DPPH, superoxide anion scavenging activity and xanthine oxidase) were performed. Maximum inhibition activity of 79.36%, 65.62% and 62.78% was observed for DPPH, superoxide anion scavenging and xanthine oxidase, respectively. In antidiabetic assays, IC50 value for alpha – amylase and alpha – glucosidase activity of fruit extract of strawberry was found to be 86.47 ± 1.12μg/ml and 76.83 ± 0.93 μg/ml, respectively. Thus, the aqueous extract of strawberry showed antiglycation, antioxidant and antidiabetic properties indicating that strawberry fruits, as a dietary supplement, may be utilized towards management of diabetes.", "title": "" }, { "docid": "2643c7960df0aed773aeca6e04fde67e", "text": "Many studies utilizing dogs, cats, birds, fish, and robotic simulations of animals have tried to ascertain the health benefits of pet ownership or animal-assisted therapy in the elderly. Several small unblinded investigations outlined improvements in behavior in demented persons given treatment in the presence of animals. Studies piloting the use of animals in the treatment of depression and schizophrenia have yielded mixed results. Animals may provide intangible benefits to the mental health of older persons, such as relief social isolation and boredom, but these have not been formally studied. Several investigations of the effect of pets on physical health suggest animals can lower blood pressure, and dog walkers partake in more physical activity. Dog walking, in epidemiological studies and few preliminary trials, is associated with lower complication risk among patients with cardiovascular disease. Pets may also have harms: they may be expensive to care for, and their owners are more likely to fall. Theoretically, zoonotic infections and bites can occur, but how often this occurs in the context of pet ownership or animal-assisted therapy is unknown. 
Despite the poor methodological quality of pet research after decades of study, pet ownership and animal-assisted therapy are likely to continue due to positive subjective feelings many people have toward animals.", "title": "" }, { "docid": "1165be411612c7d6c09ec0408ffdeaad", "text": "OBJECTIVES\nTo describe and compare 20 m shuttle run test (20mSRT) performance among children and youth across 50 countries; to explore broad socioeconomic indicators that correlate with 20mSRT performance in children and youth across countries and to evaluate the utility of the 20mSRT as an international population health indicator for children and youth.\n\n\nMETHODS\nA systematic review was undertaken to identify papers that explicitly reported descriptive 20mSRT (with 1-min stages) data on apparently healthy 9-17 year-olds. Descriptive data were standardised to running speed (km/h) at the last completed stage. Country-specific 20mSRT performance indices were calculated as population-weighted mean z-scores relative to all children of the same age and sex from all countries. Countries were categorised into developed and developing groups based on the Human Development Index, and a correlational analysis was performed to describe the association between country-specific performance indices and broad socioeconomic indicators using Spearman's rank correlation coefficient.\n\n\nRESULTS\nPerformance indices were calculated for 50 countries using collated data on 1 142 026 children and youth aged 9-17 years. The best performing countries were from Africa and Central-Northern Europe. Countries from South America were consistently among the worst performing countries. Country-specific income inequality (Gini index) was a strong negative correlate of the performance index across all 50 countries.\n\n\nCONCLUSIONS\nThe pattern of variability in the performance index broadly supports the theory of a physical activity transition and income inequality as the strongest structural determinant of health in children and youth. This simple and cost-effective assessment would be a powerful tool for international population health surveillance.", "title": "" }, { "docid": "af495aaae51ead951246733d088a2a47", "text": "In this paper, we present a novel parallel implementation for training Gradient Boosting Decision Trees (GBDTs) on Graphics Processing Units (GPUs). Thanks to the wide use of the open sourced XGBoost library, GBDTs have become very popular in recent years and won many awards in machine learning and data mining competitions. Although GPUs have demonstrated their success in accelerating many machine learning applications, there are a series of key challenges of developing a GPU-based GBDT algorithm, including irregular memory accesses, many small sorting operations and varying data parallel granularities in tree construction. To tackle these challenges on GPUs, we propose various novel techniques (including Run-length Encoding compression and thread/block workload dynamic allocation, and reusing intermediate training results for efficient gradient computation). Our experimental results show that our algorithm named GPU-GBDT is often 10 to 20 times faster than the sequential version of XGBoost, and achieves 1.5 to 2 times speedup over a 40 threaded XGBoost running on a relatively high-end workstation of 20 CPU cores. 
Moreover, GPU-GBDT outperforms its CPU counterpart by 2 to 3 times in terms of performance-price ratio.", "title": "" }, { "docid": "b8f81b8274dc466114d945bb3a597fea", "text": "SIGNIFICANCE\nNonalcoholic fatty liver disease (NAFLD), characterized by liver triacylglycerol build-up, has been growing in the global world in concert with the raised prevalence of cardiometabolic disorders, including obesity, diabetes, and hyperlipemia. Redox imbalance has been suggested to be highly relevant to NAFLD pathogenesis. Recent Advances: As a major health problem, NAFLD progresses to the more severe nonalcoholic steatohepatitis (NASH) condition and predisposes susceptible individuals to liver and cardiovascular disease. Although NAFLD represents the predominant cause of chronic liver disorders, the mechanisms of its development and progression remain incompletely understood, even if various scientific groups ascribed them to the occurrence of insulin resistance, dyslipidemia, inflammation, and apoptosis. Nevertheless, oxidative stress (OxS) more and more appears as the most important pathological event during NAFLD development and the hallmark between simple steatosis and NASH manifestation.\n\n\nCRITICAL ISSUES\nThe purpose of this article is to summarize recent developments in the understanding of NAFLD, essentially focusing on OxS as a major pathogenetic mechanism. Various attempts to translate reactive oxygen species (ROS) scavenging by antioxidants into experimental and clinical studies have yielded mostly encouraging results.\n\n\nFUTURE DIRECTIONS\nAlthough augmented concentrations of ROS and faulty antioxidant defense have been associated to NAFLD and related complications, mechanisms of action and proofs of principle should be highlighted to support the causative role of OxS and to translate its concept into the clinic. Antioxid. Redox Signal. 26, 519-541.", "title": "" }, { "docid": "3933d3ae98f7f83e8b501858402bfefe", "text": "The evaluation of the postural control system (PCS) has applications in rehabilitation, sports medicine, gait analysis, fall detection, and diagnosis of many diseases associated with a reduction in balance ability. Standing involves significant muscle use to maintain balance, making standing balance a good indicator of the health of the PCS. Inertial sensor systems have been used to quantify standing balance by assessing displacement of the center of mass, resulting in several standardized measures. Electromyogram (EMG) sensors directly measure the muscle control signals. Despite strong evidence of the potential of muscle activity for balance evaluation, less study has been done on extracting unique features from EMG data that express balance abnormalities. In this paper, we present machine learning and statistical techniques to extract parameters from EMG sensors placed on the tibialis anterior and gastrocnemius muscles, which show a strong correlation to the standard parameters extracted from accelerometer data. This novel interpretation of the neuromuscular system provides a unique method of assessing human balance based on EMG signals. In order to verify the effectiveness of the introduced features in measuring postural sway, we conduct several classification tests that operate on the EMG features and predict significance of different balance measures.", "title": "" }, { "docid": "a212ba02d2546ee33e42fe26f4b05295", "text": "The requirement to operate aircraft in GPS-denied environments can be met by using visual odometry. 
Aiming at a full-scale aircraft equipped with a high-accuracy inertial navigation system (INS), the proposed method combines vision and the INS for odometry estimation. With such an INS, the aircraft orientation is accurate with low drift, but it contains high-frequency noise that can affect the vehicle motion estimation, causing position estimation to drift. Our method takes the INS orientation as input and estimates translation. During motion estimation, the method virtually rotates the camera by reparametrizing features with their depth direction perpendicular to the ground. This partially eliminates error accumulation in motion estimation caused by the INS high-frequency noise, resulting in a slow drift. We experiment on two hardware configurations in the acquisition of depth for the visual features: 1) the height of the aircraft above the ground is measured by an altimeter assuming that the imaged ground is a local planar patch, and 2) the depth map of the ground is registered with a two-dimensional laser in a push-broom configuration. The method is tested with data collected from a full-scale helicopter. The accumulative flying distance for the overall tests is approximately 78 km. We observe slightly better accuracy with the push-broom laser than the altimeter. © 2015 Wiley Periodicals, Inc.", "title": "" }, { "docid": "b91c93a552e7d7cc09d477289c986498", "text": "Application Programming Interface (API) documents are a typical way of describing legal usage of reusable software libraries, thus facilitating software reuse. However, even with such documents, developers often overlook some documents and build software systems that are inconsistent with the legal usage of those libraries. Existing software verification tools require formal specifications (such as code contracts), and therefore cannot directly verify the legal usage described in natural language text of API documents against the code using that library. However, in practice, most libraries do not come with formal specifications, thus hindering tool-based verification. To address this issue, we propose a novel approach to infer formal specifications from natural language text of API documents. Our evaluation results show that our approach achieves an average of 92% precision and 93% recall in identifying sentences that describe code contracts from more than 2500 sentences of API documents. Furthermore, our results show that our approach has an average 83% accuracy in inferring specifications from over 1600 sentences describing code contracts.", "title": "" }, { "docid": "5f49c93d7007f0f14f1410ce7805b29a", "text": "Psychoeducation based on a biopsychosocial model of pain aims at recognizing and changing the individual factors that trigger and maintain pain. The influence of cognitive appraisals, emotional processing, and pain-related behaviour is the central focus. Encouraging and guiding improved self-observation is the prerequisite for using active self-control strategies and for increasing self-efficacy expectations. This includes developing and practising pain-coping strategies such as attention diversion and enjoyment training. Particular importance is attached to establishing activity regulation in order to structure an appropriate balance between recovery and demand phases.
Possible interventions here include teaching relaxation techniques, problem-solving training, specific skills training, and elements of cognitive therapy. Building alternative cognitive and action-oriented coping approaches serves to improve the management of internal and external stressors. The beneficial conditions of group-dynamic processes are exploited. Individual therapeutic interventions address specific psychological comorbidities and provide individual support for occupational and social reintegration. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.", "title": "" }, { "docid": "a059b3ef66c54ecbe43aa0e8d35b9da8", "text": "Completion of lagging strand DNA synthesis requires processing of up to 50 million Okazaki fragments per cell cycle in mammalian cells. Even in yeast, the Okazaki fragment maturation happens approximately a million times during a single round of DNA replication. Therefore, efficient processing of Okazaki fragments is vital for DNA replication and cell proliferation. During this process, primase-synthesized RNA/DNA primers are removed, and Okazaki fragments are joined into an intact lagging strand DNA. The processing of RNA/DNA primers requires a group of structure-specific nucleases typified by flap endonuclease 1 (FEN1). Here, we summarize the distinct roles of these nucleases in different pathways for removal of RNA/DNA primers. Recent findings reveal that Okazaki fragment maturation is highly coordinated. The dynamic interactions of polymerase δ, FEN1 and DNA ligase I with proliferating cell nuclear antigen allow these enzymes to act sequentially during Okazaki fragment maturation. Such protein-protein interactions may be regulated by post-translational modifications. We also discuss studies using mutant mouse models that suggest two distinct cancer etiological mechanisms arising from defects in different steps of Okazaki fragment maturation. Mutations that affect the efficiency of RNA primer removal may result in accumulation of unligated nicks and DNA double-strand breaks. These DNA strand breaks can cause varying forms of chromosome aberrations, contributing to development of cancer that associates with aneuploidy and gross chromosomal rearrangement.
On the other hand, mutations that impair editing out of polymerase α incorporation errors result in cancer displaying a strong mutator phenotype.", "title": "" }, { "docid": "5a7e85bd8df70ab29d7549bed6cf440e", "text": "The surgery-first approach in orthognathic surgery has recently created a broader interest in completely eliminating time-consuming preoperative orthodontic treatment. Available evidence on the surgery-first approach should be appraised to support its use in orthognathic surgery. A MEDLINE search using the keywords \"surgery first\" and \"orthognathic surgery\" was conducted to select studies using the surgery-first approach. We also manually searched the reference list of the selected keywords to include articles not selected by the MEDLINE search. The search identified 18 articles related to the surgery-first approach. There was no randomized controlled clinical trial. Four papers were excluded as the content was only personal opinion or basic scientific research. Three studies were retrospective cohort studies in nature. The other 11 studies were case reports. For skeletal Class III surgical correction, the final long-term outcomes for maxillofacial and dental relationship were not significantly different between the surgery-first approach and the orthodontics-first approach in transverse (e.g., intercanine or intermolar width) dimension, vertical (e.g., anterior open bite, lower anterior facial height) dimension, and sagittal (e.g., anterior-posterior position of pogonion and lower incisors) dimension. Total treatment duration was substantially shorter in cases of surgery-first approach use. In conclusion, most published studies related to the surgery-first approach were mainly on orthognathic correction of skeletal Class III malocclusion. Both the surgery-first approach and orthodontics-first approach had similar long-term outcomes in dentofacial relationship. However, the surgery-first approach had shorter treatment time.", "title": "" } ]
scidocsrr
febe84de83a1796e385505f5cd3d2e56
Brand Name, Sales Promotion and Consumers' Online Purchase Intention for Cell-phone Brands
[ { "docid": "3a90b8f46a8db30438ff54e5bd5e6b4c", "text": "To address the lack of systematic research on the nature and effectiveness of online retailing, a conceptual model is proposed which examines the potential influence of atmospheric qualities of a virtual store. The underlying premise is that, given the demonstrated impact of store environment on shopper behaviors and outcomes in a traditional retailing setting, such atmospheric cues are likely to play a role in the online shopping context. A Stimulus–Organism–Response (S–O–R) framework is used as the basis of the model which posits that atmospheric cues of the online store, through the intervening effects of affective and cognitive states, influence the outcomes of online retail shopping in terms of approach/avoidance behaviors. Two individual traits, involvement and atmospheric responsiveness, are hypothesized to moderate the relationship between atmospheric cues and shoppers’ affective and cognitive reactions. Propositions are derived and the research implications of the model are presented. D 2001 Elsevier Science Inc. All rights reserved.", "title": "" }, { "docid": "7c6d1a1b5002e54f8ee28312b2dc25ba", "text": "This study aims to investigate the direct effects of store image and service quality on brand image and purchase intention for a private label brand (PLB). This study also investigates the indirect effects mediated by perceived risk and price consciousness on these relationships. The sample in this study consisted of three hundred and sixty (360) customers of the Watsons and Cosmed chain of drugstores. The pre-test results identified ‘‘Watsons’’ and ‘‘My Beauty Diary’’ as the research brands of the PLB for the two stores, respectively. This study uses LISREL to examine the hypothesized relationships. This study reveals that (1) store image has a direct and positive effect on the purchase intention of the PLB; (2) service quality has a direct and positive effect on the PLB image; (3) the perceived risk of PLB products has a mediating effect on the relationship between the brand image and the consumers purchase intention of the PLB. 2010 Australian and New Zealand Marketing Academy. All rights reserved.", "title": "" }, { "docid": "4b570eb16d263b2df0a8703e9135f49c", "text": "ions. They also presume that consumers carefully calculate the give and get components of value, an assumption that did not hold true for most consumers in the exploratory study. Price as a Quality Indicator Most experimental studies related to quality have focused on price as the key extrinsic quality signal. As suggested in the propositions, price is but one of several potentially useful extrinsic cues; brand name or package may be equally or more important, especially in packaged goods. Further, evidence of a generalized price-perceived quality relationship is inconclusive. Quality research may benefit from a de-emphasis on price as the main extrinsic quality indicator. Inclusion of other important indicators, as well as identification of situations in which each of those indicators is important, may provide more interesting and useful answers about the extrinsic signals consumers use. Management Implications An understanding of what quality and value mean to consumers offers the promise of improving brand positions through more precise market analysis and segmentation, product planning, promotion, and pricing strategy. 
The model presented here suggests the following strategies that can be implemented to understand and capitalize on brand quality and value. Close the Quality Perception Gap Though managers increasingly acknowledge the importance of quality, many continue to define and measure it from the company's perspective. Closing the gap between objective and perceived quality requires that the company view quality the way the consumer does. Research that investigates which cues are important and how consumers form impressions of qualConsumer Perceptions of Price, Quality, and Value / 17 ity based on those technical, objective cues is necessary. Companies also may benefit from research that identifies the abstract dimensions of quality desired by consumers in a product class. Identify Key Intrinsic and Extrinsic Attribute", "title": "" } ]
[ { "docid": "4ed64bba175a8c1ff5a6c277c62fa9ac", "text": "In a ciphertext policy attribute based encryption system, a user’s private key is associated with a set of attributes (describing the user) and an encrypted ciphertext will specify an access policy over attributes. A user will be able to decrypt if and only if his attributes satisfy the ciphertext’s policy. In this work, we present the first construction of a ciphertext-policy attribute based encryption scheme having a security proof based on a number theoretic assumption and supporting advanced access structures. Previous CP-ABE systems could either support only very limited access structures or had a proof of security only in the generic group model. Our construction can support access structures which can be represented by a bounded size access tree with threshold gates as its nodes. The bound on the size of the access trees is chosen at the time of the system setup. Our security proof is based on the standard Decisional Bilinear Diffie-Hellman assumption.", "title": "" }, { "docid": "2ab32a04c2d0af4a76ad29ce5a3b2748", "text": "The future of solid-state lighting relies on how the performance parameters will be improved further for developing high-brightness light-emitting diodes. Eventually, heat removal is becoming a crucial issue because the requirement of high brightness necessitates high-operating current densities that would trigger more joule heating. Here we demonstrate that the embedded graphene oxide in a gallium nitride light-emitting diode alleviates the self-heating issues by virtue of its heat-spreading ability and reducing the thermal boundary resistance. The fabrication process involves the generation of scalable graphene oxide microscale patterns on a sapphire substrate, followed by its thermal reduction and epitaxial lateral overgrowth of gallium nitride in a metal-organic chemical vapour deposition system under one-step process. The device with embedded graphene oxide outperforms its conventional counterpart by emitting bright light with relatively low-junction temperature and thermal resistance. This facile strategy may enable integration of large-scale graphene into practical devices for effective heat removal.", "title": "" }, { "docid": "089d74cd4c98cf9695a54ea068c7d957", "text": "Wireless Body Area Networks (WBANs) have developed as an effective solution for a wide range of healthcare, military and sports applications. Most of the proposed works studied efficient data collection from individual and traditional WBANs. Cloud computing is a new computing model that is continuously evolving and spreading. This paper presents a novel cloudlet-based efficient data collection system in WBANs. The goal is to have a large scale of monitored data of WBANs to be available at the end user or to the service provider in reliable manner. A prototype of WBANs, including Virtual Machine (VM) and Virtualized Cloudlet (VC) has been proposed for simulation characterizing efficient data collection in WBANs. Using the prototype system, we provide a scalable storage and processing infrastructure for large scale WBANs system. This infrastructure will be efficiently able to handle the large size of data generated by the WBANs system, by storing these data and performing analysis operations on it. The proposed model is fully supporting for WBANs system mobility using cost effective communication technologies of WiFi and cellular which are supported by WBANs and VC systems. 
This is in contrast to many of the available mHealth solutions, which are limited to high-cost communication technologies such as 3G and LTE. The performance of the proposed prototype is evaluated via an extended version of the CloudSim simulator. It is shown that the average power consumption and delay of the collected data are tremendously decreased by increasing the number of VMs and VCs. 2014 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4c0c4b68cdfa1cf684eabfa20ee0b88b", "text": "Orthogonal Frequency Division Multiplexing (OFDM) is an attractive technique for wireless communication over frequency-selective fading channels. OFDM suffers from a high Peak-to-Average Power Ratio (PAPR), which limits OFDM usage and reduces the efficiency of the High Power Amplifier (HPA) or badly degrades the BER. Many PAPR reduction techniques have been proposed in the literature. PAPR reduction techniques can be classified into blind receiver and non-blind receiver techniques. Active Constellation Extension (ACE) is one of the best blind receiver techniques, while Partial Transmit Sequence (PTS) can work as either a blind or a non-blind technique. PTS has a great PAPR reduction gain at the expense of increased computational complexity. In this paper we combine PTS with ACE in four possible ways to be suitable for blind receiver applications with better performance than the conventional methods (i.e. PTS and ACE). Results show that the ACE-PTS scheme is the best among them. Expectedly, any hybrid technique has computational complexity larger than that of its components. However, ACE-PTS can be used to achieve the same performance as PTS, or even better, with a smaller number of subblocks (i.e. with less computational complexity), especially in low-order modulation schemes (e.g. 4-QAM and 16-QAM). Results show that ACE-PTS with V=8 can perform similar to or better than PTS with V=10 in 16-QAM or 4-QAM, respectively, with 74% and 40.5% reductions in the required numbers of additions and multiplications, respectively.", "title": "" }, { "docid": "e051cae09a2a626b9d5259f4371fe67c", "text": "The main aim of this paper is to discuss the Internet of Things in a wider sense, with emphasis on protocols, technologies and applications along with related issues. The key factor in the IoT concept is the integration of different technologies. The IoT is empowered by the latest developments in RFID, smart sensors, communication technologies, and Internet protocols. The primary premise is to have smart sensors interacting directly to deliver a class of applications without any external or human participation. Recent developments in Internet, smartphone, and machine-to-machine (M2M) technologies can be considered the first phase of the IoT. In the coming years the IoT is expected to be one of the main hubs bridging various technologies, connecting smart physical objects together and allowing different applications in support of smart decision making. In this paper we discuss the IoT architecture and the technical aspects related to the IoT. We then give an overview of IoT technologies, protocols and applications and the related issues, with a comparison to other survey papers. Our main aim is to provide researchers and application developers with a framework for how different protocols work, an overview of some key IoT issues, and the relation between the IoT and other emerging technologies, including big data analytics and cloud computing.", "title": "" }, { "docid": "f3781fdcd1e92050c353165f4d7daab3", "text": "Compassion has been suggested to be a strong motivator for prosocial behavior.
While research has demonstrated that compassion training has positive effects on mood and health, we do not know whether it also leads to increases in prosocial behavior. We addressed this question in two experiments. In Experiment 1, we introduce a new prosocial game, the Zurich Prosocial Game (ZPG), which allows for repeated, ecologically valid assessment of prosocial behavior and is sensitive to the influence of reciprocity, helping cost, and distress cues on helping behavior. Experiment 2 shows that helping behavior in the ZPG increased in participants who had received short-term compassion training, but not in participants who had received short-term memory training. Interindividual differences in practice duration were specifically related to changes in the amount of helping under no-reciprocity conditions. Our results provide first evidence for the positive impact of short-term compassion training on prosocial behavior towards strangers in a training-unrelated task.", "title": "" }, { "docid": "aff3f2e70cb7f6dbff9dad0881e3e86f", "text": "Knowledge graphs holistically integrate information about entities from multiple sources. A key step in the construction and maintenance of knowledge graphs is the clustering of equivalent entities from different sources. Previous approaches for such an entity clustering suffer from several problems, e.g., the creation of overlapping clusters or the inclusion of several entities from the same source within clusters. We therefore propose a new entity clustering algorithm CLIP that can be applied both to create entity clusters and to repair entity clusters determined with another clustering scheme. In contrast to previous approaches, CLIP not only uses the similarity between entities for clustering but also further features of entity links such as the so-called link strength. To achieve a good scalability we provide a parallel implementation of CLIP based on Apache Flink. Our evaluation for different datasets shows that the new approach can achieve substantially higher cluster quality than previous approaches.", "title": "" }, { "docid": "b0b193c3c72bb1543b62545d496cdbe0", "text": "The Generalized Traveling Salesman Problem is a variation of the well-known Traveling Salesman Problem in which the set of nodes is divided into clusters; the objective is to find a minimum-cost tour passing through one node from each cluster. We present an effective heuristic for this problem. The method combines a genetic algorithm (GA) with a local tour improvement heuristic. Solutions are encoded using random keys, which circumvent the feasibility problems encountered when using traditional GA encodings. On a set of 41 standard test problems with up to 442 nodes, the heuristic found solutions that were optimal in most cases and were within 1% of optimality in all but the largest problems, with computation times generally within 10 seconds for the smaller problems and a few minutes for the larger ones. The heuristic outperforms all other heuristics published to date in both solution quality and computation time.", "title": "" }, { "docid": "44d35096c19c909c00b56a474d00377e", "text": "This paper introduces the large scale visual search algorithm and system infrastructure at Alibaba. The following challenges are discussed under the E-commercial circumstance at Alibaba (a) how to handle heterogeneous image data and bridge the gap between real-shot images from user query and the online images. (b) how to deal with large scale indexing for massive updating data. 
(c) how to train deep models for effective feature representation without huge human annotations. (d) how to improve the user engagement by considering the quality of the content. We take advantage of large image collection of Alibaba and state-of-the-art deep learning techniques to perform visual search at scale. We present solutions and implementation details to overcome those problems and also share our learnings from building such a large scale commercial visual search engine. Specifically, model and search-based fusion approach is introduced to effectively predict categories. Also, we propose a deep CNN model for joint detection and feature learning by mining user click behavior. The binary index engine is designed to scale up indexing without compromising recall and precision. Finally, we apply all the stages into an end-to-end system architecture, which can simultaneously achieve highly efficient and scalable performance adapting to real-shot images. Extensive experiments demonstrate the advancement of each module in our system. We hope visual search at Alibaba becomes more widely incorporated into today's commercial applications.", "title": "" }, { "docid": "a33d982b4dde7c22ffc3c26214b35966", "text": "Background: In most cases, bug resolution is a collaborative activity among developers in software development where each developer contributes his or her ideas on how to resolve the bug. Although only one developer is recorded as the actual fixer for the bug, the contribution of the developers who participated in the collaboration cannot be neglected.\n Aims: This paper proposes a new approach, called DRETOM (Developer REcommendation based on TOpic Models), to recommending developers for bug resolution in collaborative behavior.\n Method: The proposed approach models developers' interest in and expertise on bug resolving activities based on topic models that are built from their historical bug resolving records. Given a new bug report, DRETOM recommends a ranked list of developers who are potential to participate in and contribute to resolving the new bug according to these developers' interest in and expertise on resolving it.\n Results: Experimental results on Eclipse JDT and Mozilla Firefox projects show that DRETOM can achieve high recall up to 82% and 50% with top 5 and top 7 recommendations respectively.\n Conclusion: Developers' interest in bug resolving activities should be taken into consideration. On condition that the parameter θ of DRETOM is set properly with trials, the proposed approach is practically useful in terms of recall.", "title": "" }, { "docid": "d143e0dadb1b145bb4293024b46c2c8e", "text": "Firms spend billions of dollars developing advertising content, yet there is little field evidence on how much or how it affects demand. We analyze a direct mail field experiment in South Africa implemented by a consumer lender that randomized advertising content, loan price, and loan offer deadlines simultaneously. We find that advertising content significantly affects demand. Although it was difficult to predict ex ante which specific advertising features would matter most in this context, the features that do matter have large effects. Showing fewer example loans, not suggesting a particular use for the loan, or including a photo of an attractive woman increases loan demand by about as much as a 25% reduction in the interest rate. The evidence also suggests that advertising content persuades by appealing “peripherally” to intuition rather than reason. 
Although the advertising content effects point to an important role for persuasion and related psychology, our deadline results do not support the psychological prediction that shorter deadlines may help overcome time-management problems; instead, demand strongly increases with longer deadlines. Gender Connection Gender Informed Analysis Gender Outcomes Gender disaggregated access to credit IE Design Randomized Control Trial Intervention The study uses a large-scale direct-mail field experiment to study the effects of advertising content on real decisions, involving nonnegligible sums, among experienced decision makers. A consumer lender in South Africa randomized advertising content and the interest rate in actual offers to 53,000 former clients. The variation in advertising content comes from eight “features” that varied the presentation of the loan offer. We worked together with the lender to create six features relevant to the extensive literature (primarily from laboratory experiments in psychology and decision sciences) on how “frames” and “cues” may affect choices. Specifically, mailers varied in whether they included a person’s photograph on the letter, suggestions for how to use the loan proceeds, a large or small table of example loans, information about the interest rate as well as the monthly payments, a comparison to competitors’ interest rates, and mention of a promotional raffle for a cell phone. Mailers also included two features that were the lender’s choice, rather than motivated by a body of psychological evidence: reference to the interest rate as “special” or “low,” and mention of speaking the local language. Our research design enables us to estimate demand sensitivity to advertising content and to compare it directly to price sensitivity. An additional randomization of the offer expiration date also allows us to study demand sensitivity to deadlines. Intervention Period The bank offered loans with repayment periods ranging from 4 to 18 months. Deadlines for response were randomly allocated from 2 weeks to 6 weeks. Sample population 5194 former clients who had borrowed from the money-lender in the previous 24 months. Comparison conditions There are six different features of the pamphlet that were randomized. There was no control group. Unit of analysis Individual borrower Evaluation Period The study evaluates responses to the mail advertising experiment. Results Simplifying the loan description led to a significant increase in takeup of the loan, equivalent to a 200 basis point reduction in interest rates. Including a comparison feature in the letter had no impact on takeup. The race of the person featured in the photo had no impact on takeup of the loan. The gender of the person featured led to a significant increase in takeup when a woman was featured; the effect size was also similar to a 200 basis point reduction in the interest rate. Male clients were much more likely to take up the loan when a woman was featured. Featuring a man did not affect the decision making of female clients.
Including a promotional giveaway and a suggestion phone call both significantly increased takeup. Primary study limitations Because of the large amount of variations, the sample size only allowed for the identification of economically large effects. Funding Source National Science Foundation, The Bill and Melinda Gates Foundation, USAID/BASIS Reference(s) Bertrand, M., Karlan, D., Mullainathan, S., Shafir, E., & Zinman, J. (2010) \"What's advertising content worth? Evidence from a consumer credit marketing field experiment,\" The Quarterly Journal of Economics, 125(1), 263-306. Link to Studies http://qje.oxfordjournals.org/content/125/1/263.short Microdata", "title": "" }, { "docid": "7d0c25928504a9cb5879204eb3eeaf50", "text": "This article is the second of a two-part tutorial on visual servo control. In this tutorial, we have only considered velocity controllers. It is convenient for most of classical robot arms. However, the dynamics of the robot must of course be taken into account for high speed task, or when we deal with mobile nonholonomic or underactuated robots. As for the sensor, geometrical features coming from a classical perspective camera is considered. Features related to the image motion or coming from other vision sensors necessitate to revisit the modeling issues to select adequate visual features. Finally, fusing visual features with data coming from other sensors at the level of the control scheme will allow to address new research topics", "title": "" }, { "docid": "9e5aa162d1eecefe11abe5ecefbc11e3", "text": "Efficient algorithms for 3D character control in continuous control setting remain an open problem in spite of the remarkable recent advances in the field. We present a sampling-based model-predictive controller that comes in the form of a Monte Carlo tree search (MCTS). The tree search utilizes information from multiple sources including two machine learning models. This allows rapid development of complex skills such as 3D humanoid locomotion with less than a million simulation steps, in less than a minute of computing on a modest personal computer. We demonstrate locomotion of 3D characters with varying topologies under disturbances such as heavy projectile hits and abruptly changing target direction. In this paper we also present a new way to combine information from the various sources such that minimal amount of information is lost. We furthermore extend the neural network, involved in the algorithm, to represent stochastic policies. Our approach yields a robust control algorithm that is easy to use. While learning, the algorithm runs in near real-time, and after learning the sampling budget can be reduced for real-time operation.", "title": "" }, { "docid": "59c2e1dcf41843d859287124cc655b05", "text": "Atherosclerotic cardiovascular disease (ASCVD) is the most common cause of death in most Western countries. Nutrition factors contribute importantly to this high risk for ASCVD. Favourable alterations in diet can reduce six of the nine major risk factors for ASCVD, i.e. high serum LDL-cholesterol levels, high fasting serum triacylglycerol levels, low HDL-cholesterol levels, hypertension, diabetes and obesity. Wholegrain foods may be one the healthiest choices individuals can make to lower the risk for ASCVD. Epidemiological studies indicate that individuals with higher levels (in the highest quintile) of whole-grain intake have a 29 % lower risk for ASCVD than individuals with lower levels (lowest quintile) of whole-grain intake. 
It is of interest that neither the highest levels of cereal fibre nor the highest levels of refined cereals provide appreciable protection against ASCVD. Generous intake of whole grains also provides protection from development of diabetes and obesity. Diets rich in wholegrain foods tend to decrease serum LDL-cholesterol and triacylglycerol levels as well as blood pressure while increasing serum HDL-cholesterol levels. Whole-grain intake may also favourably alter antioxidant status, serum homocysteine levels, vascular reactivity and the inflammatory state. Whole-grain components that appear to make major contributions to these protective effects are: dietary fibre; vitamins; minerals; antioxidants; phytosterols; other phytochemicals. Three servings of whole grains daily are recommended to provide these health benefits.", "title": "" }, { "docid": "17f8e69318139b372f4b5a5745702d4d", "text": "In light of proposals to improve diets by shifting food prices, it is important to understand how price changes affect demand for various foods. We reviewed 160 studies on the price elasticity of demand for major food categories to assess mean elasticities by food category and variations in estimates by study design. Price elasticities for foods and nonalcoholic beverages ranged from 0.27 to 0.81 (absolute values), with food away from home, soft drinks, juice, and meats being most responsive to price changes (0.7-0.8). As an example, a 10% increase in soft drink prices should reduce consumption by 8% to 10%. Studies estimating price effects on substitutions from unhealthy to healthy food and price responsiveness among at-risk populations are particularly needed.", "title": "" }, { "docid": "94bb7d2329cbea921c6f879090ec872d", "text": "We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. An interactive version of this paper is available at https://worldmodels.github.io", "title": "" }, { "docid": "9f5f79a19d3a181f5041a7b5911db03a", "text": "BACKGROUND\nNucleoside analogues against herpes simplex virus (HSV) have been shown to suppress shedding of HSV type 2 (HSV-2) on genital mucosal surfaces and may prevent sexual transmission of HSV.\n\n\nMETHODS\nWe followed 1484 immunocompetent, heterosexual, monogamous couples: one with clinically symptomatic genital HSV-2 and one susceptible to HSV-2. The partners with HSV-2 infection were randomly assigned to receive either 500 mg of valacyclovir once daily or placebo for eight months. The susceptible partner was evaluated monthly for clinical signs and symptoms of genital herpes. Source partners were followed for recurrences of genital herpes; 89 were enrolled in a substudy of HSV-2 mucosal shedding. Both partners were counseled on safer sex and were offered condoms at each visit. 
The predefined primary end point was the reduction in transmission of symptomatic genital herpes.\n\n\nRESULTS\nClinically symptomatic HSV-2 infection developed in 4 of 743 susceptible partners who were given valacyclovir, as compared with 16 of 741 who were given placebo (hazard ratio, 0.25; 95 percent confidence interval, 0.08 to 0.75; P=0.008). Overall, acquisition of HSV-2 was observed in 14 of the susceptible partners who received valacyclovir (1.9 percent), as compared with 27 (3.6 percent) who received placebo (hazard ratio, 0.52; 95 percent confidence interval, 0.27 to 0.99; P=0.04). HSV DNA was detected in samples of genital secretions on 2.9 percent of the days among the HSV-2-infected (source) partners who received valacyclovir, as compared with 10.8 percent of the days among those who received placebo (P<0.001). The mean rates of recurrence were 0.11 per month and 0.40 per month, respectively (P<0.001).\n\n\nCONCLUSIONS\nOnce-daily suppressive therapy with valacyclovir significantly reduces the risk of transmission of genital herpes among heterosexual, HSV-2-discordant couples.", "title": "" }, { "docid": "c0d2a2b5d9251bdd4fc65532abe3a152", "text": "BACKGROUND\nTo improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets.\n\n\nOBJECTIVE\nThis study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data.\n\n\nMETHODS\nThis study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care management allocation and pilot one model with care managers; and (3) perform simulations to estimate the impact of adopting Auto-ML on US patient outcomes.\n\n\nRESULTS\nWe are currently writing Auto-ML's design document. 
We intend to finish our study by around the year 2022.\n\n\nCONCLUSIONS\nAuto-ML will generalize to various clinical prediction/classification problems. With minimal help from data scientists, health care researchers can use Auto-ML to quickly build high-quality models. This will boost wider use of machine learning in health care and improve patient outcomes.", "title": "" }, { "docid": "d74c287c60b404961fc1775ddffc7d46", "text": "Numerous dietary compounds, ubiquitous in fruits, vegetables and spices have been isolated and evaluated during recent years for their therapeutic potential. These compounds include flavonoid and non-flavonoid polyphenols, which describe beneficial effects against a variety of ailments. The notion that these plant products have health promoting effects emerged because their intake was related to a reduced incidence of cancer, cardiovascular, neurological, respiratory, and age-related diseases. Exposure of the body to a stressful environment challenges cell survival and increases the risk of chronic disease developing. The polyphenols afford protection against various stress-induced toxicities through modulating intercellular cascades which inhibit inflammatory molecule synthesis, the formation of free radicals, nuclear damage and induce antioxidant enzyme expression. These responses have the potential to increase life expectancy. The present review article focuses on curcumin, resveratrol, and flavonoids and seeks to summarize their anti-inflammatory, cytoprotective and DNA-protective properties.", "title": "" }, { "docid": "9643a30947fa302d4befd2d7b8b176a6", "text": "To ensure that biobanks reach their full potential, better engagement of the public is needed. The authors argue that the principle of reciprocity should be at the core of these efforts.", "title": "" } ]
scidocsrr
3c873fe84f598471dde2ed6ce8fb0e78
Identifying the characteristics of vulnerable code changes: an empirical study
[ { "docid": "6cf18bea11ea8e95f24b7db69d3924e2", "text": "Experimentation in software engineering is necessar y but difficult. One reason is that there are a lar ge number of context variables, and so creating a cohesive under standing of experimental results requires a mechani sm for motivating studies and integrating results. It requ ires a community of researchers that can replicate studies, vary context variables, and build models that represent the common observations about the discipline. This paper discusses the experience of the authors, based upon a c llection of experiments, in terms of a framewo rk f r organizing sets of related studies. With such a fra mework, experiments can be viewed as part of common families of studies, rather than being isolated events. Common families of studies can contribute to important and relevant hypotheses that may not be suggested by individual experiments. A framework also facilitates building knowledge in an incremental manner through the replication of experiments within families of studies. To support the framework, this paper discusses the exp riences of the authors in carrying out empirica l studies, with specific emphasis on persistent problems encountere d in xperimental design, threats to validity, crit eria for evaluation, and execution of experiments in the dom ain of software engineering.", "title": "" } ]
[ { "docid": "f29d0ea5ff5c96dadc440f4d4aa229c6", "text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.", "title": "" }, { "docid": "d2a89459ca4a0e003956d6fe4871bb34", "text": "In this paper, a high-efficiency high power density LLC resonant converter with a matrix transformer is proposed. A matrix transformer can help reduce leakage inductance and the ac resistance of windings so that the flux cancellation method can then be utilized to reduce core size and loss. Synchronous rectifier (SR) devices and output capacitors are integrated into the secondary windings to eliminate termination-related winding losses, via loss and reduce leakage inductance. A 1 MHz 390 V/12 V 1 kW LLC resonant converter prototype is built to verify the proposed structure. The efficiency can reach as high as 95.4%, and the power density of the power stage is around 830 W/in3.", "title": "" }, { "docid": "8c36e881f03a1019158cdae2e5de876c", "text": "The projects with embedded systems are used for many different purposes, being a major challenge for the community of developers of such systems. As we benefit from technological advances the complexity of designing an embedded system increases significantly. This paper presents GERSE, a guideline to requirements elicitation for embedded systems. Despite of advances in the area of embedded systems, there is a shortage of requirements elicitation techniques that meet the particularities of this area. The contribution of GERSE is to improve the capture process and organization of the embedded systems requirements.", "title": "" }, { "docid": "5fd97a266042ba119976c43e47dbe2ab", "text": "The increasing availability of heterogeneous XML sources has raised a number of issues concerning how to represent and manage these semi-structured data. In recent years due to the importance of managing these resources and extracting knowledge from them, lots of methods have been proposed in order to represent and cluster them in different ways. Different similarity measures have been extended and also in some context semantic issues have been taken into account. In this context, we review different XML clustering methods with considering different representation methods such as tree based and vector based with use of different similarity measures. We also propose taxonomy for these proposed methods.", "title": "" }, { "docid": "dcd9a430a69fc3a938ea1068273627ff", "text": "Background Nursing theory should provide the principles that underpin practice and help to generate further nursing knowledge. However, a lack of agreement in the professional literature on nursing theory confuses nurses and has caused many to dismiss nursing theory as irrelevant to practice. 
This article aims to identify why nursing theory is important in practice. Conclusion By giving nurses a sense of identity, nursing theory can help patients, managers and other healthcare professionals to recognise the unique contribution that nurses make to the healthcare service ( Draper 1990 ). Providing a definition of nursing theory also helps nurses to understand their purpose and role in the healthcare setting.", "title": "" }, { "docid": "b56f65fd08c8b6a9fe9ff05441ff8734", "text": "While symbolic parsers can be viewed as deduction systems, t his view is less natural for probabilistic parsers. We present a view of parsing as directed hypergraph analysis which naturally covers both symbolic and probabilistic parsing. We illustrate the approach by showing how a dynamic extension of Dijkstra’s algorithm can be used to construct a probabilistic chart parser with an O(n3) time bound for arbitrary PCFGs, while preserving as much of t he flexibility of symbolic chart parsers as allowed by the inher ent ordering of probabilistic dependencies.", "title": "" }, { "docid": "41e03f4540a090a9dc4e9551aad99fb6", "text": "• Unlabeled: Context constructed without dependency labels • Simplified: Functionally similar dependency labels are collapsed • Basic: Standard dependency parse • Enhanced and Enhanced++: Dependency trees augmented (e.g., new edges between modifiers and conjuncts with parents’ labels) • Universal Dependencies (UD): Cross-lingual • Stanford Dependencies (SD): English-tailored • Prior work [1] has shown that embeddings trained using dependency contexts distinguish related words better than similar words. • What effects do decisions made with embeddings have on the characteristics of the word embeddings? • Do Universal Dependency (UD) embeddings capture different characteristics than English-tailored Stanford Dependency (SD) embeddings?", "title": "" }, { "docid": "020fe2e94d306482399b4d1aaa083e5f", "text": "A key analytical task across many domains is model building and exploration for predictive analysis. Data is collected, parsed and analyzed for relationships, and features are selected and mapped to estimate the response of a system under exploration. As social media data has grown more abundant, data can be captured that may potentially represent behavioral patterns in society. In turn, this unstructured social media data can be parsed and integrated as a key factor for predictive intelligence. In this paper, we present a framework for the development of predictive models utilizing social media data. We combine feature selection mechanisms, similarity comparisons and model cross-validation through a variety of interactive visualizations to support analysts in model building and prediction. In order to explore how predictions might be performed in such a framework, we present results from a user study focusing on social media data as a predictor for movie box-office success.", "title": "" }, { "docid": "0f9cc52899c7e25a17bb372977d46834", "text": "In modeling and rendering of complex procedural terrains the extraction of isosurfaces is an important part. In this paper we introduce an approach to generate high-quality isosurfaces from regular grids at interactive frame rates. The surface extraction is a variation of Dual Marching Cubes and designed as a set of well-balanced parallel computation kernels. In contrast to a straightforward parallelization we generate a quadrilateral mesh with full connectivity information and 1-ring vertex neighborhood. 
We use this information to smooth the extracted mesh and to approximate the smooth subdivision surface for detail tessellation. Both improve the visual fidelity when modeling procedural terrains interactively. Moreover, our extraction approach is generally applicable, for example in the field of volume visualization.", "title": "" }, { "docid": "5804eb5389b02f2f6c5692fe8f427501", "text": "reflection-type phase shifter with constant insertion loss over a wide relative phase-shift range is presented. This important feature is attributed to the salient integration of an impedance-transforming quadrature coupler with equalized series-resonated varactors. The impedance-transforming quadrature coupler is used to increase the maximal relative phase shift for a given varactor with a limited capacitance range. When the phase is tuned, the typical large insertion-loss variation of the phase shifter due to the varactor parasitic effect is minimized by shunting the series-resonated varactor with a resistor Rp. A set of closed-form equations for predicting the relative phase shift, insertion loss, and insertion-loss variation with respect to the quadrature coupler and varactor parameters is derived. Three phase shifters were implemented with a silicon varactor of a restricted capacitance range of Cv,min = 1.4 pF and Cv,max = 8 pF, wherein the parasitic resistance is close to 2 Omega. The measured insertion-loss variation is 0.1 dB over the relative phase-shift tuning range of 237deg at 2 GHz and the return losses are better than 20 dB, excellently agreeing with the theoretical and simulated results.", "title": "" }, { "docid": "ad8a727d0e3bd11cd972373451b90fe7", "text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.", "title": "" }, { "docid": "51f9661061bf69f8d9303101c00558ec", "text": "In this paper we introduce an architecture maturity model for the domain of enterprise architecture. The model differs from other existing models in that it departs from the standard 5-level approach. It distinguishes 18 factors, called key areas, which are relevant to developing an architectural practice. Each key area has its own maturity development path that is balanced against the maturity development paths of the other key areas. Two real-life case studies are presented to illustrate the use of the model. Usage of the model in these cases shows that the model delivers recognizable results, that the results can be traced back to the basic approach to architecture taken by the organizations investigated and that the key areas chosen bear relevance to the architectural practice of the organizations. 1 MATURITY IN ENTERPRISE", "title": "" }, { "docid": "869ad7b6bf74f283c8402958a6814a21", "text": "In this paper, we make a move to build a dialogue system for automatic diagnosis. 
We first build a dataset collected from an online medical forum by extracting symptoms from both patients’ self-reports and conversational data between patients and doctors. Then we propose a taskoriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis.", "title": "" }, { "docid": "d8c45560377ac2774b1bbe8b8a61b1fb", "text": "Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a given MLN using maximum a posteriori (MAP) inference. Unfortunately, the size of this theory is exponential in general. We therefore also propose two methods which can derive compact theories that still capture MAP inference, but only for specific types of evidence. These theories can be used, among others, to make explicit the hidden assumptions underlying an MLN or to explain the predictions it makes.", "title": "" }, { "docid": "0f659ff5414e75aefe23bb85127d93dd", "text": "Important information is captured in medical documents. To make use of this information and intepret the semantics, technologies are required for extracting, analysing and interpreting it. As a result, rich semantics including relations among events, subjectivity or polarity of events, become available. The First Workshop on Extraction and Processing of Rich Semantics from Medical Texts, is devoted to the technologies for dealing with clinical documents for medical information gathering and application in knowledge based systems. New approaches for identifying and analysing rich semantics are presented. In this paper, we introduce the topic and summarize the workshop contributions.", "title": "" }, { "docid": "56a072fc480c64e6a288543cee9cd5ac", "text": "The performance of object detection has recently been significantly improved due to the powerful features learnt through convolutional neural networks (CNNs). Despite the remarkable success, there are still several major challenges in object detection, including object rotation, within-class diversity, and between-class similarity, which generally degenerate object detection performance. To address these issues, we build up the existing state-of-the-art object detection systems and propose a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance. This is achieved by optimizing a new objective function that explicitly imposes a rotation-invariant regularizer and a Fisher discrimination regularizer on the CNN features. Specifically, the first regularizer enforces the CNN feature representations of the training samples before and after rotation to be mapped closely to each other in order to achieve rotation-invariance. The second regularizer constrains the CNN features to have small within-class scatter but large between-class separation. 
We implement our proposed method under four popular object detection frameworks, including region-CNN (R-CNN), Fast R- CNN, Faster R- CNN, and R- FCN. In the experiments, we comprehensively evaluate the proposed method on the PASCAL VOC 2007 and 2012 data sets and a publicly available aerial image data set. Our proposed methods outperform the existing baseline methods and achieve the state-of-the-art results.", "title": "" }, { "docid": "ac5f518cbd783060af1cf6700b994469", "text": "Scalable evolutionary computation has. become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight in the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithmnamely elitism, niching, and restricted mating are not significantly improving the scalability problems.", "title": "" }, { "docid": "e6d9dac0995f9cf711ee50b736a9832d", "text": "Reversing with a dolly steered trailer configuration is a hard task for any driver without extensive training. In this work we present a motion planning and control framework that can be used to automatically plan and execute complicated manoeuvres. The unstable dynamics of the reversing general 2-trailer configuration with off-axle hitching is first stabilised by an LQ-controller and then a pure pursuit path tracker is used on a higher level giving a cascaded controller that can track piecewise linear reference paths. This controller together with a kinematic model of the trailer configuration is then used for forward simulations within a Closed-Loop Rapidly Exploring Random Tree framework to generate motion plans that are not only kinematically feasible but also include the limitations of the controller's tracking performance when reversing. The approach is evaluated over a series of Monte Carlo simulations on three different scenarios and impressive success rates are achieved. Finally the approach is successfully tested on a small scale test platform where the motion plan is calculated and then sent to the platform for execution.", "title": "" }, { "docid": "fe62e3a9acfe5009966434aa1f39099d", "text": "Previous studies have found a subgroup of people with autism or Asperger Syndrome who pass second-order tests of theory of mind. However, such tests have a ceiling in developmental terms corresponding to a mental age of about 6 years. It is therefore impossible to say if such individuals are intact or impaired in their theory of mind skills. We report the performance of very high functioning adults with autism or Asperger Syndrome on an adult test of theory of mind ability. The task involved inferring the mental state of a person just from the information in photographs of a person's eyes. 
Relative to age-matched normal controls and a clinical control group (adults with Tourette Syndrome), the group with autism and Asperger Syndrome were significantly impaired on this task. The autism and Asperger Syndrome sample was also impaired on Happé's strange stories tasks. In contrast, they were unimpaired on two control tasks: recognising gender from the eye region of the face, and recognising basic emotions from the whole face. This provides evidence for subtle mindreading deficits in very high functioning individuals on the autistic continuum.", "title": "" }, { "docid": "b6012b1b5e74825269f9cf16e2f3e6f0", "text": "GPS enables management to maintain staff attendance and employee registration through a mobile application. The application lets staff log in through their mobile phones and track other staff members' whereabouts. At present, manual registration through biometric systems is common practice. The admin keeps staff constantly informed of their attendance when they log in and log out, so staff can keep track of their attendance using the application. The admin can track the location of any staff member using latitude, longitude and the IMSI number.", "title": "" } ]
scidocsrr
c30cf761e7e620c057aa7ef49cdcb6bd
Performance Analysis of Multi-Hop Underwater Wireless Optical Communication Systems (Extended Version)
[ { "docid": "54b88e4c9e0bc31667e720f5f04c7f83", "text": "In clean ocean water, the performance of a underwater optical communication system is limited mainly by oceanic turbulence, which is defined as the fluctuations in the index of refraction resulting from temperature and salinity fluctuations. In this paper, using the refractive index spectrum of oceanic turbulence under weak turbulence conditions, we carry out, for a horizontally propagating plane wave and spherical wave, analysis of the aperture-averaged scintillation index, the associated probability of fade, mean signal-to-noise ratio, and mean bit error rate. Our theoretical results show that for various values of the rate of dissipation of mean squared temperature and the temperature-salinity balance parameter, the large-aperture receiver leads to a remarkable decrease of scintillation and consequently a significant improvement on the system performance. Such an effect is more noticeable in the plane wave case than in the spherical wave case.", "title": "" } ]
[ { "docid": "32be4be9baf522ff542107a4fd3340f8", "text": "One of the major challenges that cloud providers face is minimizing power consumption of their data centers. To this point, majority of current research focuses on energy efficient management of resources in the Infrastructure as a Service model and through virtual machine consolidation. However, containers are increasingly gaining popularity and going to be major deployment model in cloud environment and specifically in Platform as a Service. This paper focuses on improving the energy efficiency of servers for this new deployment model by proposing a framework that consolidates containers on virtual machines. We first formally present the container consolidation problem and then we compare a number of algorithms and evaluate their performance against metrics such as energy consumption, Service Level Agreement violations, average container migrations rate, and average number of created virtual machines. Our proposed framework and algorithms can be utilized in a private cloud to minimize energy consumption, or alternatively in a public cloud to minimize the total number of hours the virtual machines leased.", "title": "" }, { "docid": "9b6ef205d9697f8ee4958858c0fde651", "text": "Considerable literature has accumulated over the years regarding the combination of forecasts. The primary conclusion of this line of research is that forecast accuracy can be substantially improved through the combination of multiple individual forecasts. Furthermore, simple combination methods often work reasonably well relative to more complex combinations. This paper provides a review and annotated bibliography of that literature, including contributions from the forecasting, psychology, statistics, and management science literatures. The objectives are to provide a guide to the literature for students and researchers and to help researchers locate contributions in specific areas, both theoretical and applied. Suggestions for future research directions include (1) examination of simple combining approaches to determine reasons for their robustness, (2) development of alternative uses of multiple forecasts in order to make better use of the information they contain, (3) use of combined forecasts as benchmarks for forecast evaluation, and (4) study of subjective combination procedures. Finally, combining forecasts should become part of the mainstream of forecasting practice. In order to achieve this, practitioners should be encouraged to combine forecasts, and software to produce combined forecasts easily should be made available.", "title": "" }, { "docid": "648d6d316e9f9328f528ddc0c365db50", "text": "This paper presents a collaborative partitioning algorithm—a novel ensemblebased approach to coreference resolution. Starting from the all-singleton partition, we search for a solution close to the ensemble’s outputs in terms of a task-specific similarity measure. Our approach assumes a loose integration of individual components of the ensemble and can therefore combine arbitrary coreference resolvers, regardless of their models. Our experiments on the CoNLL dataset show that collaborative partitioning yields results superior to those attained by the individual components, for ensembles of both strong and weak systems. 
Moreover, by applying the collaborative partitioning algorithm on top of three state-of-the-art resolvers, we obtain the second-best coreference performance reported so far in the literature (MELA v08 score of 64.47).", "title": "" }, { "docid": "f91a9214409df84c4a53c92b2a14bbe3", "text": "OBJECTIVE\nwe performed the first systematic review with meta-analyses of the existing studies that examined mindfulness-based Baduanjin exercise for its therapeutic effects for individuals with musculoskeletal pain or insomnia.\n\n\nMETHODS\nBoth English- (PubMed, Web of Science, Elsevier, and Google Scholar) and Chinese-language (CNKI and Wangfang) electronic databases were used to search relevant articles. We used a modified PEDro scale to evaluate risk of bias across studies selected. All eligible RCTS were considered for meta-analysis. The standardized mean difference was calculated for the pooled effects to determine the magnitude of the Baduanjin intervention effect. For the moderator analysis, we performed subgroup meta-analysis for categorical variables and meta-regression for continuous variables.\n\n\nRESULTS\nThe aggregated result has shown a significant benefit in favour of Baduanjin at alleviating musculoskeletal pain (SMD = -0.88, 95% CI -1.02 to -0.74, p < 0.001, I² = 10.29%) and improving overall sleep quality (SMD = -0.48, 95% CI -0.95 to -0.01, p = 004, I² = 84.42%).\n\n\nCONCLUSIONS\nMindfulness-based Baduanjin exercise may be effective for alleviating musculoskeletal pain and improving overall sleep quality in people with chronic illness. Large, well-designed RCTs are needed to confirm these findings.", "title": "" }, { "docid": "fed9defe1a4705390d72661f96b38519", "text": "Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a problem in linear algebra. We propose a determinantal formula for the sparse resultant of an arbitrary system of n + 1 polynomials in n variables. This resultant generalizes the classical one and has significantly lower degree for polynomials that are sparse in the sense that their mixed volume is lower than their Bézout number. Our algorithm uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes in order to construct a Newton matrix. Its determinant is a nonzero multiple of the sparse resultant and the latter equals the GCD of at most n + 1 such determinants. This construction implies a restricted version of an effective sparse Nullstellensatz. For an arbitrary specialization of the coefficients, there are two methods that use one extra variable and yield the sparse resultant. This is the first algorithm to handle the general case with complexity polynomial in the resultant degree and simply exponential in n. We conjecture its extension to producing an exact rational expression for the sparse resultant.", "title": "" }, { "docid": "e84b6bbb2eaee0edb6ac65d585056448", "text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus of memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. 
Each of these parameters take on a number of values and interact in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public-domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap. DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS[2] and GEMS[13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.", "title": "" }, { "docid": "40b4a9b3a594e2a9cb7d489a3f44c328", "text": "The present article integrates findings from diverse studies on the generalized role of perceived coping self-efficacy in recovery from different types of traumatic experiences. They include natural disasters, technological catastrophes, terrorist attacks, military combat, and sexual and criminal assaults. The various studies apply multiple controls for diverse sets of potential contributors to posttraumatic recovery. In these different multivariate analyses, perceived coping self-efficacy emerges as a focal mediator of posttraumatic recovery. Verification of its independent contribution to posttraumatic recovery across a wide range of traumas lends support to the centrality of the enabling and protective function of belief in one's capability to exercise some measure of control over traumatic adversity.", "title": "" }, { "docid": "c4ea83bc1fbddbf13dbe96175a6aec4c", "text": "Recent work in machine learning and NLP has developed spectral algorithms for many learning tasks involving latent variables. Spectral algorithms rely on singular value decomposition as a basic operation, usually followed by some simple estimation method based on the method of moments. From a theoretical point of view, these methods are appealing in that they offer consistent estimators (and PAC-style guarantees of sample complexity) for several important latent-variable models. This is in contrast to the EM algorithm, which is an extremely successful approach, but which only has guarantees of reaching a local maximum of the likelihood function. From a practical point of view, the methods (unlike EM) have no need for careful initialization, and have recently been shown to be highly efficient (as one example, in work under submission by the authors on learning of latent-variable PCFGs, a spectral algorithm performs at identical accuracy to EM, but is around 20 times faster).", "title": "" }, { "docid": "0792abb24552f04c8b8c7cb71a4357ea", "text": "Deformable part-based models [1, 2] achieve state-of-the-art performance for object detection, but rely on heuristic initialization during training due to the optimization of non-convex cost function. This paper investigates limitations of such an initialization and extends earlier methods using additional supervision. We explore strong supervision in terms of annotated object parts and use it to (i) improve model initialization, (ii) optimize model structure, and (iii) handle partial occlusions. 
Our method is able to deal with sub-optimal and incomplete annotations of object parts and is shown to benefit from semi-supervised learning setups where part-level annotation is provided for a fraction of positive examples only. Experimental results are reported for the detection of six animal classes in PASCAL VOC 2007 and 2010 datasets. We demonstrate significant improvements in detection performance compared to the LSVM [1] and the Poselet [3] object detectors.", "title": "" }, { "docid": "38a5b1d2e064228ec498cf64d29d80e5", "text": "Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy. That is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as real-world challenging tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample-efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks.", "title": "" }, { "docid": "79c5513abeb58c8735f823258f0bd3e7", "text": "Putting feelings into words (affect labeling) has long been thought to help manage negative emotional experiences; however, the mechanisms by which affect labeling produces this benefit remain largely unknown. Recent neuroimaging studies suggest a possible neurocognitive pathway for this process, but methodological limitations of previous studies have prevented strong inferences from being drawn. A functional magnetic resonance imaging study of affect labeling was conducted to remedy these limitations. The results indicated that affect labeling, relative to other forms of encoding, diminished the response of the amygdala and other limbic regions to negative emotional images. Additionally, affect labeling produced increased activity in a single brain region, right ventrolateral prefrontal cortex (RVLPFC). Finally, RVLPFC and amygdala activity during affect labeling were inversely correlated, a relationship that was mediated by activity in medial prefrontal cortex (MPFC). These results suggest that affect labeling may diminish emotional reactivity along a pathway from RVLPFC to MPFC to the amygdala.", "title": "" }, { "docid": "3580abbef7daf44d743b0175b2eda509", "text": "Cloud-based software applications are designed to change often and rapidly during operations to provide constant quality of service. 
As a result the boundary between development and operations is becoming increasingly blurred. DevOps provides a set of practices for the integrated consideration of developing and operating software. Software architecture is a central artifact in DevOps practices. Existing architectural models used in the development phase differ from those used in the operation phase in terms of purpose, abstraction, and content. In this chapter, we present the iObserve approach to address these differences and allow for phase-spanning usage of architectural models.", "title": "" }, { "docid": "3faeedfe2473dc837ab0db9eb4aefc4b", "text": "The spacing effect—that is, the benefit of spacing learning events apart rather than massing them together—has been demonstrated in hundreds of experiments, but is not well known to educators or learners. I investigated the spacing effect in the realistic context of flashcard use. Learners often divide flashcards into relatively small stacks, but compared to a large stack, small stacks decrease the spacing between study trials. In three experiments, participants used a web-based study programme to learn GRE-type word pairs. Studying one large stack of flashcards (i.e. spacing) was more effective than studying four smaller stacks of flashcards separately (i.e. massing). Spacing was also more effective than cramming—that is, massing study on the last day before the test. Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing. Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "597c3e1762b0eb8558b72963f25d4b27", "text": "Animals are widespread in nature and the analysis of their shape and motion is important in many fields and industries. Modeling 3D animal shape, however, is difficult because the 3D scanning methods used to capture human shape are not applicable to wild animals or natural settings. Consequently, we propose a method to capture the detailed 3D shape of animals from images alone. The articulated and deformable nature of animals makes this problem extremely challenging, particularly in unconstrained environments with moving and uncalibrated cameras. To make this possible, we use a strong prior model of articulated animal shape that we fit to the image data. We then deform the animal shape in a canonical reference pose such that it matches image evidence when articulated and projected into multiple images. Our method extracts significantly more 3D shape detail than previous methods and is able to model new species, including the shape of an extinct animal, using only a few video frames. Additionally, the projected 3D shapes are accurate enough to facilitate the extraction of a realistic texture map from multiple frames.", "title": "" }, { "docid": "de48850e635e5a15f8574a0022cbb1e5", "text": "People use various social media for different purposes. The information on an individual site is often incomplete. When sources of complementary information are integrated, a better profile of a user can be built to improve online services such as verifying online information. To integrate these sources of information, it is necessary to identify individuals across social media sites. This paper aims to address the cross-media user identification problem. We introduce a methodology (MOBIUS) for finding a mapping among identities of individuals across social media sites. 
It consists of three key components: the first component identifies users' unique behavioral patterns that lead to information redundancies across sites; the second component constructs features that exploit information redundancies due to these behavioral patterns; and the third component employs machine learning for effective user identification. We formally define the cross-media user identification problem and show that MOBIUS is effective in identifying users across social media sites. This study paves the way for analysis and mining across social media sites, and facilitates the creation of novel online services across sites.", "title": "" }, { "docid": "15b26ceb3a81f4af6233ab8a36f66d3f", "text": "The number of web images has been explosively growing due to the development of network and storage technology. These images make up a large amount of current multimedia data and are closely related to our daily life. To efficiently browse, retrieve and organize the web images, numerous approaches have been proposed. Since the semantic concepts of the images can be indicated by label information, automatic image annotation becomes one effective technique for image management tasks. Most existing annotation methods use image features that are often noisy and redundant. Hence, feature selection can be exploited for a more precise and compact representation of the images, thus improving the annotation performance. In this paper, we propose a novel feature selection method and apply it to automatic image annotation. There are two appealing properties of our method. First, it can jointly select the most relevant features from all the data points by using a sparsity-based model. Second, it can uncover the shared subspace of original features, which is beneficial for multi-label learning. To solve the objective function of our method, we propose an efficient iterative algorithm. Extensive experiments are performed on large image databases that are collected from the web. The experimental results together with the theoretical analysis have validated the effectiveness of our method for feature selection, thus demonstrating its feasibility of being applied to web image annotation.", "title": "" }, { "docid": "ebe91d4e3559439af5dd729e7321883d", "text": "Performance of data analytics in Internet of Things (IoTs) depends on effective transport services offered by the underlying network. Fog computing enables independent data-plane computational features at the edge-switches, which serves as a platform for performing certain critical analytics required at the IoT source. To this end, in this paper, we implement a working prototype of Fog computing node based on Software-Defined Networking (SDN). Message Queuing Telemetry Transport (MQTT) is chosen as the candidate IoT protocol that transports data generated from IoT devices (a:k:a: MQTT publishers) to a remote host (called MQTT broker). We implement the MQTT broker functionalities integrated at the edge-switches, that serves as a platform to perform simple message-based analytics at the switches, and also deliver messages in a reliable manner to the end-host for post-delivery analytics. We mathematically validate the improved delivery performance as offered by the proposed switch-embedded brokers.", "title": "" }, { "docid": "820f67fa3521ee4af7da0e022a8d0be3", "text": "The visual appearance of rain is highly complex. Unlike the particles that cause other weather conditions such as haze and fog, rain drops are large and visible to the naked eye. 
Each drop refracts and reflects both scene radiance and environmental illumination towards an observer. As a result, a spatially distributed ensemble of drops moving at high velocities (rain) produces complex spatial and temporal intensity fluctuations in images and videos. To analyze the effects of rain, it is essential to understand the visual appearance of a single rain drop. In this paper, we develop geometric and photometric models for the refraction through, and reflection (both specular and internal) from, a rain drop. Our geometric and photometric models show that each rain drop behaves like a wide-angle lens that redirects light from a large field of view towards the observer. From this, we observe that in spite of being a transparent object, the brightness of the drop does not depend strongly on the brightness of the background. Our models provide the fundamental tools to analyze the complex effects of rain. Thus, we believe our work has implications for vision in bad weather as well as for efficient rendering of rain in computer graphics.", "title": "" }, { "docid": "6681faaf76fe5637f1af7eeb383181c2", "text": "There are many methods for detecting and mitigating software errors but few generic methods for automatically repairing errors once they are discovered. This paper highlights recent work combining program analysis methods with evolutionary computation to automatically repair bugs in off-the-shelf legacy C programs. The method takes as input the buggy C source code, a failed test case that demonstrates the bug, and a small number of other test cases that encode the required functionality of the program. The repair procedure does not rely on formal specifications, making it applicable to a wide range of extant software for which formal specifications rarely exist.", "title": "" } ]
scidocsrr
f530fbbd1d9b73d451b2b1b5dc9282d8
Applying Quantitative Marketing Techniques to the Internet
[ { "docid": "1e18be7d7e121aa899c96cbcf5ea906b", "text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. ________________________________________ We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility. BUNDLING INFORMATION GOODS Page 1", "title": "" }, { "docid": "f7562e0540e65fdfdd5738d559b4aad1", "text": "An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data open the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. 
Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household level inferences for targeting purposes. We develop new econometric methods to implement a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boosts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today. (Target Marketing; Coupons; Heterogeneity; Bayesian Hierarchical Models)", "title": "" } ]
[ { "docid": "3e62ac4e3476cc2999808f0a43a24507", "text": "We present a detailed description of a new Bioconductor package, phyloseq, for integrated data and analysis of taxonomically-clustered phylogenetic sequencing data in conjunction with related data types. The phyloseq package integrates abundance data, phylogenetic information and covariates so that exploratory transformations, plots, and confirmatory testing and diagnostic plots can be carried out seamlessly. The package is built following the S4 object-oriented framework of the R language so that once the data have been input the user can easily transform, plot and analyze the data. We present some examples that highlight the methods and the ease with which we can leverage existing packages.", "title": "" }, { "docid": "56444dce712e313c0c014a260f97a6b3", "text": "Ecology and historical (phylogeny-based) biogeography have much to offer one another, but exchanges between these fields have been limited. Historical biogeography has become narrowly focused on using phylogenies to discover the history of geological connections among regions. Conversely, ecologists often ignore historical biogeography, even when its input can be crucial. Both historical biogeographers and ecologists have more-or-less abandoned attempts to understand the processes that determine the large-scale distribution of clades. Here, we describe the chasm that has developed between ecology and historical biogeography, some of the important questions that have fallen into it and how it might be bridged. To illustrate the benefits of an integrated approach, we expand on a model that can help explain the latitudinal gradient of species richness.", "title": "" }, { "docid": "d13ddbafa8f0774aec3bf0f491b89c0c", "text": "Dust explosions always claim lives and cause huge financial losses. Dust explosion risk can be prevented by inherently safer design or mitigated by engineering protective system. Design of explosion prevention and protection needs comprehensive knowledge and data on the process, workshop, equipment, and combustible materials. The knowledge includes standards, expertise of experts, and practical experience. The database includes accidents, dust explosion characteristics, inherently safer design methods, and protective design methods. Integration of such a comprehensive knowledge system is very helpful. The developed system has the following functions: risk assessment, accident analysis, recommendation of prevention and protection solution, and computer aided design of explosion protection. The software was based on Browser/Server architecture and was developed using mixed programming of ASP.Net and Prolog. The developed expert system can be an assistant to explosion design engineers and safety engineers of combustible dust handling plants.", "title": "" }, { "docid": "7b7c418cefcd571b03e5c0a002a5e923", "text": "A loop antenna having a gap has been investigated in the presence of a ground plane. The antenna configuration is optimized for the CP radiation, using the method of moments. It is found that, as the loop height above the ground plane is reduced, the optimized gap width approaches zero. Further antenna height reduction is found to be possible for an antenna whose wire radius is increased. On the basis of these results, we design an open-loop array antenna using a microstrip comb line as the feed network. It is demonstrated that an array antenna composed of eight open loop elements can radiate a CP wave with an axial ratio of 0.1 dB. 
The bandwidth for a 3-dB axial-ratio criterion is 4%, where the gain is almost constant at 15 dBi.", "title": "" }, { "docid": "d2c021f8d8eecfab43af79585823f407", "text": "Swallowing and feeding disorder (dysphagia) have high incidence and prevalence in children and adults with developmental disability. Standardized screening and clinical assessments are needed to identify and describe the disorder. The aim of this study was to describe the psychometric properties of the Dysphagia Disorder Survey (DDS), a screening and clinical assessment of swallowing and feeding function for eating and drinking developed specifically for this population. The statistical analysis was performed on a sample of 654 individuals (age range 8-82) with intellectual and developmental disability living in two residential settings in the United States that served somewhat different populations. The two samples had similar factor structures. Internal consistency of the DDS and subscales was confirmed using Chronbach's coefficient alpha. The DDS demonstrated convergent validity when compared to judgments of swallowing and feeding disorder severity made by clinical swallowing specialists. Discriminative validity for severity of disorder was tested by comparing the two samples. The results of the study suggest that the DDS is a reliable and valid test for identifying and describing swallowing and feeding disorder in children and adults with developmental disability.", "title": "" }, { "docid": "78d7c61f7ca169a05e9ae1393712cd69", "text": "Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-ofthe-art methods. Our MathDQN yields remarkable improvement on most of datasets and boosts the average precision among all the benchmark datasets by 15%.", "title": "" }, { "docid": "c53f8e3d8ca800284ce22748d7afde59", "text": "With the expansion of software scale, effective approaches for automatic vulnerability mining have been in badly needed. This paper presents a novel approach which can generate test cases of high pertinence and reachability. Unlike standard fuzzing techniques which explore the test space blindly, our approach utilizes abstract interpretation based on intervals to locate the Frail-Points of program which may cause buffer over-flow in some special conditions and the technique of static taint trace to build mappings between the Frail-Points and program inputs. 
Moreover, acquire path constraints of each Frail-Point through symbolic execution. Finally, combine information of mappings and path constraints to propose a policy for guiding test case generation.", "title": "" }, { "docid": "48f06ed96714c2970550fef88d21d517", "text": "Support vector machines (SVMs) are becoming popular in a wide variety of biological applications. But, what exactly are SVMs and how do they work? And what are their most promising applications in the life sciences?", "title": "" }, { "docid": "1f1158ad55dc8a494d9350c5a5aab2f2", "text": "Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is comparatively low, given their age, education and intellectual reasoning ability. Low performance due to cerebral trauma is called acquired dyscalculia. Mathematical learning difficulties with similar features but without evidence of cerebral trauma are referred to as developmental dyscalculia. This review identifies types of developmental dyscalculia, the neuropsychological processes that are linked with them and procedures for identifying dyscalculia. The concept of dyslexia is one with which professionals working in the areas of special education, learning disabilities are reasonably familiar. The concept of dyscalculia, on the other hand, is less well known. This article describes this condition and examines its implications for understanding mathematics learning disabilities. Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is significantly depressed, given their age, education and intellectual reasoning ability ( Mental Disorders IV (DSM IV)). When this loss of ability to calculate is due to cerebral trauma, the condition is called acalculia or acquired dyscalculia. Mathematical learning difficulties that share features with acquired dyscalculia but without evidence of cerebral trauma are referred to as developmental dyscalculia (Hughes, Kolstad & Briggs, 1994). The focus of this review is on developmental dyscalculia (DD). Students who show DD have difficulty recalling number facts and completing numerical calculations. They also show chronic difficulties with numerical processing skills such recognizing number symbols, writing numbers or naming written numerals and applying procedures correctly (Gordon, 1992). They may have low self efficacy and selective attentional difficulties (Gross Tsur, Auerbach, Manor & Shalev, 1996). Not all students who display low mathematics achievement have DD. Mathematics underachievement can be due to a range of causes, for example, lack of motivation or interest in learning mathematics, low self efficacy, high anxiety, inappropriate earlier teaching or poor school attendance. It can also be due to generalised poor learning capacity, immature general ability, severe language disorders or sensory processing. Underachievement due to DD has a neuropsychological foundation. The students lack particular cognitive or information processing strategies necessary for acquiring and using arithmetic knowledge. They can learn successfully in most contexts and have relevant general language and sensory processing. They also have access to a curriculum from which their peers learn successfully. It is also necessary to clarify the relationship between DD and reading disabilities. Some aspects of both literacy and arithmetic learning draw on the same cognitive processes. 
Both, for example, 1 This article was published in Australian Journal of Learning Disabilities, 2003 8, (4).", "title": "" }, { "docid": "231f3a7d6ee769432c37b87df6f45c15", "text": "Common variable immunodeficiency (CVID) is the most common severe adult primary immunodeficiency and is characterized by a failure to produce antibodies leading to recurrent predominantly sinopulmonary infections. Improvements in the prevention and treatment of infection with immunoglobulin replacement and antibiotics have resulted in malignancy, autoimmune, inflammatory and lymphoproliferative disorders emerging as major clinical challenges in the management of patients who have CVID. In a proportion of CVID patients, inflammation manifests as granulomas that frequently involve the lungs, lymph nodes, spleen and liver and may affect almost any organ. Granulomatous lymphocytic interstitial lung disease (GLILD) is associated with a worse outcome. Its underlying pathogenic mechanisms are poorly understood and there is limited evidence to inform how best to monitor, treat or select patients to treat. We describe the use of combined 2-[(18)F]-fluoro-2-deoxy-d-glucose positron emission tomography and computed tomography (FDG PET-CT) scanning for the assessment and monitoring of response to treatment in a patient with GLILD. This enabled a synergistic combination of functional and anatomical imaging in GLILD and demonstrated a widespread and high level of metabolic activity in the lungs and lymph nodes. Following treatment with rituximab and mycophenolate there was almost complete resolution of the previously identified high metabolic activity alongside significant normalization in lymph node size and lung architecture. The results support the view that GLILD represents one facet of a multi-systemic metabolically highly active lymphoproliferative disorder and suggests potential utility of this imaging modality in this subset of patients with CVID.", "title": "" }, { "docid": "8014c32fa820e1e2c54e1004b62dc33e", "text": "Signature-based malicious code detection is the standard technique in all commercial anti-virus software. This method can detect a virus only after the virus has appeared and caused damage. Signature-based detection performs poorly whe n attempting to identify new viruses. Motivated by the standard signature-based technique for detecting viruses, and a recent successful text classification method, n-grams analysis, we explo re the idea of automatically detecting new malicious code. We employ n-grams analysis to automatically generate signatures from malicious and benign software collections. The n-gramsbased signatures are capable of classifying unseen benign and malicious code. The datasets used are large compared to earlier applications of n-grams analysis.", "title": "" }, { "docid": "8fe823702191b4a56defaceee7d19db6", "text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers. 
We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.", "title": "" }, { "docid": "1a3b49298f6217cc8600e00886751f7f", "text": "A person's language use reveals much about the person's social identity, which is based on the social categories a person belongs to including age and gender. We discuss the development of TweetGenie, a computer program that predicts the age of Twitter users based on their language use. We explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. An automatic system achieves better performance than humans on these tasks. Both humans and the automatic systems tend to underpredict the age of older people. We find that most linguistic changes occur when people are young, and that after around 30 years the studied variables show little change, making it difficult to predict the ages of older Twitter users.", "title": "" }, { "docid": "b5feea2a9ef2ed18182964acd83cdaee", "text": "We consider the problem of learning general-purpose, paraphrastic sentence embeddings, revisiting the setting of Wieting et al. (2016b). While they found LSTM recurrent networks to underperform word averaging, we present several developments that together produce the opposite conclusion. These include training on sentence pairs rather than phrase pairs, averaging states to represent sequences, and regularizing aggressively. These improve LSTMs in both transfer learning and supervised settings. We also introduce a new recurrent architecture, the GATED RECURRENT AVERAGING NETWORK, that is inspired by averaging and LSTMs while outperforming them both. We analyze our learned models, finding evidence of preferences for particular parts of speech and dependency relations. 1", "title": "" }, { "docid": "8ead349d8495e3927df3f46a43b67ea4", "text": "146 women and 44 men (out- and inpatients; treatment sample) with Seasonal Affective Disorder (SAD; winter type) were tested for gender differences in demographic, clinical and seasonal characteristics. Sex ratio in prevalence was (women : men) 3.6 : 1 in unipolar depressives and 2.4 : 1 in bipolars (I and II). Sex ratios varied also between different birth cohorts and men seemed to underreport symptoms. There was no significant difference in symptom-profiles in both genders, however a preponderance of increased eating and different food selection on a trend level occured in women. The female group suffered significantly more often from thyroid disorders and from greater mood variations because of dark and cloudy weather. Women referred themselves to our clinic significantly more frequently as compared to men. In summary gender differences in SAD were similar to those of non-seasonal depression: the extent of gender differences in the prevalence of affective disorders appears to depend on case criteria such as diagnosis (unipolar vs. bipolar), birth cohort and number of symptoms as minimum threshold for diagnosis. We support the idea of applying sex-specific diagnostic criteria for diagnosing depression on the basis of our data and of the literature.", "title": "" }, { "docid": "40479536efec6311cd735f2bd34605d7", "text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. 
In the meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity to training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper devotes to reviewing state-of-theart scalable GPs involving two main categories: global approximations which distillate the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.", "title": "" }, { "docid": "ae534b0d19b95dcee87f06ed279fc716", "text": "In this paper, comparative study of p type and n type solar cells are described using two popular solar cell analyzing software AFORS HET and PC1D. We use SiNx layer as Antireflection Coating and a passivated layer Al2O3 .The variation of reflection, absorption, I-V characteristics, and internal and external quantum efficiency have been done by changing the thickness of passivated layer and ARC layer, and front and back surface recombination velocities. The same analysis is taken by imposing surface charge at front of n-type solar Cell and we get 20.13%-20.15% conversion efficiency.", "title": "" }, { "docid": "f282a0e666a2b2f3f323870fc07217bd", "text": "The cultivation of pepper has great importance in all regions of Brazil, due to its characteristics of profi tability, especially when the producer and processing industry add value to the product, or its social importance because it employs large numbers of skilled labor. Peppers require monthly temperatures ranging between 21 and 30 °C, with an average of 18 °C. At low temperatures, there is a decrease in germination, wilting of young parts, and slow growth. Plants require adequate level of nitrogen, favoring plants and fruit growth. Most the cultivars require large spacing for adequate growth due to the canopy of the plants. Proper insect, disease, and weed control prolong the harvest of fruits for longer periods, reducing losses. The crop cycle and harvest period are directly affected by weather conditions, incidence of pests and diseases, and cultural practices including adequate fertilization, irrigation, and adoption of phytosanitary control measures. In general for most cultivars, the fi rst harvest starts 90 days after sowing, which can be prolonged for a couple of months depending on the plant physiological condition.", "title": "" }, { "docid": "3d8df2c8fcbdc994007104b8d21d7a06", "text": "The purpose of this research was to analysis the efficiency of global strategies. This paper identified six key strategies necessary for firms to be successful when expanding globally. 
These strategies include differentiation, marketing, distribution, collaborative strategies, labor and management strategies, and diversification. Within this analysis, we chose to focus on the Coca-Cola Company because they have proven successful in their international operations and are one of the most recognized brands in the world. We performed an in-depth review of how effectively or ineffectively Coca-Cola has used each of the six strategies. The paper focused on Coca-Cola's operations in the United States, China, Belarus, Peru, and Morocco. The author used electronic journals from the various countries to determine how effective Coca-Cola was in these countries. The paper revealed that Coca-Cola was very successful in implementing strategies regardless of the country. However, the author learned that Coca-Cola did not effectively utilize all of the strategies in each country.", "title": "" }, { "docid": "a7336b4e1ba0846f45f6757b121a7d33", "text": "Recently, concerns have been raised that residues of glyphosate-based herbicides may interfere with the homeostasis of the intestinal bacterial community and thereby affect the health of humans or animals. The biochemical pathway for aromatic amino acid synthesis (Shikimate pathway), which is specifically inhibited by glyphosate, is shared by plants and numerous bacterial species. Several in vitro studies have shown that various groups of intestinal bacteria may be differently affected by glyphosate. Here, we present results from an animal exposure trial combining deep 16S rRNA gene sequencing of the bacterial community with liquid chromatography mass spectrometry (LC-MS) based metabolic profiling of aromatic amino acids and their downstream metabolites. We found that glyphosate as well as the commercial formulation Glyfonova®450 PLUS administered at up to fifty times the established European Acceptable Daily Intake (ADI = 0.5 mg/kg body weight) had very limited effects on bacterial community composition in Sprague Dawley rats during a two-week exposure trial. The effect of glyphosate on prototrophic bacterial growth was highly dependent on the availability of aromatic amino acids, suggesting that the observed limited effect on bacterial composition was due to the presence of sufficient amounts of aromatic amino acids in the intestinal environment. A strong correlation was observed between intestinal concentrations of glyphosate and intestinal pH, which may partly be explained by an observed reduction in acetic acid produced by the gut bacteria. We conclude that sufficient intestinal levels of aromatic amino acids provided by the diet alleviates the need for bacterial synthesis of aromatic amino acids and thus prevents an antimicrobial effect of glyphosate in vivo. It is however possible that the situation is different in cases of human malnutrition or in production animals.", "title": "" } ]
scidocsrr
97408e2d73587953c359b48c92a4182b
Determining employee awareness using the Human Aspects of Information Security Questionnaire (HAIS-Q)
[ { "docid": "bf7335b263742fee9ca2c943e5533d1e", "text": "Smartphone users increasingly download and install third-party applications from official application repositories. Attackers may use this centralized application delivery architecture as a security and privacy attack vector. This risk increases since application vetting mechanisms are often not in place and the user is delegated to authorize which functionality and protected resources are accessible by third-party applications. In this paper, we mount a survey to explore the security awareness of smartphone users who download applications from official application repositories (e.g. Google Play, Apple’s App Store, etc.). The survey findings suggest a security complacency, as the majority of users trust the app repository, security controls are not enabled or not added, and users disregard security during application selection and installation. As a response to this security complacency, we built a prediction model to indentify users who trust the app repository. Then, the model is assessed, evaluated, and proved to be statistically significant and efficient.", "title": "" }, { "docid": "5f007e018f9abc74d1d7d188cd077fe7", "text": "Due to the intensified need for improved information security, many organisations have established information security awareness programs to ensure that their employees are informed and aware of security risks, thereby protecting themselves and their profitability. In order for a security awareness program to add value to an organisation and at the same time make a contribution to the field of information security, it is necessary to have a set of methods to study and measure its effect. The objective of this paper is to report on the development of a prototype model for measuring information security awareness in an international mining company. Following a description of the model, a brief discussion of the application results is presented. a 2006 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "ead343ffee692a8645420c58016c129d", "text": "One of the most important applications in multiview imaging (MVI) is the development of advanced immersive viewing or visualization systems using, for instance, 3DTV. With the introduction of multiview TVs, it is expected that a new age of 3DTV systems will arrive in the near future. Image-based rendering (IBR) refers to a collection of techniques and representations that allow 3-D scenes and objects to be visualized in a realistic way without full 3-D model reconstruction. IBR uses images as the primary substrate. The potential for photorealistic visualization has tremendous appeal, and it has been receiving increasing attention over the years. Applications such as video games, virtual travel, and E-commerce stand to benefit from this technology. This article serves as a tutorial introduction and brief review of this important technology. First the classification, principles, and key research issues of IBR are discussed. Then, an object-based IBR system to illustrate the techniques involved and its potential application in view synthesis and processing are explained. Stereo matching, which is an important technique for depth estimation and view synthesis, is briefly explained and some of the top-ranked methods are highlighted. Finally, the challenging problem of interactive IBR is explained. Possible solutions and some state-of-the-art systems are also reviewed.", "title": "" }, { "docid": "69a11f89a92051631e1c07f2af475843", "text": "Animal-assisted therapy (AAT) has been practiced for many years and there is now increasing interest in demonstrating its efficacy through research. To date, no known quantitative review of AAT studies has been published; our study sought to fill this gap. We conducted a comprehensive search of articles reporting on AAT in which we reviewed 250 studies, 49 of which met our inclusion criteria and were submitted to meta-analytic procedures. Overall, AAT was associated with moderate effect sizes in improving outcomes in four areas: Autism-spectrum symptoms, medical difficulties, behavioral problems, and emotional well-being. Contrary to expectations, characteristics of participants and studies did not produce differential outcomes. AAT shows promise as an additive to established interventions and future research should investigate the conditions under which AAT can be most helpful.", "title": "" }, { "docid": "2a4f8fdee23dfb009b61899d5773206f", "text": "We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. 
We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2Dsupervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn. P. Henderson School of Informatics, University of Edinburgh, Scotland E-mail: [email protected] V. Ferrari Google Research, Zürich, Switzerland E-mail: [email protected]", "title": "" }, { "docid": "5d624fadc5502ef0b65c227d4dd47a9a", "text": "In this work, highly selective filters based on periodic arrays of electrically small resonators are pointed out. The high-pass filters are implemented in microstrip technology by etching complementary split ring resonators (CSRRs), or complementary spiral resonators (CSRs), in the ground plane, and series capacitive gaps, or interdigital capacitors, in the signal strip. The structure exhibits a composite right/left handed (CRLH) behavior and, by properly tuning the geometry of the elements, a high pass response with a sharp transition band is obtained. The low-pass filters, also implemented in microstrip technology, are designed by cascading open complementary split ring resonators (OCSRRs) in the signal strip. These low pass filters do also exhibit a narrow transition band. The high selectivity of these microwave filters is due to the presence of a transmission zero. Since the resonant elements are small, filter dimensions are compact. Several prototype device examples are reported in this paper.", "title": "" }, { "docid": "725e826f13a17fe73369e85733431e32", "text": "This study aims to explore the determinants influencing usage intention in mobile social media from the user motivation and the Theory of Planned Behavior (TPB) perspectives. Based on TPB, this study added three motivations, namely entertainment, sociality, and information, into the TPB model, and further examined the moderating effect of posters and lurkers in the relationships of the proposed model. A structural equation modeling was used and 468 LINE users in Taiwan were investigated. The results revealed that entertainment, sociality, and information are positively associated with behavioral attitude. Moreover, behavioral attitude, subjective norms, and perceived behavioral control are positively associated with usage intention. Furthermore, posters likely post messages on the LINE because of entertainment, sociality, and information, but they are not significantly subject to subjective norms. In contrast, lurkers tend to read, not write messages on the LINE because of entertainment and information rather than sociality and perceived behavioral control.", "title": "" }, { "docid": "907b84bfd2160c7396c862e23cb91018", "text": "In this paper, we propose an efficient and fast object detector which can process hundreds of frames per second. To achieve this goal we investigate three main aspects of the object detection framework: network architecture, loss function and training data (labeled and unlabeled). 
In order to obtain compact network architecture, we introduce various improvements, based on recent work, to develop an architecture which is computationally light-weight and achieves a reasonable performance. To further improve the performance, while keeping the complexity same, we utilize distillation loss function. Using distillation loss we transfer the knowledge of a more accurate teacher network to proposed light-weight student network. We propose various innovations to make distillation efficient for the proposed one stage detector pipeline: objectness scaled distillation loss, feature map non-maximal suppression and a single unified distillation loss function for detection. Finally, building upon the distillation loss, we explore how much can we push the performance by utilizing the unlabeled data. We train our model with unlabeled data using the soft labels of the teacher network. Our final network consists of 10x fewer parameters than the VGG based object detection network and it achieves a speed of more than 200 FPS and proposed changes improve the detection accuracy by 14 mAP over the baseline on Pascal dataset.", "title": "" }, { "docid": "652e544ec32f5fde48d2435de81f5351", "text": "As many as 50% of spontaneous preterm births are infection-associated. Intrauterine infection leads to a maternal and fetal inflammatory cascade, which produces uterine contractions and may also result in long-term adverse outcomes, such as cerebral palsy. This article addresses the prevalence, microbiology, and management of intrauterine infection in the setting of preterm labor with intact membranes. It also outlines antepartum treatment of infections for the purpose of preventing preterm birth.", "title": "" }, { "docid": "b236a4332d64f416a92937074e32aac1", "text": "Levothyroxine (T4) is a narrow therapeutic index drug with classic bioequivalence problem between various available products. Dissolution of a drug is a crucial step in its oral absorption and bioavailability. The dissolution of T4 from three commercial solid oral dosage forms: Synthroid (SYN), generic levothyroxine sodium by Sandoz Inc. (GEN) and Tirosint (TIR) was studied using a sensitive ICP-MS assay. All the three products showed variable and pH-dependent dissolution behaviors. The absence of surfactant from the dissolution media decreased the percent T4 dissolved for all the three products by 26-95% (at 30 min). SYN dissolution showed the most pH dependency, whereas GEN and TIR showed the fastest and highest dissolution, respectively. TIR was the most consistent one, and was minimally affected by pH and/or by the presence of surfactant. Furthermore, dissolution of T4 decreased considerably with increase in the pH, which suggests a possible physical interaction in patients concurrently on T4 and gastric pH altering drugs, such as proton pump inhibitors. Variable dissolution of T4 products can, therefore, impact the oral absorption and bioavailability of T4 and may result in bioequivalence problems between various available products.", "title": "" }, { "docid": "2b458194a41ad1563007f44e59e026f2", "text": "A large body of work has been devoted to identifying community structure in networks. A community is often though of as a set of nodes that has more connections between its members than to the remainder of the network. In this paper, we characterize as a function of size the statistical and structural properties of such sets of nodes. 
We define the network community profile plot, which characterizes the \"best\" possible community - according to the conductance measure - over a wide range of size scales, and we study over 70 large sparse real-world networks taken from a wide range of application domains. Our results suggest a significantly more refined picture of community structure in large real-world networks than has been appreciated previously.\n Our most striking finding is that in nearly every network dataset we examined, we observe tight but almost trivial communities at very small scales, and at larger size scales, the best possible communities gradually \"blend in\" with the rest of the network and thus become less \"community-like.\" This behavior is not explained, even at a qualitative level, by any of the commonly-used network generation models. Moreover, this behavior is exactly the opposite of what one would expect based on experience with and intuition from expander graphs, from graphs that are well-embeddable in a low-dimensional structure, and from small social networks that have served as testbeds of community detection algorithms. We have found, however, that a generative model, in which new edges are added via an iterative \"forest fire\" burning process, is able to produce graphs exhibiting a network community structure similar to our observations.", "title": "" }, { "docid": "7340823ae6afd072ab186ec8aaad0d44", "text": "Blood flow measurement using Doppler ultrasound has become a useful tool for diagnosing cardiovascular diseases and as a physiological monitor. Recently, pocket-sized ultrasound scanners have been introduced for portable diagnosis. The present paper reports the implementation of a portable ultrasound pulsed-wave (PW) Doppler flowmeter using a smartphone. A 10-MHz ultrasonic surface transducer was designed for the dynamic monitoring of blood flow velocity. The directional baseband Doppler shift signals were obtained using a portable analog circuit system. After hardware processing, the Doppler signals were fed directly to a smartphone for Doppler spectrogram analysis and display in real time. To the best of our knowledge, this is the first report of the use of this system for medical ultrasound Doppler signal processing. A Couette flow phantom, consisting of two parallel disks with a 2-mm gap, was used to evaluate and calibrate the device. Doppler spectrograms of porcine blood flow were measured using this stand-alone portable device under the pulsatile condition. Subsequently, in vivo portable system verification was performed by measuring the arterial blood flow of a rat and comparing the results with the measurement from a commercial ultrasound duplex scanner. All of the results demonstrated the potential for using a smartphone as a novel embedded system for portable medical ultrasound applications.", "title": "" }, { "docid": "b5c8263dd499088ded04c589b5da1d9f", "text": "User interfaces and information systems have become increasingly social in recent years, aimed at supporting the decentralized, cooperative production and use of content. A theory that predicts the impact of interface and interaction designs on such factors as participation rates and knowledge discovery is likely to be useful. 
This paper reviews a variety of observed phenomena in social information foraging and sketches a framework extending Information Foraging Theory towards making predictions about the effects of diversity, interference, and cost-of-effort on performance time, participation rates, and utility of discoveries.", "title": "" }, { "docid": "85657981b55e3a87e74238cd373b3db6", "text": "INTRODUCTION\nLung cancer mortality rates remain at unacceptably high levels. Although mitochondrial dysfunction is a characteristic of most tumor types, mitochondrial dynamics are often overlooked. Altered rates of mitochondrial fission and fusion are observed in lung cancer and can influence metabolic function, proliferation and cell survival.\n\n\nAREAS COVERED\nIn this review, the authors outline the mechanisms of mitochondrial fission and fusion. They also identify key regulatory proteins and highlight the roles of fission and fusion in metabolism and other cellular functions (e.g., proliferation, apoptosis) with an emphasis on lung cancer and the interaction with known cancer biomarkers. They also examine the current therapeutic strategies reported as altering mitochondrial dynamics and review emerging mitochondria-targeted therapies.\n\n\nEXPERT OPINION\nMitochondrial dynamics are an attractive target for therapeutic intervention in lung cancer. Mitochondrial dysfunction, despite its molecular heterogeneity, is a common abnormality of lung cancer. Targeting mitochondrial dynamics can alter mitochondrial metabolism, and many current therapies already non-specifically affect mitochondrial dynamics. A better understanding of mitochondrial dynamics and their interaction with currently identified cancer 'drivers' such as Kirsten-Rat Sarcoma Viral Oncogene homolog will lead to the development of novel therapeutics.", "title": "" }, { "docid": "5d15118fcb25368fc662deeb80d4ef28", "text": "A5-GMR-1 is a synchronous stream cipher used to provide confidentiality for communications between satellite phones and satellites. The keystream generator may be considered as a finite state machine, with an internal state of 81 bits. The design is based on four linear feedback shift registers, three of which are irregularly clocked. The keystream generator takes a 64-bit secret key and 19-bit frame number as inputs, and produces an output keystream of length berween 28 and 210 bits.\n Analysis of the initialisation process for the keystream generator reveals serious flaws which significantly reduce the number of distinct keystreams that the generator can produce. Multiple (key, frame number) pairs produce the same keystream, and the relationship between the various pairs is easy to determine. Additionally, many of the keystream sequences produced are phase shifted versions of each other, for very small phase shifts. These features increase the effectiveness of generic time-memory tradeoff attacks on the cipher, making such attacks feasible.", "title": "" }, { "docid": "dfba47fd3b84d6346052b559568a0c21", "text": "Understanding gaming motivations is important given the growing trend of incorporating game-based mechanisms in non-gaming applications. In this paper, we describe the development and validation of an online gaming motivations scale based on a 3-factor model. Data from 2,071 US participants and 645 Hong Kong and Taiwan participants is used to provide a cross-cultural validation of the developed scale. 
Analysis of actual in-game behavioral metrics is also provided to demonstrate predictive validity of the scale.", "title": "" }, { "docid": "dc74298bb7bc5fdeff7bd28fe2ec1fe0", "text": "In this paper, we propose a novel technique for video summarization based on the Singular Value Decomposition (SVD). For the input video sequence, we create a feature-frame matrix A, and perform the SVD on it. From this SVD, we are able to not only derive the refined feature space to better cluster visually similar frames, but also define a metric to measure the amount of visual content contained in each frame cluster using its degree of visual changes. Then, in the refined feature space, we find the most static frame cluster, define it as the content unit, and use the content value computed from it as the threshold to cluster the rest of the frames. Based on this clustering result, either the optimal set of keyframes, or a summarized motion video with the user specified time length can be generated to support different user requirements for video browsing and content overview. Our approach ensures that the summarized video representation contains little redundancy, and gives equal attention to the same amount of contents.", "title": "" }, { "docid": "c05bf2dedcb7837f877c7a3e257f4222", "text": "In this letter, we propose a tunable patch antenna made of a slotted rectangular patch loaded by a number of posts close to the patch edge. The posts are short circuited to the ground plane via a set of PIN diode switches. Simulations and measurements verify the possibility of tuning the antenna in subbands from 620 to 1150 MHz. Good matching has been achieved over most of the bands. Other performed designs show that more than one octave can be achieved using the proposed structure.", "title": "" }, { "docid": "b79110b1145fc8a35f20efdf0029fbac", "text": "In this paper, a new bridgeless single-phase AC-DC converter with an automatic power factor correction (PFC) is proposed. The proposed rectifier is based on the single-ended primary inductance converter (SEPIC) topology and it utilizes a bidirectional switch and two fast diodes. The absence of an input diode bridge and the presence of only one diode in the flowing-current path during each switching cycle result in less conduction loss and improved thermal management compared to existing PFC rectifiers. Other advantages include simple control circuitry, reduced switch voltage stress, and low electromagnetic-interference noise. Performance comparison between the proposed and the conventional SEPIC PFC rectifier is performed. Simulation and experimental results are presented to demonstrate the feasibility of the proposed technique.", "title": "" }, { "docid": "9a7e6d0b253de434e62eb6998ff05f47", "text": "Since 1984, a person-century of effort has gone into building CYC, a universal schema of roughly 10^5 general concepts spanning human reality. Most of the time has been spent codifying knowledge about these concepts; approximately 10^6 commonsense axioms have been handcrafted for and entered into CYC's knowledge base, and millions more have been inferred and cached by CYC. This article examines the fundamental assumptions of doing such a large-scale project, reviews the technical lessons learned by the developers, and surveys the range of applications that are or soon will be enabled by the technology.", "title": "" } ]
scidocsrr
e6355c2c015d6db25641e5a53b329892
Achieving "Massive MIMO" Spectral Efficiency with a Not-so-Large Number of Antennas
[ { "docid": "a2faba3e69563acf9e874bf4c4327b5d", "text": "We analyze a mobile wireless link comprising M transmitter andN receiver antennas operating in a Rayleigh flat-fading environment. The propagation coef fici nts between every pair of transmitter and receiver antennas are statistically independent and un known; they remain constant for a coherence interval ofT symbol periods, after which they change to new independent v alues which they maintain for anotherT symbol periods, and so on. Computing the link capacity, associated with channel codin g over multiple fading intervals, requires an optimization over the joint density of T M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater t han the length of the coherence interval: the capacity forM > T is equal to the capacity for M = T . Capacity is achieved when the T M transmitted signal matrix is equal to the product of two stat i ically independent matrices: a T T isotropically distributed unitary matrix times a certain T M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity f or many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence i nterval increases, the capacity approaches the capacity obtained as if the receiver knew the propagatio n coefficients. Index Terms —Multi-element antenna arrays, wireless communications, space-time modulation", "title": "" } ]
[ { "docid": "d0043eb45257f9eed6d874f4c7aa709c", "text": "We report the results of our classification-based machine translation model, built upon the framework of a recurrent neural network using gated recurrent units. Unlike other RNN models that attempt to maximize the overall conditional log probability of sentences against sentences, our model focuses a classification approach of estimating the conditional probability of the next word given the input sequence. This simpler approach using GRUs was hoped to be comparable with more complicated RNN models, but achievements in this implementation were modest and there remains a lot of room for improving this classification approach.", "title": "" }, { "docid": "2858b796264102abf10fcf6507639883", "text": "Privacy policies are a nearly ubiquitous feature of websites and online services, and the contents of such policies are legally binding for users. However, the obtuse language and sheer length of most privacy policies tend to discourage users from reading them. We describe a pilot experiment to use automatic text categorization to answer simple categorical questions about privacy policies, as a first step toward developing automated or semi-automated methods to retrieve salient features from these policies. Our results tentatively demonstrate the feasibility of this approach for answering selected questions about privacy policies, suggesting that further work toward user-oriented analysis of these policies could be fruitful.", "title": "" }, { "docid": "4ac69ffb880cea60dac3b24b55c9c083", "text": "Patterns of Intelligent and Mobile Agents Elizabeth A.Kendall, P.V. Murali Krishna, Chirag V. Pathak, C:B. Suresh Computer Systems Engineering, Royal Melbourne Institute Of Technology City Campus, GPO Box 2476V, Melbourne, VIC 3001 AUSTRALIA email : [email protected] 1. ABSTRACT Agent systems must have foundation; one approach that successfully applied to other so&are is patterns. This paper collection of patterns for agents. 2. MOTIVATION Almost all agent development to date has a strong has been kinds of presents a", "title": "" }, { "docid": "4305c2709ca0b976bd70a50aa6320612", "text": "One of the hallmarks of a co-located agile team is the simple and open flow of information between its members. In a co-located setting, peripheral awareness, osmotic communication and simple information radiators support agile principles such as collective ownership, minimal documentation and simple design, and facilitate smooth collaboration. However in a dispersed agile team, where individual team members are distributed across several sites, these mechanisms are not available and information sharing has to be more explicit. Research into distributed software development has been tackling similar issues, but little work has been reported into dispersed agile teams. This paper reports on a field study of one successful partially dispersed agile team. Using a distributed cognition analysis which focuses on information propagation and transformation within the team we investigate how the team collaborates and compare our findings with those from co-located teams.", "title": "" }, { "docid": "4bf9ec9d1600da4eaffe2bfcc73ee99f", "text": "Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. 
Nowadays, large amount of data and information are available, Data can now be stored in many different kinds of databases and information repositories, being available on the Internet. There is a need for powerful techniques for better interpretation of these data that exceeds the human's ability for comprehension and making decision in a better way. There are data mining, web mining and knowledge discovery tools and software packages such as WEKA Tool and RapidMiner tool. The work deals with analysis of WEKA, RapidMiner and NetTools spider tools KNIME and Orange. There are various tools available for data mining and web mining. Therefore awareness is required about the quantitative investigation of these tools. This paper focuses on various functional, practical, cognitive as well as analysis aspects that users may be looking for in the tools. Complete study addresses the usefulness and importance of these tools including various aspects. Analysis presents various benefits of these data mining tools along with desired aspects and the features of current tools. KEYWORDSData Mining, KDD, Data Mining Tools.", "title": "" }, { "docid": "f9b60eaec9320b61db6edae9baeacbe2", "text": "The latest two international educational assessments found global prevalence of sleep deprivation in students, consistent with what has been reported in sleep research. However, despite the fundamental role of adequate sleep in cognitive and social functioning, this important issue has been largely overlooked by educational researchers. Drawing upon evidence from sleep research, literature on the heavy media use by children and adolescents, and data from web analytics on youth-oriented game sites and mobile analytics on youth-oriented game apps, we argue that heavy media use, particularly digital game play, may be an important contributor to sleep deprivation in students. Therefore, educational researchers, policy makers, teachers, and parents should pay greater attention to student sleep and develop programs and interventions to improve both quality and quantity of student sleep.", "title": "" }, { "docid": "46d3cec76fc52fb7141fc6d999931d6e", "text": "Numerous studies suggest that infants delivered by cesarean section are at a greater risk of non-communicable diseases than their vaginal counterparts. In particular, epidemiological studies have linked Cesarean delivery with increased rates of asthma, allergies, autoimmune disorders, and obesity. Mode of delivery has also been associated with differences in the infant microbiome. It has been suggested that these differences are attributable to the \"bacterial baptism\" of vaginal birth, which is bypassed in cesarean deliveries, and that the abnormal establishment of the early-life microbiome is the mediator of later-life adverse outcomes observed in cesarean delivered infants. This has led to the increasingly popular practice of \"vaginal seeding\": the iatrogenic transfer of vaginal microbiota to the neonate to promote establishment of a \"normal\" infant microbiome. In this review, we summarize and critically appraise the current evidence for a causal association between Cesarean delivery and neonatal dysbiosis. We suggest that, while Cesarean delivery is certainly associated with alterations in the infant microbiome, the lack of exposure to vaginal microbiota is unlikely to be a major contributing factor. 
Instead, it is likely that indication for Cesarean delivery, intrapartum antibiotic administration, absence of labor, differences in breastfeeding behaviors, maternal obesity, and gestational age are major drivers of the Cesarean delivery microbial phenotype. We, therefore, call into question the rationale for \"vaginal seeding\" and support calls for the halting of this practice until robust evidence of need, efficacy, and safety is available.", "title": "" }, { "docid": "e852aeee2d0b639fb6d21af02092d2cd", "text": "OBJECTIVE\nPsoriasis is a chronic immune-mediated disorder of the skin. The disease manifests itself with red or silvery scaly plaques distributing over the lower back, scalp, and extensor aspects of limbs. Several medications are available for the treatment of psoriasis; however, high rates of remission and side-effects still persist as a major concern. Siddha, one of the traditional systems of Indian medicine offers cure to many dermatological conditions, including psoriasis. The oil prepared from the leaves of Wrightia tinctoria is prescribed by many healers for the treatment of psoriasis. This work aims to decipher the mechanism of action of the W. tinctoria in curing psoriasis and its associated comorbidities.\n\n\nDESIGN\nThe work integrates various pharmacology approaches such as drug-likeness evaluation, oral bioavailability predictions, and network pharmacology approaches to understand the roles of various bioactive components of the herb.\n\n\nRESULTS\nThis work identified 67 compounds of W. tinctoria interacting with 238 protein targets. The compounds were found to act through synergistic mechanism in reviving the disrupted process in the diseased state.\n\n\nCONCLUSION\nThe results of this work not only shed light on the pharmacological action of the herb but also validate the usage of safe herbal drugs.", "title": "" }, { "docid": "fe30cb6b1643be8362c16743e0c7f70b", "text": "The peripheral nervous and immune systems are traditionally thought of as serving separate functions. The line between them is, however, becoming increasingly blurred by new insights into neurogenic inflammation. Nociceptor neurons possess many of the same molecular recognition pathways for danger as immune cells, and, in response to danger, the peripheral nervous system directly communicates with the immune system, forming an integrated protective mechanism. The dense innervation network of sensory and autonomic fibers in peripheral tissues and high speed of neural transduction allows rapid local and systemic neurogenic modulation of immunity. Peripheral neurons also seem to contribute to immune dysfunction in autoimmune and allergic diseases. Therefore, understanding the coordinated interaction of peripheral neurons with immune cells may advance therapeutic approaches to increase host defense and suppress immunopathology.", "title": "" }, { "docid": "085ec38c3e756504be93ac0b94483cea", "text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. 
Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.", "title": "" }, { "docid": "ca744faaebd2f9709cdbb5c4ba80ac56", "text": "We explore the relationship between time and relevance using TREC ad-hoc queries. A type of query is identified that favors very recent documents. We propose a time-based language model approach to retrieval for these queries. We show how time can be incorporated into both query-likelihood models and relevance models. These models were used for experiments comparing time-based language models to heuristic techniques for incorporating document recency in the ranking. Our results show that time-based models perform as well as or better than the best of the heuristic techniques.", "title": "" }, { "docid": "ae497143f2c1b15623ab35b360d954e5", "text": "With the popularity of social media (e.g., Facebook and Flicker), users could easily share their check-in records and photos during their trips. In view of the huge amount of check-in data and photos in social media, we intend to discover travel experiences to facilitate trip planning. Prior works have been elaborated on mining and ranking existing travel routes from check-in data. We observe that when planning a trip, users may have some keywords about preference on his/her trips. Moreover, a diverse set of travel routes is needed. To provide a diverse set of travel routes, we claim that more features of Places of Interests (POIs) should be extracted. Therefore, in this paper, we propose a Keyword-aware Skyline Travel Route (KSTR) framework that use knowledge extraction from historical mobility records and the user's social interactions. Explicitly, we model the \"Where, When, Who\" issues by featurizing the geographical mobility pattern, temporal influence and social influence. Then we propose a keyword extraction module to classify the POI-related tags automatically into different types, for effective matching with query keywords. We further design a route reconstruction algorithm to construct route candidates that fulfill the query inputs. To provide diverse query results, we explore Skyline concepts to rank routes. To evaluate the effectiveness and efficiency of the proposed algorithms, we have conducted extensive experiments on real location-based social network datasets, and the experimental results show that KSTR does indeed demonstrate good performance compared to state-of-the-art works.", "title": "" }, { "docid": "3072b7d80b0e9afffe6489996eca19aa", "text": "Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human interventions remains a challenging task. 
In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with the transfer learning technology, was used to first process MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. Particularly, ITCN exploited a convolutional neural network (CNN) with deeper architecture and smaller kernel. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain the promising segmentation results and had a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.", "title": "" }, { "docid": "12f4242c16c1d73fded4cb32ccc938ea", "text": "Cloud Computing is a form of distributed computing wherein resources and application platforms are distributed over the Internet through on demand and pay on utilization basis. Data Storage is main feature that cloud data centres are provided to the companies/organizations to preserve huge data. But still few organizations are not ready to use cloud technology due to lack of security. This paper describes the different techniques along with few security challenges, advantages and also disadvantages. It also provides the analysis of data security issues and privacy protection affairs related to cloud computing by preventing data access from unauthorized users, managing sensitive data, providing accuracy and consistency of data stored.", "title": "" }, { "docid": "1c2dae29ed066eec72e72c1173bd263d", "text": "Wireless Sensor Networks (WSNs) are important and necessary platforms for the future as the concept \"Internet of Things\" has emerged lately. They are used for monitoring, tracking, or controlling of many applications in industry, health care, habitat, and military. However, the quality of data collected by sensor nodes is affected by anomalies that occur due to various reasons, such as node failures, reading errors, unusual events, and malicious attacks. Therefore, anomaly detection is a necessary process to ensure the quality of sensor data before it is utilized for making decisions. In this review, we present the challenges of anomaly detection in WSNs and state the requirements to design efficient and effective anomaly detection models. We then review the latest advancements of data anomaly detection research in WSNs and classify current detection approaches in five main classes based on the detection methods used to design these approaches. 
Varieties of the state-of-the-art models for each class are covered and their limitations are highlighted to provide ideas for potential future works. Furthermore, the reviewed approaches are compared and evaluated based on how well they meet the stated requirements. Finally, the general limitations of current approaches are mentioned and further research opportunities are suggested and discussed.", "title": "" }, { "docid": "e777ccaaeade3c4fe66c2bd23dec920b", "text": "Text classification is becoming more important with the proliferation of the Internet and the huge amount of data it transfers. We present an efficient algorithm for text classification using hierarchical classifiers based on a concept hierarchy. The simple TFIDF classifier is chosen to train sample data and to classify other new data. Despite its simplicity, results of experiments on Web pages and TV closed captions demonstrate high classification accuracy. Application of feature subset selection techniques improves the performance. Our algorithm is computationally efficient being bounded by O(n log n) for n samples.", "title": "" }, { "docid": "79e6d47a27d8271ae0eaa0526df241a7", "text": "A DC-DC buck converter capable of handling loads from 20 μA to 100 mA and operating off a 2.8-4.2 V battery is implemented in a 45 nm CMOS process. In order to handle high battery voltages in this deeply scaled technology, multiple transistors are stacked in the power train. Switched-Capacitor DC-DC converters are used for internal rail generation for stacking and supplies for control circuits. An I-C DAC pulse width modulator with sleep mode control is proposed which is both area and power-efficient as compared with previously published pulse width modulator schemes. Both pulse frequency modulation (PFM) and pulse width modulation (PWM) modes of control are employed for the wide load range. The converter achieves a peak efficiency of 75% at 20 μA, 87.4% at 12 mA in PFM, and 87.2% at 53 mA in PWM.", "title": "" }, { "docid": "461062a51b0c33fcbb0f47529f3a6fba", "text": "Release of ATP from astrocytes is required for Ca2+ wave propagation among astrocytes and for feedback modulation of synaptic functions. However, the mechanism of ATP release and the source of ATP in astrocytes are still not known. Here we show that incubation of astrocytes with FM dyes leads to selective labelling of lysosomes. Time-lapse confocal imaging of FM dye-labelled fluorescent puncta, together with extracellular quenching and total-internal-reflection fluorescence microscopy (TIRFM), demonstrated directly that extracellular ATP or glutamate induced partial exocytosis of lysosomes, whereas an ischaemic insult with potassium cyanide induced both partial and full exocytosis of these organelles. We found that lysosomes contain abundant ATP, which could be released in a stimulus-dependent manner. Selective lysis of lysosomes abolished both ATP release and Ca2+ wave propagation among astrocytes, implicating physiological and pathological functions of regulated lysosome exocytosis in these cells.", "title": "" }, { "docid": "7dfef5a8009b8ccd9ddd3d60c3d52cdb", "text": "One long-term goal of machine learning research is to produce methods that are applicable to highly complex tasks, such as perception (vision, audition), reasoning, intelligent control, and other artificially intelligent behaviors. 
We argue that in order to progress toward this goal, the Machine Learning community must endeavor to discover algorithms that can learn highly complex functions, with minimal need for prior knowledge, and with minimal human intervention. We present mathematical and empirical evidence suggesting that many popular approaches to non-parametric learning, particularly kernel methods, are fundamentally limited in their ability to learn complex high-dimensional functions. Our analysis focuses on two problems. First, kernel machines are shallow architectures, in which one large layer of simple template matchers is followed by a single layer of trainable coefficients. We argue that shallow architectures can be very inefficient in terms of required number of computational elements and examples. Second, we analyze a limitation of kernel machines with a local kernel, linked to the curse of dimensionality, that applies to supervised, unsupervised (manifold learning) and semi-supervised kernel machines. Using empirical results on invariant image recognition tasks, kernel methods are compared with deep architectures, in which lower-level features or concepts are progressively combined into more abstract and higher-level representations. We argue that deep architectures have the potential to generalize in non-local ways, i.e., beyond immediate neighbors, and that this is crucial in order to make progress on the kind of complex tasks required for artificial intelligence.", "title": "" } ]
scidocsrr
2c8e81e0f78e1db5e571557dec3301c2
Gamification - A Structured Analysis
[ { "docid": "372ab07026a861acd50e7dd7c605881d", "text": "This paper reviews peer-reviewed empirical studies on gamification. We create a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances. The literature review covers results, independent variables (examined motivational affordances), dependent variables (examined psychological/behavioral outcomes from gamification), the contexts of gamification, and types of studies performed on the gamified systems. The paper examines the state of current research on the topic and points out gaps in existing literature. The review indicates that gamification provides positive effects, however, the effects are greatly dependent on the context in which the gamification is being implemented, as well as on the users using it. The findings of the review provide insight for further studies as well as for the design of gamified systems.", "title": "" }, { "docid": "633c906446a11252c3ab9e0aad20189c", "text": "The term \" gamification \" is generally used to denote the application of game mechanisms in non‐gaming environments with the aim of enhancing the processes enacted and the experience of those involved. In recent years, gamification has become a catchword throughout the fields of education and training, thanks to its perceived potential to make learning more motivating and engaging. This paper is an attempt to shed light on the emergence and consolidation of gamification in education/training. It reports the results of a literature review that collected and analysed around 120 papers on the topic published between 2011 and 2014. These originate from different countries and deal with gamification both in training contexts and in formal educational, from primary school to higher education. The collected papers were analysed and classified according to various criteria, including target population, type of research (theoretical vs experimental), kind of educational contents delivered, and the tools deployed. The results that emerge from this study point to the increasing popularity of gamification techniques applied in a wide range of educational settings. At the same time, it appears that over the last few years the concept of gamification has become more clearly defined in the minds of researchers and practitioners. Indeed, until fairly recently the term was used by many to denote the adoption of game artefacts (especially digital ones) as educational tools for learning a specific subject such as algebra. In other words, it was used as a synonym of Game Based Learning (GBL) rather than to identify an educational strategy informing the overall learning process, which is treated globally as a game or competition. However, this terminological confusion appears only in a few isolated cases in this literature review, suggesting that a certain level of taxonomic and epistemological convergence is underway.", "title": "" } ]
[ { "docid": "489aa160c450539b50c63c6c3c6993ab", "text": "Adequacy of citations is very important for a scientific paper. However, it is not an easy job to find appropriate citations for a given context, especially for citations in different languages. In this paper, we define a novel task of cross-language context-aware citation recommendation, which aims at recommending English citations for a given context of the place where a citation is made in a Chinese paper. This task is very challenging because the contexts and citations are written in different languages and there exists a language gap when matching them. To tackle this problem, we propose the bilingual context-citation embedding algorithm (i.e. BLSRec-I), which can learn a low-dimensional joint embedding space for both contexts and citations. Moreover, two advanced algorithms named BLSRec-II and BLSRec-III are proposed by enhancing BLSRec-I with translation results and abstract information, respectively. We evaluate the proposed methods based on a real dataset that contains Chinese contexts and English citations. The results demonstrate that our proposed algorithms can outperform a few baselines and the BLSRec-II and BLSRec-III methods can outperform the BLSRec-I method.", "title": "" }, { "docid": "2761ebc7958e27cad7972fd1b9f027dc", "text": "In this work we describe the design, implementation and evaluation of a novel eye tracker for context-awareness and mobile HCI applications. In contrast to common systems using video cameras, this compact device relies on Electrooculography (EOG). It consists of goggles with dry electrodes integrated into the frame and a small pocket-worn component with a DSP for real-time EOG signal processing. The device is intended for wearable and standalone use: It can store data locally for long-term recordings or stream processed EOG signals to a remote device over Bluetooth. We describe how eye gestures can be efficiently recognised from EOG signals for HCI purposes. In an experiment conducted with 11 subjects playing a computer game we show that 8 eye gestures of varying complexity can be continuously recognised with equal performance to a state-of-the-art video-based system. Physical activity leads to artefacts in the EOG signal. We describe how these artefacts can be removed using an adaptive filtering scheme and characterise this approach on a 5-subject dataset. In addition to explicit eye movements for HCI, we discuss how the analysis of unconscious eye movements may eventually allow to deduce information on user activity and context not available with current sensing modalities.", "title": "" }, { "docid": "e3d9d30900b899bcbf54cbd1b5479713", "text": "A new test method has been implemented for testing the EMC performance of small components like small connectors and IC's, mainly used in mobile applications. The test method is based on the EMC-stripline method. Both emission and immunity can be tested up to 6GHz, based on good RF matching conditions and with high field strengths.", "title": "" }, { "docid": "59718c2e471dfaf0fb7463a89312813a", "text": "Many large Internet websites are accessed by users anonymously, without requiring registration or logging-in. However, to provide personalized service these sites build anonymous, yet persistent, user models based on repeated user visits. Cookies, issued when a web browser first visits a site, are typically employed to anonymously associate a website visit with a distinct user (web browser). 
However, users may reset cookies, making such association short-lived and noisy. In this paper we propose a solution to the cookie churn problem: a novel algorithm for grouping similar cookies into clusters that are more persistent than individual cookies. Such clustering could potentially allow more robust estimation of the number of unique visitors of the site over a certain long time period, and also better user modeling which is key to plenty of web applications such as advertising and recommender systems.\n We present a novel method to cluster browser cookies into groups that are likely to belong to the same browser based on a statistical model of browser visitation patterns. We address each step of the clustering as a binary classification problem estimating the probability that two different subsets of cookies belong to the same browser. We observe that our clustering problem is a generalized interval graph coloring problem, and propose a greedy heuristic algorithm for solving it. The scalability of this method allows us to cluster hundreds of millions of browser cookies and provides significant improvements over baselines such as constrained K-means.", "title": "" }, { "docid": "49a90af27457eb2acbbcc2ffec3f2c5c", "text": "In this work, we propose a new framework for recognizing RGB images captured by the conventional cameras by leveraging a set of labeled RGB-D data, in which the depth features can be additionally extracted from the depth images. We formulate this task as a new unsupervised domain adaptation (UDA) problem, in which we aim to take advantage of the additional depth features in the source domain and also cope with the data distribution mismatch between the source and target domains. To effectively utilize the additional depth features, we seek two optimal projection matrices to map the samples from both domains into a common space by preserving as much as possible the correlations between the visual features and depth features. To effectively employ the training samples from the source domain for learning the target classifier, we reduce the data distribution mismatch by minimizing the Maximum Mean Discrepancy (MMD) criterion, which compares the data distributions for each type of feature in the common space. Based on the above two motivations, we propose a new SVM based objective function to simultaneously learn the two projection matrices and the optimal target classifier in order to well separate the source samples from different classes when using each type of feature in the common space. An efficient alternating optimization algorithm is developed to solve our new objective function. Comprehensive experiments for object recognition and gender recognition demonstrate the effectiveness of our proposed approach for recognizing RGB images by learning from RGB-D data.", "title": "" }, { "docid": "c8ca57db545f2d1f70f3640651bb3e79", "text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. 
Furthermore, a useful selection of references is given at the end of each chapter. This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem? The answer is, any way that works.\" (Richard P. Feynman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.", "title": "" }, { "docid": "b6c0228cce65009d4d56ce8fcebe083c", "text": "In this tutorial, we give an introduction to the field of and state of the art in music information retrieval (MIR). The tutorial particularly spotlights the question of music similarity, which is an essential aspect in music retrieval and recommendation. Three factors play a central role in MIR research: (1) the music content, i.e., the audio signal itself, (2) the music context, i.e., metadata in the widest sense, and (3) the listeners and their contexts, manifested in user-music interaction traces. We review approaches that extract features from all three data sources and combinations thereof and show how these features can be used for (large-scale) music indexing, music description, music similarity measurement, and recommendation. These methods are further showcased in a number of popular music applications, such as automatic playlist generation and personalized radio stationing, location-aware music recommendation, music search engines, and intelligent browsing interfaces. Additionally, related topics such as music identification, automatic music accompaniment and score following, and search and retrieval in the music production domain are discussed.", "title": "" }, { "docid": "024bd96e78ca1921e94a6df27084dd11", "text": "Prescription fraud is a main problem that causes substantial monetary loss in health care systems. We aimed to develop a model for detecting cases of prescription fraud and test it on real world data from a large multi-center medical prescription database. Conventionally, prescription fraud detection is conducted on random samples by human experts. However, the samples might be misleading and manual detection is costly. We propose a novel distance-based data-mining approach for assessing the fraudulent risk of prescriptions regarding cross-features. Final tests have been conducted on an adult cardiac surgery database. 
The results obtained from experiments reveal that the proposed model works considerably well with a true positive rate of 77.4% and a false positive rate of 6% for the fraudulent medical prescriptions. The proposed model has the potential advantages including on-line risk prediction for prescription fraud, off-line analysis of high-risk prescriptions by human experts, and self-learning ability by regular updates of the integrative data sets. We conclude that incorporating such a system in health authorities, social security agencies and insurance companies would improve efficiency of internal review to ensure compliance with the law, and radically decrease human-expert auditing costs.", "title": "" }, { "docid": "dba804ec55201a683e8f4d82dbd15b6a", "text": "We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras or even to our human visual system. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera (e.g., the Microsoft Kinect camera) with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or a simple parametric surface. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.", "title": "" }, { "docid": "7754aa9e4978b28c00a739d4918e3b3a", "text": "This paper considers two dimensional valence-arousal model. Pictorial stimuli of International Affective Picture Systems were chosen for emotion elicitation. Physiological signals like, Galvanic Skin Response, Heart Rate, Respiration Rate and Skin Temperature were measured for accessing emotional responses. The experimental procedure uses non-invasive sensors for signal collection. A group of healthy volunteers was shown four types of emotional stimuli categorized as High Valence High Arousal, High Valence Low Arousal, Low Valence High Arousal and Low Valence Low Arousal for around thirty minutes for emotion elicitation. Linear and Quadratic Discriminant Analysis are used and compared to the emotional class classification. Classification of stimuli into one of the four classes has been attempted on the basis of measurements on responses of experimental subjects. If classification is restricted within the responses of a specific individual, the classification results show high accuracy. 
However, if the problem is extended to entire population, the accuracy drops significantly.", "title": "" }, { "docid": "647ede4f066516a0343acef725e51d01", "text": "This work proposes a dual-polarized planar antenna; two post-wall slotted waveguide arrays with orthogonal 45/spl deg/ linearly-polarized waves interdigitally share the aperture on a single layer substrate. Uniform excitation of the two-dimensional slot array is confirmed by experiment in the 25 GHz band. The isolation between two slot arrays is also investigated in terms of the relative displacement along the radiation waveguide axis in the interdigital structure. The isolation is 33.0 dB when the relative shift of slot position between the two arrays is -0.5/spl lambda//sub g/, while it is only 12.8 dB when there is no shift. The cross-polarization level in the far field is -25.2 dB for a -0.5/spl lambda//sub g/ shift, which is almost equal to that of the isolated single polarization array. It is degraded down to -9.6 dB when there is no shift.", "title": "" }, { "docid": "9a6ee40c3cd66ade4c9e1401505ec321", "text": "Secretion of saliva to aid swallowing and digestion is an important physiological function found in many vertebrates and invertebrates. Pavlov reported classical conditioning of salivation in dogs a century ago. Conditioning of salivation, however, has been so far reported only in dogs and humans, and its underlying neural mechanisms remain elusive because of the complexity of the mammalian brain. We previously reported that, in cockroaches Periplaneta americana, salivary neurons that control salivation exhibited increased responses to an odor after conditioning trials in which the odor was paired with sucrose solution. However, no direct evidence of conditioning of salivation was obtained. In this study, we investigated the effects of conditioning trials on the level of salivation. Untrained cockroaches exhibited salivary responses to sucrose solution applied to the mouth but not to peppermint or vanilla odor applied to an antenna. After differential conditioning trials in which an odor was paired with sucrose solution and another odor was presented without pairing with sucrose solution, sucrose-associated odor induced an increase in the level of salivation, but the odor presented alone did not. The conditioning effect lasted for one day after conditioning trials. This study demonstrates, for the first time, classical conditioning of salivation in species other than dogs and humans, thereby providing the first evidence of sophisticated neural control of autonomic function in insects. The results provide a useful model system for studying cellular basis of conditioning of salivation in the simpler nervous system of insects.", "title": "" }, { "docid": "6222f6b36a094540d1033b77db1efac0", "text": "Sequence-to-sequence deep learning has recently emerged as a new paradigm in supervised learning for spoken language understanding. However, most of the previous studies explored this framework for building single domain models for each task, such as slot filling or domain classification, comparing deep learning based approaches with conventional ones like conditional random fields. This paper proposes a holistic multi-domain, multi-task (i.e. 
slot filling, domain and intent detection) modeling approach to estimate complete semantic frames for all user utterances addressed to a conversational system, demonstrating the distinctive power of deep learning methods, namely bi-directional recurrent neural network (RNN) with long-short term memory (LSTM) cells (RNN-LSTM) to handle such complexity. The contributions of the presented work are three-fold: (i) we propose an RNN-LSTM architecture for joint modeling of slot filling, intent determination, and domain classification; (ii) we build a joint multi-domain model enabling multi-task deep learning where the data from each domain reinforces each other; (iii) we investigate alternative architectures for modeling lexical context in spoken language understanding. In addition to the simplicity of the single model framework, experimental results show the power of such an approach on Microsoft Cortana real user data over alternative methods based on single domain/task deep learning.", "title": "" }, { "docid": "a306ea0a425a00819b81ea7f52544cfb", "text": "Early research in electronic markets seemed to suggest that E-Commerce transactions would result in decreased costs for buyers and sellers alike, and would therefore ultimately lead to the elimination of intermediaries from electronic value chains. However, a careful analysis of the structure and functions of electronic marketplaces reveals a different picture. Intermediaries provide many value-adding functions that cannot be easily substituted or ‘internalised’ through direct supplier-buyer dealings, and hence mediating parties may continue to play a significant role in the E-Commerce world. In this paper we provide an analysis of the potential roles of intermediaries in electronic markets and we articulate a number of hypotheses for the future of intermediation in such markets. Three main scenarios are discussed: the disintermediation scenario where market dynamics will favour direct buyer-seller transactions, the reintermediation scenario where traditional intermediaries will be forced to differentiate themselves and reemerge in the electronic marketplace, and the cybermediation scenario where wholly new markets for intermediaries will be created. The analysis suggests that the likelihood of each scenario dominating a given market is primarily dependent on the exact functions that intermediaries play in each case. A detailed discussion of such functions is presented in the paper, together with an analysis of likely outcomes in the form of a contingency model for intermediation in electronic markets.", "title": "" }, { "docid": "318aa0dab44cca5919100033aa692cd9", "text": "Text classification is one of the important research issues in the field of text mining, where the documents are classified with supervised knowledge. In literature we can find many text representation schemes and classifiers/learning algorithms used to classify text documents to the predefined categories. In this paper, we present various text representation schemes and compare different classifiers used to classify text documents to the predefined classes. The existing methods are compared and contrasted based on qualitative parameters viz., criteria used for classification, algorithms adopted and classification time complexities.", "title": "" }, { "docid": "742808e23275a17591e3700fe21319a8", "text": "Digital games have become a key player in the entertainment industry, attracting millions of new players each year. 
In spite of that, novice players may have a hard time when playing certain types of games, such as MOBAs and MMORPGs, due to their steep learning curves and not so friendly online communities. In this paper, we present an approach to help novice players in MOBA games overcome these problems. An artificial intelligence agent plays alongside the player analysing his/her performance and giving tips about the game. Experiments performed with the game League of Legends show the potential of this approach.", "title": "" }, { "docid": "e7fb9b39e0fee3924ff67ea74c85a36f", "text": "A method of using a simple single parasitic element to significantly enhance a 3-dB axial-ratio (AR) bandwidth of a crossed dipole antenna is presented. This approach is verified by introducing one bowtie-shaped parasitic element between two arms of crossed bowtie dipoles to generate additional resonance and thereby significantly broadening AR bandwidth. The final design, with an overall size of 0.79 <italic>λ<sub>o</sub></italic> × 0.79 <italic> λ<sub>o</sub></italic> × 0.27 <italic>λ<sub>o</sub></italic> (<italic>λ<sub>o</sub></italic> is the free-space wavelength of circular polarized center frequency), resulted in very large measured –10-dB impedance and 3-dB AR bandwidths ranging from 1.9 to 3.9 GHz (∼68.9%) and from 2.05 to 3.75 GHz (∼58.6%), respectively. In addition, the antenna also yielded a right-hand circular polarization with stable far-field radiation patterns and an average broadside gain of approximately 8.2 dBic.", "title": "" }, { "docid": "eed70d4d8bfbfa76382bfc32dd12c3db", "text": "Three studies tested basic assumptions derived from a theoretical model based on the dissociation of automatic and controlled processes involved in prejudice. Study 1 supported the model's assumption that highand low-prejudice persons are equally knowledgeable of the cultural stereotype. The model suggests that the stereotype is automatically activated in the presence of a member (or some symbolic equivalent) of the stereotyped group and that low-prejudice responses require controlled inhibition of the automatically activated stereotype. Study 2, which examined the effects of automatic stereotype activation on the evaluation of ambiguous stereotype-relevant behaviors performed by a race-unspecified person, suggested that when subjects' ability to consciously monitor stereotype activation is precluded, both highand low-prejudice subjects produce stereotype-congruent evaluations of ambiguous behaviors. Study 3 examined highand low-prejudice subjects' responses in a consciously directed thought-listing task. Consistent with the model, only low-prejudice subjects inhibited the automatically activated stereotype-congruent thoughts and replaced them with thoughts reflecting equality and negations of the stereotype. The relation between stereotypes and prejudice and implications for prejudice reduction are discussed.", "title": "" }, { "docid": "10a6bccb77b6b94149c54c9e343ceb6c", "text": "Clone detectors find similar code fragments (i.e., instances of code clones) and report large numbers of them for industrial systems. To maintain or manage code clones, developers often have to investigate differences of multiple cloned code fragments. However,existing program differencing techniques compare only two code fragments at a time. Developers then have to manually combine several pairwise differencing results. In this paper, we present an approach to automatically detecting differences across multiple clone instances. 
We have implemented our approach as an Eclipse plugin and evaluated its accuracy with three Java software systems. Our evaluation shows that our algorithm has precision over 97.66% and recall over 95.63% in three open source Java projects. We also conducted a user study of 18 developers to evaluate the usefulness of our approach for eight clone-related refactoring tasks. Our study shows that our approach can significantly improve developers’performance in refactoring decisions, refactoring details, and task completion time on clone-related refactoring tasks. Automatically detecting differences across multiple clone instances also opens opportunities for building practical applications of code clones in software maintenance, such as auto-generation of application skeleton, intelligent simultaneous code editing.", "title": "" }, { "docid": "c0fa95fb2921b540722af806cb198f95", "text": "Spin transfer torque-based magnetic random access memory (STT-MRAM) is considered as one of the most promising candidates for the next generation of nonvolatile memories; however, its storage density and reliability are currently uncompetitive compared with other nonvolatile memories (e.g., NAND flash). In this paper, a dual-functional memory cell structure, named DFSTT-MRAM, is proposed by stacking multiple magnetic tunnel junctions (MTJs) on top of the CMOS access transistor. In such a structure, the cell can be dynamically configured between two possible functional modes, i.e., high-reliability mode (HR-mode) and multilevel cell mode (MLC-mode), based on the data requirements of the addressed applications. The DFSTT-MRAM cell was electrically modeled based on the perpendicular magnetic anisotropy CoFeB/MgO/CoFeB MTJ integrating the STT stochastic switching behaviors. Transient and Monte Carlo simulations were then performed to evaluate its MLC functionality and reliability performance. Our evaluation results show that the DFSTT-MRAM cell can indeed realize MLC capability providing proper write/read control at the MLC-mode and enhance the intrinsic cell reliability by several orders of magnitude when operating at the HR-mode. This cell structure provides a flexible memory cell design for future advanced applications, such as high-density nonvolatile memory and neuromorphic circuits.", "title": "" } ]
scidocsrr
19c9839ac00bd9241680722c55b0132a
Protocols for self-organization of a wireless sensor network
[ { "docid": "3be38e070678e358e23cb81432033062", "text": "W ireless integrated network sensors (WINS) provide distributed network and Internet access to sensors, controls, and processors deeply embedded in equipment, facilities, and the environment. The WINS network represents a new monitoring and control capability for applications in such industries as transportation, manufacturing, health care, environmental oversight, and safety and security. WINS combine microsensor technology and low-power signal processing, computation, and low-cost wireless networking in a compact system. Recent advances in integrated circuit technology have enabled construction of far more capable yet inexpensive sensors, radios, and processors, allowing mass production of sophisticated systems linking the physical world to digital data networks [2–5]. Scales range from local to global for applications in medicine, security, factory automation, environmental monitoring, and condition-based maintenance. Compact geometry and low cost allow WINS to be embedded and distributed at a fraction of the cost of conventional wireline sensor and actuator systems. WINS opportunities depend on development of a scalable, low-cost, sensor-network architecture. Such applications require delivery of sensor information to the user at a low bit rate through low-power transceivers. Continuous sensor signal processing enables the constant monitoring of events in an environment in which short message packets would suffice. Future applications of distributed embedded processors and sensors will require vast numbers of devices. Conventional methods of sensor networking represent an impractical demand on cable installation and network bandwidth. Processing at the source would drastically reduce the financial, computational, and management burden on communication system", "title": "" } ]
[ { "docid": "51b1f69c4bdc5fd034f482ad9ffa4549", "text": "The synapse is the focus of experimental research and theory on the cellular mechanisms of nervous system plasticity and learning, but recent research is expanding the consideration of plasticity into new mechanisms beyond the synapse, notably including the possibility that conduction velocity could be modifiable through changes in myelin to optimize the timing of information transmission through neural circuits. This concept emerges from a confluence of brain imaging that reveals changes in white matter in the human brain during learning, together with cellular studies showing that the process of myelination can be influenced by action potential firing in axons. This Opinion article summarizes the new research on activity-dependent myelination, explores the possible implications of these studies and outlines the potential for new research.", "title": "" }, { "docid": "cb4855d39d21bd525bd929b551dede7e", "text": "There is an established and growing body of evidence highlighting that music can influence behavior across a range of diverse domains (Miell, MacDonald, & Hargreaves 2005). One area of interest is the monitoring of \"internal timing mechanisms\", with features such as tempo, liking, perceived affective nature and everyday listening contexts implicated as important (North & Hargreaves, 2008). The current study addresses these issues by comparing the effects of self-selected and experimenter-selected music (fast and slow) on actual and perceived performance of a driving game activity. Seventy participants completed three laps of a driving game in seven sound conditions: (1) silence; (2) car sounds; (3) car sounds with self-selected music, and car sounds with experimenter-selected music; (4) high-arousal (70 bpm); (5) high-arousal (130 bpm); (6) low-arousal (70 bpm); and (7) low-arousal (130 bpm) music. Six performance measures (time, accuracy, speed, and retrospective perception of these), and four experience measures (perceived distraction, liking, appropriateness and enjoyment) were taken. Exposure to self-selected music resulted in overestimation of elapsed time and inaccuracy, while benefiting accuracy and experience. In contrast, exposure to experimenter-selected music resulted in poorest performance and experience. Increasing the tempo of experimenter-selected music resulted in faster performance and increased inaccuracy for high-arousal music, but did not impact experience. It is suggested that personal meaning and subjective associations connected to self-selected music promoted increased engagement with the activity, overriding detrimental effects attributed to unfamiliar, less liked and less appropriate experimenter-selected music.", "title": "" }, { "docid": "1ecb4a3e5651bf47febc45c4a1f747a2", "text": "Cloud services and applications prove indispensable amid today’s modern utility-based computing. The cloud has displayed a disruptive and growing impact on everyday computing tasks. However, facilitating the orchestration of cloud resources to build such cloud services and applications is yet to unleash its entire magnitude of power. Accordingly, it is paramount to devise a unified and comprehensive analysis framework to accelerate fundamental understanding of cloud resource orchestration in terms of concepts, paradigms, languages, models, and tools. 
This framework is essential to empower effective research, comprehension, comparison, and selection of cloud resource orchestration models, languages, platforms, and tools. This article provides such a comprehensive framework while analyzing the relevant state of the art in cloud resource orchestration from a novel and holistic viewpoint.", "title": "" }, { "docid": "93d8c3204c37cd1762f50935dea05ab6", "text": "With the introduction of the Electric Health Records (EHR), large amounts of digital data become available for analysis and decision support [1]. These data, as soon as carefully cleaned and well preprocessed, could enable a large variety of analysis and modeling tasks that can improve healthcare services and patients experience [2]. When physicians are prescribing treatments to a patient, they need to consider a large range of data variety and volume. These data might include patients’ genetic profiles and their entire historical clinical protocols. With the growing amounts of data decision making becomes increasingly complex. Machine learning based Clinical Decision Support (CDS) systems can be a solution to the data challenges [3] [4] [5]. Machine learning models and decision support systems have been proven to be capable of handling —and actually even profiting from— large amount of data in high dimensional space and with complex dependency characteristics. Some powerful machine learning models generate abstract and yet informative features from a usually sparse feature space.", "title": "" }, { "docid": "9b06026e998df745d820fbd835554b13", "text": "There have been significant advances in the field of Internet of Things (IoT) recently. At the same time there exists an ever-growing demand for ubiquitous healthcare systems to improve human health and well-being. In most of IoT-based patient monitoring systems, especially at smart homes or hospitals, there exists a bridging point (i.e., gateway) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks. These gateways have beneficial knowledge and constructive control over both the sensor network and the data to be transmitted through the Internet. In this paper, we exploit the strategic position of such gateways to offer several higher-level services such as local storage, real-time local data processing, embedded data mining, etc., proposing thus a Smart e-Health Gateway. By taking responsibility for handling some burdens of the sensor network and a remote healthcare center, a Smart e-Health Gateway can cope with many challenges in ubiquitous healthcare systems such as energy efficiency, scalability, and reliability issues. A successful implementation of Smart e-Health Gateways enables massive deployment of ubiquitous health monitoring systems especially in clinical environments. We also present a case study of a Smart e-Health Gateway called UTGATE where some of the discussed higher-level features have been implemented. Our proof-of-concept design demonstrates an IoT-based health monitoring system with enhanced overall system energy efficiency, performance, interoperability, security, and reliability.", "title": "" }, { "docid": "c96dbf6084741f8b529e8a1de19cf109", "text": "Metamorphic testing is an advanced technique to test programs without a true test oracle such as machine learning applications. 
Because these programs have no general oracle to identify their correctness, traditional testing techniques such as unit testing may not be helpful for developers to detect potential bugs. This paper presents a novel system, Kabu, which can dynamically infer properties of methods' states in programs that describe the characteristics of a method before and after transforming its input. These Metamorphic Properties (MPs) are pivotal to detecting potential bugs in programs without test oracles, but most previous work relies solely on human effort to identify them and only considers MPs between input parameters and output result (return value) of a program or method. This paper also proposes a testing concept, Metamorphic Differential Testing (MDT). By detecting different sets of MPs between different versions for the same method, Kabu reports potential bugs for human review. We have performed a preliminary evaluation of Kabu by comparing the MPs detected by humans with the MPs detected by Kabu. Our preliminary results are promising: Kabu can find more MPs than human developers, and MDT is effective at detecting function changes in methods.", "title": "" }, { "docid": "38ae190a4a81a33dd818403723505f29", "text": "We propose a novel deep learning model for joint document-level entity disambiguation, which leverages learned neural representations. Key components are entity embeddings, a neural attention mechanism over local context windows, and a differentiable joint inference stage for disambiguation. Our approach thereby combines benefits of deep learning with more traditional approaches such as graphical models and probabilistic mention-entity maps. Extensive experiments show that we are able to obtain competitive or stateof-the-art accuracy at moderate computational costs.", "title": "" }, { "docid": "2e9b98fbb1fa15020b374dbd48fb5adc", "text": "Recently, bipolar fuzzy sets have been studied and applied a bit enthusiastically and a bit increasingly. In this paper we prove that bipolar fuzzy sets and [0,1](2)-sets (which have been deeply studied) are actually cryptomorphic mathematical notions. Since researches or modelings on real world problems often involve multi-agent, multi-attribute, multi-object, multi-index, multi-polar information, uncertainty, or/and limit process, we put forward (or highlight) the notion of m-polar fuzzy set (actually, [0,1] (m)-set which can be seen as a generalization of bipolar fuzzy set, where m is an arbitrary ordinal number) and illustrate how many concepts have been defined based on bipolar fuzzy sets and many results which are related to these concepts can be generalized to the case of m-polar fuzzy sets. We also give examples to show how to apply m-polar fuzzy sets in real world problems.", "title": "" }, { "docid": "92e379d5f1dea6c9368e0ae6bd3005f0", "text": "With the General Data Protection Regulation there will be a legal obligation for controllers to conduct a Data Protection Impact Assessment for the first time. This paper examines the new provisions in detail and examines ways for their successful implementation. It proposes a process which operationalizes established requirements ensuring the appropriate attention to fundamental rights as warranted by the GDPR, incorporates the legislation’s new requirements and can be adapted to suit the controller’s needs.", "title": "" }, { "docid": "c4bcdd191b4d04368f12c967b361a7e1", "text": "Inductive concept learning is the task of learning to assign cases to a discrete set of classes. 
In real-world applications of concept learning, there are many different types of cost involved. The majority of the machine learning literature ignores all types of cost (unless accuracy is interpreted as a type of cost measure). A few papers have investigated the cost of misclassification errors. Very few papers have examined the many other types of cost. In this paper, we attempt to create a taxonomy of the different types of cost that are involved in inductive concept learning. This taxonomy may help to organize the literature on cost-sensitive learning. We hope that it will inspire researchers to investigate all types of cost in inductive concept learning in more depth.", "title": "" }, { "docid": "d71040311b8753299377b02023ba5b4c", "text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "title": "" }, { "docid": "5591247b2e28f436da302757d3f82122", "text": "This paper proposes the LPRNet end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA® GeForce™ GTX 1080 and 1.3 ms/plate on Intel® Core™ i7-6700K CPU. LPRNet consists of a lightweight Convolutional Neural Network, so it can be trained in an end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.", "title": "" }, { "docid": "0df20e2c58d625753518d5b906653c88", "text": "A novel broadband waveguide diplexer design serving two frequency bands with up to 20% bandwidth out of an overall frequency band of more than 65% (3.5 to 7.15 GHz) is established. 
Since the overall extreme frequency demands exceed the operating bands of standard rectangular waveguide types with sole fundamental mode propagation, three different waveguides are considered for the interfacing, namely a dual ridge type at the common port and different standard rectangular ones (WR229, WR137) at the ports serving the assigned frequency bands. A conductor loaded H-plane T-junction is used for the common branching facing at one port the common dual ridge waveguide interface of the diplexer. The other ports of the T-junction are directed to ridge waveguide filter types for the separation of the dedicated wide passbands with up to 20% bandwidth. The opposite ports of these filters are directly adapting to the assigned standard waveguide types for the dedicated served frequency band. It is shown that this approach satisfies high performance properties with a very compact implementation. The overall design has been supported with a full wave CAD method. The measured characteristics exhibit good coincidence with the computed ones and thus validate the broadband design approach. The high performance demands qualify this diplexer design for the implementation in modular feed systems for large radio antennas to provide multi access for serving several frequency bands simultaneously.", "title": "" }, { "docid": "a93f72f0c4cee436f86221d04fa96bc6", "text": "We propose a model to learn visually grounded word embeddings (vis-w2v) to capture visual notions of semantic relatedness. While word embeddings trained using text have been extremely successful, they cannot uncover notions of semantic relatedness implicit in our visual world. For instance, although \"eats\" and \"stares at\" seem unrelated in text, they share semantics visually. When people are eating something, they also tend to stare at the food. Grounding diverse relations like \"eats\" and \"stares at\" into vision remains challenging, despite recent progress in vision. We note that the visual grounding of words depends on semantics, and not the literal pixels. We thus use abstract scenes created from clipart to provide the visual grounding. We find that the embeddings we learn capture fine-grained, visually grounded notions of semantic relatedness. We show improvements over text-only word embeddings (word2vec) on three tasks: common-sense assertion classification, visual paraphrasing and text-based image retrieval. Our code and datasets are available online.", "title": "" }, { "docid": "15e034d722778575b43394b968be19ad", "text": "Elections are contests for the highest stakes in national politics and the electoral system is a set of predetermined rules for conducting elections and determining their outcome. Thus defined, the electoral system is distinguishable from the actual conduct of elections as well as from the wider conditions surrounding the electoral contest, such as the state of civil liberties, restraints on the opposition and access to the mass media. While all these aspects are of obvious importance to free and fair elections, the main interest of this study is the electoral system.", "title": "" }, { "docid": "baae0ce9d52f47386447b729ff174b62", "text": "Receptor for advanced glycation end products (RAGE) is a member of the immunoglobulin superfamily of cell surface molecules and engages diverse ligands relevant to distinct pathological processes. 
One class of RAGE ligands includes glycoxidation products, termed advanced glycation end products, which occur in diabetes, at sites of oxidant stress in tissues, and in renal failure and amyloidoses. RAGE also functions as a signal transduction receptor for amyloid beta peptide, known to accumulate in Alzheimer disease in both affected brain parenchyma and cerebral vasculature. Interaction of RAGE with these ligands enhances receptor expression and initiates a positive feedback loop whereby receptor occupancy triggers increased RAGE expression, thereby perpetuating another wave of cellular activation. Sustained expression of RAGE by critical target cells, including endothelium, smooth muscle cells, mononuclear phagocytes, and neurons, in proximity to these ligands, sets the stage for chronic cellular activation and tissue damage. In a model of accelerated atherosclerosis associated with diabetes in genetically manipulated mice, blockade of cell surface RAGE by infusion of a soluble, truncated form of the receptor completely suppressed enhanced formation of vascular lesions. Amelioration of atherosclerosis in these diabetic/atherosclerotic animals by soluble RAGE occurred in the absence of changes in plasma lipids or glycemia, emphasizing the contribution of a lipid- and glycemia-independent mechanism(s) to atherogenesis, which we postulate to be interaction of RAGE with its ligands. Future studies using mice in which RAGE expression has been genetically manipulated and with selective low molecular weight RAGE inhibitors will be required to definitively assign a critical role for RAGE activation in diabetic vasculopathy. However, sustained receptor expression in a microenvironment with a plethora of ligand makes possible prolonged receptor stimulation, suggesting that interaction of cellular RAGE with its ligands could be a factor contributing to a range of important chronic disorders.", "title": "" }, { "docid": "ce1384d061248cbb96e77ea482b2ba62", "text": "Preventable behaviors contribute to many life threatening health problems. Behavior-change technologies have been deployed to modify these, but such systems typically draw on traditional behavioral theories that overlook affect. We examine the importance of emotion tracking for behavior change. First, we conducted interviews to explore how emotions influence unwanted behaviors. Next, we deployed a system intervention, in which 35 participants logged information for a self-selected, unwanted behavior (e.g., smoking or overeating) over 21 days. 16 participants engaged in standard behavior tracking using a Fact-Focused system to record objective information about goals. 19 participants used an Emotion-Focused system to record emotional consequences of behaviors. Emotion-Focused logging promoted more successful behavior change and analysis of logfiles revealed mechanisms for success: greater engagement of negative affect for unsuccessful days and increased insight were key to motivating change. We present design implications to improve behavior-change technologies with emotion tracking.", "title": "" }, { "docid": "d5a9f4e5cf1f15a7e39e0b49e571b936", "text": "Article history: With the growth and evolu First received in February 6, 2005 and was under review for 9 months", "title": "" }, { "docid": "ec181b897706d101136dcbcef6e84de9", "text": "Working with large swarms of robots has challenges in calibration, sensing, tracking, and control due to the associated scalability and time requirements. 
Kilobots solve this through their ease of maintenance and programming, and are widely used in several research laboratories worldwide where their low cost enables large-scale swarm studies. However, the small, inexpensive nature of the Kilobots limits their range of capabilities as they are only equipped with a single sensor. In some studies, this limitation can be a source of motivation and inspiration, while in others it is an impediment. As such, we designed, implemented, and tested a novel system to communicate personalized location-and-state-based information to each robot, and receive information on each robot’s state. In this way, the Kilobots can sense additional information from a virtual environment in real time; for example, a value on a gradient, a direction toward a reference point or a pheromone trail. The augmented reality for Kilobots (ARK) system implements this in flexible base control software which allows users to define varying virtual environments within a single experiment using integrated overhead tracking and control. We showcase the different functionalities of the system through three demos involving hundreds of Kilobots. The ARK provides Kilobots with additional and unique capabilities through an open-source tool which can be implemented with inexpensive, off-the-shelf hardware.", "title": "" }, { "docid": "d1357b2e247d521000169dce16f182ee", "text": "Camera shake or target movement often leads to undesired blur effects in videos captured by a hand-held camera. Despite significant efforts having been devoted to video-deblur research, two major challenges remain: 1) how to model the spatio-temporal characteristics across both the spatial domain (i.e., image plane) and the temporal domain (i.e., neighboring frames) and 2) how to restore sharp image details with respect to the conventionally adopted metric of pixel-wise errors. In this paper, to address the first challenge, we propose a deblurring network (DBLRNet) for spatial-temporal learning by applying a 3D convolution to both the spatial and temporal domains. Our DBLRNet is able to capture jointly spatial and temporal information encoded in neighboring frames, which directly contributes to the improved video deblur performance. To tackle the second challenge, we leverage the developed DBLRNet as a generator in the generative adversarial network (GAN) architecture and employ a content loss in addition to an adversarial loss for efficient adversarial training. The developed network, which we name as deblurring GAN, is tested on two standard benchmarks and achieves the state-of-the-art performance.", "title": "" } ]
scidocsrr
b51e2fccc2b89a51e7e7c0396af34ec9
Using Agile Methods in Software Product Development: A Case Study
[ { "docid": "b5c5f5a92d3c110ffb09540d33f555a8", "text": "Agile methods continue to gain popularity. In particular, the Scrum method appears to be on the verge of becoming a de-facto standard in the industry, leading the so called Agile movement. While there are success stories and recommendations, there is little scientifically valid evidence of the challenges in the adoption of Agile methods in general, and Scrum in particular. Little, if anything, is empirically known about the application and adoption of Scrum in a multi-team and multi-project situation. The authors carried out an ethnographically informed longitudinal case study in industrial settings and closely followed how the Scrum method was adopted in a 20-person department, working in a simultaneous multi-project R&D environment. Altogether 10 challenges pertinent to the case of multi-team multi-project Scrum adoption were identified in the study. The authors contend that these results carry great relevance for other industrial teams. Future research avenues arising from the study are indicated.", "title": "" }, { "docid": "22d17576fef96e5fcd8ef3dd2fb0cc5f", "text": "I n a previous article (\" Agile Software Development: The Business of Innovation , \" Computer, Sept. 2001, pp. 120-122), we introduced agile software development through the problem it addresses and the way in which it addresses the problem. Here, we describe the effects of working in an agile style. Over recent decades, while market forces, systems requirements, implementation technology, and project staff were changing at a steadily increasing rate, a different development style showed its advantages over the traditional one. This agile style of development directly addresses the problems of rapid change. A dominant idea in agile development is that the team can be more effective in responding to change if it can • reduce the cost of moving information between people, and • reduce the elapsed time between making a decision to seeing the consequences of that decision. To reduce the cost of moving information between people, the agile team works to • place people physically closer, • replace documents with talking in person and at whiteboards, and • improve the team's amicability—its sense of community and morale— so that people are more inclined to relay valuable information quickly. To reduce the time from decision to feedback, the agile team • makes user experts available to the team or, even better, part of the team and • works incrementally. Making user experts available as part of the team gives developers rapid feedback on the implications to the user of their design choices. The user experts, seeing the growing software in its earliest stages, learn both what the developers misunderstood and also which of their requests do not work as well in practice as they had thought. The term agile, coined by a group of people experienced in developing software this way, has two distinct connotations. The first is the idea that the business and technology worlds have become turbulent , high speed, and uncertain, requiring a process to both create change and respond rapidly to change. The first connotation implies the second one: An agile process requires responsive people and organizations. Agile development focuses on the talents and skills of individuals and molds process to specific people and teams, not the other way around. 
The most important implication to managers working in the agile manner is that it places more emphasis on people factors in the project: amicability, talent, skill, and communication. These qualities become a primary concern …", "title": "" }, { "docid": "4073fc19e108b11c80b71dfb9cb73268", "text": "In today's fast-paced, fiercely competitive world of commercial new product development, speed and flexibility are essential. Companies are increasingly realizing that the old, sequential approach to developing new products simply won't get the job done. Instead, companies in Japan and the United States are using a holistic method—as in rugby, the ball gets passed within the team as it moves as a unit up the field. This holistic approach has six characteristics: built-in instability, self-organizing project teams, overlapping development phases, \"multilearning,\" subtle control, and organizational transfer of learning. The six pieces fit together like a jigsaw puzzle, forming a fast and flexible process for new product development. Just as important, the new approach can act as a change agent: it is a vehicle for introducing creative, market-driven ideas and processes into an old, rigid organization. Mr. Takeuchi is an associate professor and Mr. Nonaka, a professor at Hitotsubashi University in Japan. Mr. Takeuchi's research has focused on marketing and global competition. Mr. Nonaka has published widely in Japan on organizations, strategy, and marketing. The rules of the game in new product development are changing. Many companies have discovered that it takes more than the accepted basics of high quality, low cost, and differentiation to excel in today's competitive market. It also takes speed and flexibility. This change is reflected in the emphasis companies are placing on new products as a source of new sales and profits. At 3M, for example, products less than five years old account for 25% of sales. A 1981 survey of 700 U.S. companies indicated that new products would account for one-third of all profits in the 1980s, an increase from one-fifth in the 1970s. This new emphasis on speed and flexibility calls for a different approach for managing new product development. The traditional sequential or \"relay race\" approach to product development—exemplified by the National Aeronautics and Space Administration's phased program planning (PPP) system—may conflict with the goals of maximum speed and flexibility. Instead, a holistic or \"rugby\" approach—where a team tries to go the distance as a unit, passing the ball back and forth—may better serve today's competitive requirements. Under the old approach, a product development process moved like a relay race, with one group of functional specialists passing the baton to the next group. The project went sequentially from phase to phase: concept development, feasibility testing, product design, development process, pilot produc-", "title": "" } ]
[ { "docid": "6080612b8858d633c3f63a3d019aef58", "text": "Color images provide large information for human visual perception compared to grayscale images. Color image enhancement methods enhance the visual data to increase the clarity of the color image. It increases human perception of information. Different color image contrast enhancement methods are used to increase the contrast of the color images. The Retinex algorithms enhance the color images similar to the scene perceived by the human eye. Multiscale retinex with color restoration (MSRCR) is a type of retinex algorithm. The MSRCR algorithm results in graying out and halo artifacts at the edges of the images. So here the focus is on improving the MSRCR algorithm by combining it with contrast limited adaptive histogram equalization (CLAHE) using image.", "title": "" }, { "docid": "90dfa19b821aeab985a96eba0c3037d3", "text": "Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5–15 kg, medium carcasses 15.1–30 kg, medium/large carcasses 35–50 kg, large carcasses 55–70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. 
Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.", "title": "" }, { "docid": "55b009f860b96414b944f670efe61e44", "text": "In this paper we present NLML (Natural Language Markup Language), a markup language to describe the syntactic and semantic structure of any grammatically correct English expression. At first the related works are analyzed to demonstrate the necessity of the NLML: simple form, easy management and direct storage. Then the description of the English grammar with NLML is introduced in details in three levels: sentences (with different complexities, voices, moods, and tenses), clause (relative clause and noun clause) and phrase (noun phrase, verb phrase, prepositional phrase, adjective phrase, adverb phrase and predicate phrase). At last the application fields of the NLML in NLP are shown with two typical examples: NLOJM (Natural Language Object Modal in Java) and NLDB (Natural Language Database).", "title": "" }, { "docid": "3dfe5dbdd83f0c56f403884f38420ae7", "text": "There is an increasing interest in studying control systems employing multiple sensors and actuators that are geographically distributed. Communication is an important component of these distributed and networked control systems. Hence, there is a need to understand the interactions between the control components and the communication components of the distributed system. In this paper, we formulate a control problem with a communication channel connecting the sensor to the controller. Our task involves designing the channel encoder and channel decoder along with the controller to achieve different control objectives. We provide upper and lower bounds on the channel rate required to achieve these different control objectives. In many cases, these bounds are tight. In doing so, we characterize the \"information complexity\" of different control objectives.", "title": "" }, { "docid": "18b7c2a57ab593810574a6975d6dc72e", "text": "Explored the factors that influence knowledge and attitudes toward anemia in pregnancy (AIP) in southeastern Nigeria. We surveyed 1500 randomly selected women who delivered babies within 6 months of the survey using a questionnaire. Twelve focus group discussions were held with the grandmothers and fathers of the new babies, respectively. Six in-depth interviews were held with health workers in the study communities. Awareness of AIP was high. Knowledge of its prevention and management was poor with a median score of 10 points on a 50-point scale. Living close to a health facility (p = 0.031), having post-secondary education (p <0.001), being in paid employment (p = 0.017) and being older (p = 0.027) influenced knowledge of AIP. Practices for the prevention and management of AIP were affected by a high level of education (p = 0.034) and having good knowledge of AIP issues (p <0.001). The qualitative data revealed that unorthodox means were employed in response to anemia in pregnancy. This is often delayed until complications set in. Many viewed anemia as a normal phenomenon among pregnant women. AIP awareness is high among the populations. However, management is poor because of poor knowledge of signs and timely appropriate treatment. 
Prompt and appropriate management of AIP is germane for positive pregnancy outcomes. Anemia-related public education is an urgent need in Southeast Nigeria. Extra consideration of the diverse social development levels of the populations should be taken into account when designing new and improving current prevention and management programs for anemia in pregnancy.", "title": "" }, { "docid": "c1cc1fae0f01d148454e208f48b572a3", "text": "Wavelet transform has been widely used in many signal and image processing applications. Due to its wide adoption for time-critical applications, such as streaming and real-time signal processing, many acceleration techniques were developed during the past decade. Recently, the graphics processing unit (GPU) has gained much attention for accelerating computationally-intensive problems and many solutions of GPU-based discrete wavelet transform (DWT) have been introduced, but most of them did not fully leverage the potential of the GPU. In this paper, we present various state-of-the-art GPU optimization strategies in DWT implementation, such as leveraging shared memory, registers, warp shuffling instructions, and thread- and instruction-level parallelism (TLP, ILP), and finally elaborate our hybrid approach to further boost up its performance. In addition, we introduce a novel mixed-band memory layout for Haar DWT, where multi-level transform can be carried out in a single fused kernel launch. As a result, unlike recent GPU DWT methods that focus mainly on maximizing ILP, we show that the optimal GPU DWT performance can be achieved by hybrid parallelism combining both TLP and ILP together in a mixed-band approach. We demonstrate the performance of our proposed method by comparison with other CPU and GPU DWT methods.", "title": "" }, { "docid": "3e5e7e38068da120639c3fcc80227bf8", "text": "The ferric reducing antioxidant power (FRAP) assay was recently adapted to a microplate format. However, microplate-based FRAP (mFRAP) assays are affected by sample volume and composition. This work describes a calibration process for mFRAP assays which yields data free of volume effects. From the results, the molar absorptivity (ε) for the mFRAP assay was 141,698 M(-1) cm(-1) for gallic acid, 49,328 M(-1) cm(-1) for ascorbic acid, and 21,606 M(-1) cm(-1) for ammonium ferrous sulphate. The significance of ε (M(-1) cm(-1)) is discussed in relation to mFRAP assay sensitivity, minimum detectable concentration, and the dimensionless FRAP-value. Gallic acid showed 6.6 mol of Fe(2+) equivalents compared to 2.3 mol of Fe(+2) equivalents for ascorbic acid. Application of the mFRAP assay to Manuka honey samples (rated 5+, 10+, 15+, and 18+ Unique Manuka Factor; UMF) showed that FRAP values (0.54-0.76 mmol Fe(2+) per 100g honey) were strongly correlated with UMF ratings (R(2)=0.977) and total phenols content (R(2) = 0.982)whilst the UMF rating was correlated with the total phenols (R(2) = 0.999). In conclusion, mFRAP assay results were successfully standardised to yield data corresponding to 1-cm spectrophotometer which is useful for quality assurance purposes. The antioxidant capacity of Manuka honey was found to be directly related to the UMF rating.", "title": "" }, { "docid": "66c548d14007f82d2ab1c5337965e2ae", "text": "The objective of this paper is to provide a review of recent advances in automatic vibration- and audio-based fault diagnosis in machinery using condition monitoring strategies. 
It presents the most valuable techniques and results in this field and highlights the most profitable directions of research to present. Automatic fault diagnosis systems provide greater security in surveillance of strategic infrastructures, such as electrical substations and industrial scenarios, reduce downtime of machines, decrease maintenance costs, and avoid accidents which may have devastating consequences. Automatic fault diagnosis systems include signal acquisition, signal processing, decision support, and fault diagnosis. The paper includes a comprehensive bibliography of more than 100 selected references which can be used by researchers working in this field.", "title": "" }, { "docid": "04e478610728f0aae76e5299c28da25a", "text": "Single image super resolution is one of the most important topic in computer vision and image processing research, many convolutional neural networks (CNN) based super resolution algorithms were proposed and achieved advanced performance, especially in recovering image details, in which PixelCNN is the most representative one. However, due to the intensive computation requirement of PixelCNN model, running time remains a major challenge, which limited its wider application. In this paper, several modifications are proposed to improve PixelCNN based recursive super resolution model. First, a discrete logistic mixture likelihood is adopted, then a cache structure for generating process is proposed, with these modifications, numerous redundant computations are removed without loss of accuracy. Finally, a partial generating network is proposed for higher resolution generation. Experiments on CelebA dataset demonstrate the effectiveness the superiority of the proposed method.", "title": "" }, { "docid": "74f148aaf1dd6ee1fbfb4338aded64bf", "text": "Complexity is crucial to characterize tasks performed by humans through computer systems. Yet, the theory and practice of crowdsourcing currently lacks a clear understanding of task complexity, hindering the design of effective and efficient execution interfaces or fair monetary rewards. To understand how complexity is perceived and distributed over crowdsourcing tasks, we instrumented an experiment where we asked workers to evaluate the complexity of 61 real-world re-instantiated crowdsourcing tasks. We show that task complexity, while being subjective, is coherently perceived across workers; on the other hand, it is significantly influenced by task type. Next, we develop a high-dimensional regression model, to assess the influence of three classes of structural features (metadata, content, and visual) on task complexity, and ultimately use them to measure task complexity. Results show that both the appearance and the language used in task description can accurately predict task complexity. Finally, we apply the same feature set to predict task performance, based on a set of 5 years-worth tasks in Amazon MTurk. Results show that features related to task complexity can improve the quality of task performance prediction, thus demonstrating the utility of complexity as a task modeling property.", "title": "" }, { "docid": "e55f8ad65250902a53b1bbfe6f16d26c", "text": "Automatic key phrase extraction has many important applications including but not limited to summarization, cataloging/indexing, feature extraction for clustering and classification, and data mining. This paper presents a simple, yet effective algorithm (KP-Miner) for achieving this task. 
The result of an experiment carried out to investigate the effectiveness of this algorithm is also presented. In this experiment the devised algorithm is applied to six different datasets consisting of 481 documents. The results are then compared to two existing sophisticated machine learning based automatic keyphrase extraction systems. The results of this experiment show that the devised algorithm is comparable to both systems", "title": "" }, { "docid": "a671c6eff981b5e3a0466e53f22c4521", "text": "This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.", "title": "" }, { "docid": "74383319fc9dd814f77d8766fcf79a85", "text": "Although interactive learning puts the user into the loop, the learner remains mostly a black box for the user. Understanding the reasons behind queries and predictions is important when assessing how the learner works and, in turn, trust. Consequently, we propose the novel framework of explanatory interactive learning: in each step, the learner explains its interactive query to the user, and she queries of any active classifier for visualizing explanations of the corresponding predictions. We demonstrate that this can boost the predictive and explanatory powers of and the trust into the learned model, using text (e.g. SVMs) and image classification (e.g. neural networks) experiments as well as a user study.", "title": "" }, { "docid": "a688f040f616faff3db13be4b1c052df", "text": "Intracellular fucoidanase was isolated from the marine bacterium, Formosa algae strain KMM 3553. The first appearance of fucoidan enzymatic hydrolysis products in a cell-free extract was detected after 4 h of bacterial growth, and maximal fucoidanase activity was observed after 12 h of growth. The fucoidanase displayed maximal activity in a wide range of pH values, from 6.5 to 9.1. The presence of Mg2+, Ca2+ and Ba2+ cations strongly activated the enzyme; however, Cu2+ and Zn2+ cations had inhibitory effects on the enzymatic activity. The enzymatic activity of fucoidanase was considerably reduced after prolonged (about 60 min) incubation of the enzyme solution at 45 °C. The fucoidanase catalyzed the hydrolysis of fucoidans from Fucus evanescens and Fucus vesiculosus, but not from Saccharina cichorioides. The fucoidanase also did not hydrolyze carrageenan. Desulfated fucoidan from F. evanescens was hydrolysed very weakly in contrast to deacetylated fucoidan, which was hydrolysed more actively compared to the native fucoidan from F. evanescens. Analysis of the structure of the enzymatic products showed that the marine bacteria, F. 
algae, synthesized an α-l-fucanase with an endo-type action that is specific for 1→4-bonds in a polysaccharide molecule built up of alternating three- and four-linked α-l-fucopyranose residues sulfated mainly at position 2.", "title": "" }, { "docid": "1ce09062b1ced2cd643c04f7c075c4f1", "text": "We propose a new approach to the task of fine grained entity type classifications based on label embeddings that allows for information sharing among related labels. Specifically, we learn an embedding for each label and each feature such that labels which frequently co-occur are close in the embedded space. We show that it outperforms state-of-the-art methods on two fine grained entity-classification benchmarks and that the model can exploit the finer-grained labels to improve classification of standard coarse types.", "title": "" }, { "docid": "4800fd4c07c97f139d01f9d41398dd27", "text": "Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: “different instances but a similar viewpoint and category” and “different viewpoints of the same instance”. By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2% mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3% with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5%) to the ImageNet-supervised counterpart (24.4%) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.", "title": "" }, { "docid": "2ba975af095effcbbc4e98d7dc2172ec", "text": "People have strong intuitions about the influence objects exert upon one another when they collide. Because people's judgments appear to deviate from Newtonian mechanics, psychologists have suggested that people depend on a variety of task-specific heuristics. This leaves open the question of how these heuristics could be chosen, and how to integrate them into a unified model that can explain human judgments across a wide range of physical reasoning tasks. We propose an alternative framework, in which people's judgments are based on optimal statistical inference over a Newtonian physical model that incorporates sensory noise and intrinsic uncertainty about the physical properties of the objects being viewed. 
This noisy Newton framework can be applied to a multitude of judgments, with people's answers determined by the uncertainty they have for physical variables and the constraints of Newtonian mechanics. We investigate a range of effects in mass judgments that have been taken as strong evidence for heuristic use and show that they are well explained by the interplay between Newtonian constraints and sensory uncertainty. We also consider an extended model that handles causality judgments, and obtain good quantitative agreement with human judgments across tasks that involve different judgment types with a single consistent set of parameters.", "title": "" }, { "docid": "e4dab87627ca716325e411ea8c026cc6", "text": "Misregulated innate immune signaling and cell death form the basis of much human disease pathogenesis. Inhibitor of apoptosis (IAP) protein family members are frequently overexpressed in cancer and contribute to tumor cell survival, chemo-resistance, disease progression, and poor prognosis. Although best known for their ability to regulate caspases, IAPs also influence ubiquitin (Ub)-dependent pathways that modulate innate immune signaling via activation of nuclear factor κB (NF-κB). Recent research into IAP biology has unearthed unexpected roles for this group of proteins. In addition, the advances in our understanding of the molecular mechanisms that IAPs use to regulate cell death and innate immune responses have provided new insights into disease states and suggested novel intervention strategies. Here we review the functions assigned to those IAP proteins that act at the intersection of cell death regulation and inflammatory signaling.", "title": "" }, { "docid": "42616fa0c56be96e84dc86d463a926d3", "text": "Forensic dentistry delineates the overlap between the dental and the legal professions. Forensic identifi cations by their nature are multidisciplinary team eff orts. Odontologists can examine the structure of the teeth and jaws for clues that may support anthropological age estimates. Apart from dental identifi cation, forensic odontology is also applied in the investigation of crimes caused by dentition, such as bite marks. The importance of pedodontist in forensic odontology is to apply his expertise in various fi elds like child abuse and neglect, mass disaster, accidental and non-accidental oral trauma, age determination, and dental records. The aim of this paper is to discuss about the pedodontist perspective in forensic dentistry.", "title": "" }, { "docid": "282a6b06fb018fb7e2ec223f74345944", "text": "The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA.", "title": "" } ]
scidocsrr
b713b49d4da4b3c3367b9b14b5eb566c
IoT and Cloud Computing in Automation of Assembly Modeling Systems
[ { "docid": "e33dd9c497488747f93cfcc1aa6fee36", "text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.", "title": "" } ]
[ { "docid": "a722bc4688ec23f0547d192e5a41fc05", "text": "This study investigated the aggressive components of the dream content of 120 Spanish children and adolescents of 4 different age groups. The C. S. Hall and R. L. Van de Castle (1966) coding system was used to rate the number of dream characters and aggressions, and the content findings were analyzed via the indicators presented by G. W. Domhoff (1993, 1996, 2003). Results confirm the findings of previous studies of gender and age differences in dream content: Boys tend to have more aggressive dream content, which tends to decrease with age until reaching a pattern similar to the normative group; younger children, especially boys, tend to be victims of aggression more frequently than do older children. In addition, a data analysis procedure involving cumulative scoring of the aggression scale as well as nonparametric statistics yielded significant differences between boys and girls of the youngest group for severity of aggression.", "title": "" }, { "docid": "a1bff389a9a95926a052ded84c625a9e", "text": "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.", "title": "" }, { "docid": "27465b2c8ce92ccfbbda6c802c76838f", "text": "Nonlinear hyperelastic energies play a key role in capturing the fleshy appearance of virtual characters. Real-world, volume-preserving biological tissues have Poisson’s ratios near 1/2, but numerical simulation within this regime is notoriously challenging. In order to robustly capture these visual characteristics, we present a novel version of Neo-Hookean elasticity. Our model maintains the fleshy appearance of the Neo-Hookean model, exhibits superior volume preservation, and is robust to extreme kinematic rotations and inversions. We obtain closed-form expressions for the eigenvalues and eigenvectors of all of the system’s components, which allows us to directly project the Hessian to semipositive definiteness, and also leads to insights into the numerical behavior of the material. These findings also inform the design of more sophisticated hyperelastic models, which we explore by applying our analysis to Fung and Arruda-Boyce elasticity. We provide extensive comparisons against existing material models.", "title": "" }, { "docid": "aa61fb7a263fc3e27914f3763c9ad464", "text": "Analog computation architectures such as artificial neural network models have received phenomenal attention lately. Massive parallel processing is natural in neural networks, which also meets the development trend of computer science. Because parallelism is a straight way to speed up computation. Thus, considering parallel algorithms on neural networks is quite reasonable. 
Neural network models have been shown to be able to find “good” solutions for some optimization problems in a short time [5]. But, it cannot guarantee finding the real optimal solution except by techniques similar to simulated annealing. Under such circumstances, there is no systematic way to determine the annealing schedule and the time needed for convergence. Vergis et al. [11] showed that analog computation can be simulated efficiently by a Turing machine in polynomial time. Unless P = NP, NP-complete problems cannot be solved by analog computation in polynomial time. But, from the view of time complexity, analog computation is better than digital computation in one perspective. The basic operation", "title": "" }, { "docid": "6b881473c6d4425c26b9de053c30b703", "text": "Current content-based video copy detection approaches mostly concentrate on the visual cues and neglect the audio information. In this paper, we attempt to tackle the video copy detection task resorting to audio information, which is equivalently important as well as visual information in multimedia processing. Firstly, inspired by the bag-of-visual-words model, a bag-of-audio-words (BoA) representation is proposed to characterize each audio frame. Different from naive single-based modeling audio retrieval approaches, BoA is a high-level model due to its perceptual and semantical property. Within the BoA model, a coherency vocabulary indexing structure is adopted to achieve more efficient and effective indexing than the single vocabulary of the standard BoW model. The coherency vocabulary takes advantage of multiple audio features by computing their co-occurrence across different feature spaces. By enforcing the tight coherency constraint across feature spaces, the coherency vocabulary makes the BoA model more discriminative and robust to various audio transforms. A 2D Hough transform is then applied to aggregate scores from matched audio segments. The segments that fall into the peak bin are identified as the copy segments in the reference video. In addition, we also accomplish video copy detection from both audio and visual cues by performing four late fusion strategies to demonstrate the complementarity of audio and visual information in video copy detection. Intensive experiments are conducted on the large-scale dataset of TRECVID 2009 and competitive results are achieved.", "title": "" }, { "docid": "4d3468bb14b7ad933baac5c50feec496", "text": "Conventional material removal techniques, like CNC milling, have been proven to be able to tackle nearly any machining challenge. On the other hand, the major drawback of using conventional CNC machines is the restricted working area and their produced shape limitations. From a conceptual point of view, industrial robot technology could provide an excellent base for machining, being both flexible and cost efficient. However, industrial machining robots lack absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process forces and lack reliable programming and simulation tools to ensure right-first-time machining at production startups. This paper reviews the penetration of industrial robots in the challenging field of machining.", "title": "" }, { "docid": "e1222c5c6c4134b9b90ff2feea6efae2", "text": "Character recognition is one of the challenging tasks of the pattern recognition and machine learning arena. Though a level of saturation has been obtained in machine printed character recognition, there still remains a void while recognizing handwritten scripts. 
We, in this paper, have summarized all the existing research efforts on the recognition of printed as well as handwritten Odia alphanumeric characters. Odia is a classical and popular language in the Indian subcontinent used by more than 50 million people. In spite of its rich history, popularity and usefulness, not much research efforts have been made to achieve human level accuracy in case of Odia OCR. This review is expected to serve a benchmark reference for research on Odia character recognition and inspire OCR research communities to make tangible impact on its growth. Here several preprocessing methodologies, segmentation approaches, feature extraction techniques and classifier models with their respective accuracies so far reported are critically reviewed, evaluated and compared. The shortcomings and deficiencies in the current state-of-the-art are discussed in detail for each stage of character recognition. A new handwritten alphanumeric character database for Odia is created and reported in this paper in order to address the paucity of benchmark Odia database. From the existing research work, future research paradigms on Odia character recognition are suggested. We hope that such a comprehensive survey on Odia character recognition will serve its purpose of being a solid reference and help creating high accuracy Odia character recognition systems.", "title": "" }, { "docid": "a3ebadf449537b5df8de3c5ab96c74cb", "text": "Do conglomerate firms have the ability to allocate resources efficiently across business segments? We address this question by comparing the performance of firms that follow passive benchmark strategies in their capital allocation process to those that actively deviate from those benchmarks. Using three measures of capital allocation style to capture various aspects of activeness, we show that active firms have a lower average industry-adjusted profitability than passive firms. This result is robust to controlling for potential endogeneity using matching analysis and regression analysis with firm fixed effects. Moreover, active firms obtain lower valuation and lower excess stock returns in subsequent periods. Our findings suggest that, on average, conglomerate firms that actively allocate resources across their business segments do not do so efficiently and that the stock market does not fully incorporate information revealed in the internal capital allocation process. Guedj and Huang are from the McCombs School of Business, University of Texas at Austin. Guedj: [email protected] and (512) 471-5781. Huang: [email protected] and (512) 232-9375. Sulaeman is from the Cox School of Business, Southern Methodist University, [email protected] and (214) 768-8284. The authors thank Alexander Butler, Amar Gande, Mark Leary, Darius Miller, Maureen O’Hara, Owen Lamont, Gordon Phillips, Mike Roberts, Oleg Rytchkov, Gideon Saar, Zacharias Sautner, Clemens Sialm, Rex Thompson, Sheridan Titman, Yuhai Xuan, participants at the Financial Research Association meeting and seminars at Cornell University, Southern Methodist University, the University of Texas at Austin, and the University of Texas at Dallas for their helpful comments.", "title": "" }, { "docid": "ab3fb8980fa8d88e348f431da3d21ed4", "text": "PIECE (Plant Intron Exon Comparison and Evolution) is a web-accessible database that houses intron and exon information of plant genes. 
PIECE serves as a resource for biologists interested in comparing intron-exon organization and provides valuable insights into the evolution of gene structure in plant genomes. Recently, we updated PIECE to a new version, PIECE 2.0 (http://probes.pw.usda.gov/piece or http://aegilops.wheat.ucdavis.edu/piece). PIECE 2.0 contains annotated genes from 49 sequenced plant species as compared to 25 species in the previous version. In the current version, we also added several new features: (i) a new viewer was developed to show phylogenetic trees displayed along with the structure of individual genes; (ii) genes in the phylogenetic tree can now be also grouped according to KOG (The annotation of Eukaryotic Orthologous Groups) and KO (KEGG Orthology) in addition to Pfam domains; (iii) information on intronless genes are now included in the database; (iv) a statistical summary of global gene structure information for each species and its comparison with other species was added; and (v) an improved GSDraw tool was implemented in the web server to enhance the analysis and display of gene structure. The updated PIECE 2.0 database will be a valuable resource for the plant research community for the study of gene structure and evolution.", "title": "" }, { "docid": "44101197b4db055c667da3d86a820fd3", "text": "A simple approach for obstacle detection and collision avoidance of an autonomous flying quadcopter using low-cost ultrasonic sensors and simple data fusion is presented here. The approach has been implemented and tested in a self-developed quadcopter and its evaluation shows the general realizability as well as the drawbacks of this approach. In this paper, we propose a complete MICRO-UNMANNED AERIAL VEHICLE (MUAV) platform including hardware setup and processing pipeline-that is able to perceive obstacles in (almost) all directions in its surrounding. In this paper, we propose a complete micro aerial vehicle platform—including hardware setup and processing pipeline—that is able to perceive obstacles in (almost) all directions in its surrounding. Quadcopter is equipped with ultrasonic sensor. All signals from sensors are processed by Arduino microcontroller board. [5] Output from Arduino microcontroller board used to control Quadcopter propellers. [5] Keywords— Obstacle Detection, Collision Avoidance, PID controller programming, components.", "title": "" }, { "docid": "c0cec61d37c4e0fe1fa82f8c182c5fc7", "text": "PURPOSE OF REVIEW\nCompassion has been recognized as a key aspect of high-quality healthcare, particularly in palliative care. This article provides a general review of the current understanding of compassion in palliative care and summarizes emergent compassionate initiatives in palliative care at three interdependent levels: compassion for patients, compassion in healthcare professionals, and compassionate communities at the end of life.\n\n\nRECENT FINDINGS\nCompassion is a constructive response to suffering that enhances treatment outcomes, fosters the dignity of the recipient, and provides self-care for the giver. Patients and healthcare professionals value compassion and perceive a general lack of compassion in healthcare systems. Compassion for patients and for professionals' self-care can be trained and implemented top-down (institutional policies) and bottom-up (compassion training). 
'Compassionate communities' is an important emerging movement that complements regular healthcare and social services with a community-level approach to offer compassionate care for people at the end of life.\n\n\nSUMMARY\nCompassion can be enhanced through diverse methodologies at the organizational, professional, and community levels. This enhancement of compassion has the potential to improve quality of palliative care treatments, enhance healthcare providers' satisfaction, and reduce healthcare costs.", "title": "" }, { "docid": "07b2355844efc85862fb5b8122be6edf", "text": "As with other types of evidence, the courts make no presumption that digital evidence is reliable without some evidence of empirical testing in relation to the theories and techniques associated with its production. The issue of reliability means that courts pay close attention to the manner in which electronic evidence has been obtained and in particular the process in which the data is captured and stored. Previous process models have tended to focus on one particular area of digital forensic practice, such as law enforcement, and have not incorporated a formal description. We contend that this approach has prevented the establishment of generally-accepted standards and processes that are urgently needed in the domain of digital forensics. This paper presents a generic process model as a step towards developing such a generally-accepted standard for a fundamental digital forensic activity–the acquisition of digital evidence.", "title": "" }, { "docid": "26439bd538c8f0b5d6fba3140e609aab", "text": "A planar antenna with a broadband feeding structure is presented and analyzed for ultrawideband applications. The proposed antenna consists of a suspended radiator fed by an n-shape microstrip feed. Study shows that this antenna achieves an impedance bandwidth from 3.1-5.1 GHz (48%) for a reflection of coefficient of iotaS11iota < -10 dB, and an average gain of 7.7 dBi. Stable boresight radiation patterns are achieved across the entire operating frequency band, by suppressing the high order mode resonances. This design exhibits good mechanical tolerance and manufacturability.", "title": "" }, { "docid": "b5a349b6d805c2b5afac86bfe22050df", "text": "By setting apart the two functions of a support vector machine: separation of points by a nonlinear surface in the original space of patterns, and maximizing the distance between separating planes in a higher dimensional space, we are able to deene indeenite, possibly discontinuous, kernels, not necessarily inner product ones, that generate highly nonlin-ear separating surfaces. Maximizing the distance between the separating planes in the higher dimensional space is surrogated by support vector suppression, which is achieved by minimizing any desired norm of support vector multipliers. The norm may be one induced by the separation kernel if it happens to be positive deenite, or a Euclidean or a polyhe-dral norm. The latter norm leads to a linear program whereas the former norms lead to convex quadratic programs, all with an arbitrary separation kernel. A standard support vector machine can be recovered by using the same kernel for separation and support vector suppression. On a simple test example, all models perform equally well when a positive deenite kernel is used. 
When a negative definite kernel is used, we are unable to solve the nonconvex quadratic program associated with a conventional support vector machine, while all other proposed models remain convex and easily generate a surface that separates all given points.", "title": "" }, { "docid": "71cac5680dafbc3c56dbfffa4472b67a", "text": "Three-dimensional printing has significant potential as a fabrication method in creating scaffolds for tissue engineering. The applications of 3D printing in the field of regenerative medicine and tissue engineering are limited by the variety of biomaterials that can be used in this technology. Many researchers have developed novel biomaterials and compositions to enable their use in 3D printing methods. The advantages of fabricating scaffolds using 3D printing are numerous, including the ability to create complex geometries, porosities, co-culture of multiple cells, and incorporate growth factors. In this review, recently-developed biomaterials for different tissues are discussed. Biomaterials used in 3D printing are categorized into ceramics, polymers, and composites. Due to the nature of 3D printing methods, most of the ceramics are combined with polymers to enhance their printability. Polymer-based biomaterials are 3D printed mostly using extrusion-based printing and have a broader range of applications in regenerative medicine. The goal of tissue engineering is to fabricate functional and viable organs and, to achieve this, multiple biomaterials and fabrication methods need to be researched.", "title": "" }, { "docid": "0b0e9d5bedcb24a65a9a43b6b0875860", "text": "Purpose – This paper summarizes and discusses the results from the LIVING LAB design study, a project within the 7th Framework Programme of the European Union. The aim of this project was to develop the conceptual design of the LIVING LAB Research Infrastructure that will be used to research human interaction with, and stimulate the adoption of, sustainable, smart and healthy innovations around the home. Design/methodology/approach – A LIVING LAB is a combined lab-/household system, analysing existing product-service-systems as well as technical and socioeconomic influences focused on the social needs of people, aiming at the development of integrated technical and social innovations and simultaneously promoting the conditions of sustainable development (highest resource efficiency, highest user orientation, etc.). This approach allows the development and testing of sustainable domestic technologies, while putting the user on centre stage. Findings – As this paper discusses the design study, no actual findings can be presented here but the focus is on presenting the research approach. Originality/value – The two elements (real homes and living laboratories) of this approach are what make the LIVING LAB research infrastructure unique. The research conducted in LIVING LAB will be innovative in several respects. First, it will contribute to market innovation by producing breakthroughs in sustainable domestic technologies that will be easy to install, user friendly and that meet environmental performance standards in real life. 
Second, research from LIVING LAB will contribute to innovation in practice by pioneering new forms of in-context, user-centred research, including long-term and cross-cultural research.", "title": "" }, { "docid": "5e0cff7f2b8e5aa8d112eacf2f149d60", "text": "THEORIES IN AI FALL INTO TWO broad categories: mechanism theories and content theories. Ontologies are content theories about the sorts of objects, properties of objects, and relations between objects that are possible in a specified domain of knowledge. They provide potential terms for describing our knowledge about the domain. In this article, we survey the recent development of the field of ontologies in AI. We point to the somewhat different roles ontologies play in information systems, natural-language understanding, and knowledge-based systems. Most research on ontologies focuses on what one might characterize as domain factual knowledge, because knowledge of that type is particularly useful in natural-language understanding. There is another class of ontologies that are important in KBS—one that helps in sharing knowledge about reasoning strategies or problem-solving methods. In a follow-up article, we will focus on method ontologies.", "title": "" }, { "docid": "60bdd255a19784ed2d19550222e61b69", "text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.", "title": "" }, { "docid": "7256d6c5bebac110734275d2f985ab31", "text": "The location-based social networks (LBSN) enable users to check in their current location and share it with other users. The accumulated check-in data can be employed for the benefit of users by providing personalized recommendations. In this paper, we propose a context-aware location recommendation system for LBSNs using a random walk approach. Our proposed approach considers the current context (i.e., current social relations, personal preferences and current location) of the user to provide personalized recommendations. We build a graph model of LBSNs for performing a random walk approach with restart. Random walk is performed to calculate the recommendation probabilities of the nodes. A list of locations is recommended to users after ordering the nodes according to the estimated probabilities. We compare our algorithm, CLoRW, with popularity-based, friend-based and expert-based baselines, a user-based collaborative filtering approach and a similar work in the literature. 
According to experimental results, our algorithm outperforms these approaches in all of the test cases.", "title": "" }, { "docid": "d91e3f1a92052d48b8033e3d9c3dd695", "text": "This work investigates the impact of syntactic features in a completely unsupervised semantic relation extraction experiment. Automated relation extraction deals with identifying semantic relation instances in a text and classifying them according to the type of relation. This task is essential in information and knowledge extraction and in knowledge base population. Supervised relation extraction systems rely on annotated examples [ , – , ] and extract di erent kinds of features from the training data, and eventually from external knowledge sources. The types of extracted relations are necessarily limited to a pre-defined list. In Open Information Extraction (OpenIE) [ , ] relation types are inferred directly from the data: concept pairs representing the same relation are grouped together and relation labels can be generated from context segments or through labeling by domain experts [ , , ]. A commonly used method [ , ] is to represent entity couples by a pair-pattern matrix, and cluster relation instances according to the similarity of their distribution over patterns. Pattern-based approaches [ , , , , ] typically use lexical context patterns, assuming that the semantic relation between two entities is explicitly mentioned in the text. Patterns can be defined manually [ ], obtained by Latent Relational Analysis [ ], or from a corpus by sequential pattern mining [ , , ]. Previous works, especially in the biomedical domain, have shown that not only lexical patterns, but also syntactic dependency trees can be beneficial in supervised and semi-supervised relation extraction [ , , – ]. Early experiments on combining lexical patterns with di erent types of distributional information in unsupervised relation clustering did not bring significant improvement [ ]. The underlying di culty is that while supervised classifiers can learn to weight attributes from di erent sources, it is not trivial to combine di erent types of features in a single clustering feature space. In our experiments, we propose to combine syntactic features with sequential lexical patterns for unsupervised clustering of semantic relation instances in the context of (NLP-related) scientific texts. We replicate the experiments of [ ] and augment them with dependency-based syntactic features. We adopt a pairpattern matrix for clustering relation instances. The task can be described as follows: if a1, a2, b1, b2 are pre-annotated domain concepts extracted from a corpus, we would like to classify concept pairs a = (a1, a2) and b = (b1, b2) in homogeneous groups according to their semantic relation. We need an e cient", "title": "" } ]
scidocsrr
c771b5e6de457ce893060e7b297d5764
Design Automation for Binarized Neural Networks: A Quantum Leap Opportunity?
[ { "docid": "ab7a69accb17ff99642ab225facec95d", "text": "It is challenging to adopt computing-intensive and parameter-rich Convolutional Neural Networks (CNNs) in mobile devices due to limited hardware resources and low power budgets. To support multiple concurrently running applications, one mobile device needs to perform multiple CNN tests simultaneously in real-time. Previous solutions cannot guarantee a high enough frame rate when serving multiple applications with reasonable hardware and power cost. In this paper, we present a novel process-in-memory architecture to process emerging binary CNN tests in Wide-IO2 DRAMs. Compared to state-of-the-art accelerators, our design improves CNN test performance by 4× ∼ 11× with small hardware and power overhead.", "title": "" }, { "docid": "0d4b9fe319c7ca3ffcd6974ccf9b2fbd", "text": "Research has shown that convolutional neural networks contain significant redundancy, and high classification accuracy can be obtained even when weights and activations are reduced from floating point to binary values. In this paper, we present FINN, a framework for building fast and flexible FPGA accelerators using a flexible heterogeneous streaming architecture. By utilizing a novel set of optimizations that enable efficient mapping of binarized neural networks to hardware, we implement fully connected, convolutional and pooling layers, with per-layer compute resources being tailored to user-provided throughput requirements. On a ZC706 embedded FPGA platform drawing less than 25 W total system power, we demonstrate up to 12.3 million image classifications per second with 0.31 μs latency on the MNIST dataset with 95.8% accuracy, and 21906 image classifications per second with 283 μs latency on the CIFAR-10 and SVHN datasets with respectively 80.1% and 94.9% accuracy. To the best of our knowledge, ours are the fastest classification rates reported to date on these benchmarks.", "title": "" }, { "docid": "5495aeaa072a1f8f696298ebc7432045", "text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.", "title": "" }, { "docid": "395afccf9891cfcc8e14d82a6e968918", "text": "In this paper, we present an ultra-low-power smart visual sensor architecture. 
A 10.6-μW low-resolution contrast-based imager featuring internal analog preprocessing is coupled with an energy-efficient quad-core cluster processor that exploits near-threshold computing within a few milliwatt power envelope. We demonstrate the capability of the smart camera on a moving object detection framework. The computational load is distributed among mixed-signal pixel and digital parallel processing. Such local processing reduces the amount of digital data to be sent out of the node by 91%. Exploiting context aware analog circuits, the imager only dispatches meaningful postprocessed data to the processing unit, lowering the sensor-to-processor bandwidth by 31× with respect to transmitting a full pixel frame. To extract high-level features, an event-driven approach is applied to the sensor data and optimized for parallel runtime execution. A 57.7× system energy saving is reached through the event-driven approach with respect to frame-based processing, on a low-power MCU node. The near-threshold parallel processor further reduces the processing energy cost by 6.64×, achieving an overall system energy cost of 1.79 μJ per frame, which results to be 21.8× and up to 383× lower than, respectively, an event-based imaging system based on an asynchronous visual sensor and a traditional frame-based smart visual sensor.", "title": "" } ]
[ { "docid": "5b6daefbefd44eea4e317e673ad91da3", "text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.", "title": "" }, { "docid": "0b58503e8b2ccc606cb1b45f542ba97a", "text": "Fingerprint images generally either contain only a single fingerprint or a set of non-overlapped fingerprints (e.g., slap fingerprints). However, there are situations where more than one fingerprint overlap on each other. Such situations are frequently encountered when latent fingerprints are lifted from crime scenes or residue fingerprints are left on fingerprint sensors. Overlapped fingerprints constitute a serious challenge to existing fingerprint recognition techniques, since these techniques are designed under the assumption that fingerprints have been properly segmented. In this paper, a novel algorithm is proposed to separate overlapped fingerprints into component or individual fingerprints. We first use local Fourier transform to estimate an initial overlapped orientation field, which contains at most two candidate orientations at each location. Then relaxation labeling technique is employed to label each candidate orientation as one of two classes. Based on the labeling result, we separate the initial overlapped orientation field into two orientation fields. Finally, the two fingerprints are obtained by enhancing the overlapped fingerprint using Gabor filters tuned to these two component separated orientation fields, respectively. Experimental results indicate that the algorithm leads to a good separation of overlapped fingerprints.", "title": "" }, { "docid": "6021388395ddd784422a22d30dac8797", "text": "Introduction: The European Directive 2013/59/EURATOM requires patient radiation dose information to be included in the medical report of radiological procedures. To provide effective communication to the patient, it is necessary to first assess the patient's level of knowledge regarding medical exposure. The goal of this work is to survey patients’ current knowledge level of both medical exposure to ionizing radiation and professional disciplines and communication means used by patients to garner information. Material and Methods: A questionnaire was designed comprised of thirteen questions: 737 patients participated in the survey. The data were analysed based on population age, education, and number of radiological procedures received in the three years prior to survey. Results: A majority of respondents (56.4%) did not know which modality uses ionizing radiation. 
74.7% had never discussed with healthcare professionals the risk concerning their medical radiological procedures. 70.1% were not aware of the professionals that have expertise to discuss the use of ionizing radiation for medical purposes, and 84.7% believe it is important to have the radiation dose information stated in the medical report. Conclusion: Patients agree with new regulations that it is important to know the radiation level related to the medical exposure, but there is little awareness in terms of which modalities use X-Rays and the professionals and channels that can help them to better understand the exposure information. To plan effective communication, it is essential to devise methods and adequate resources for key professionals (medical physicists, radiologists, referring physicians) to convey correct and effective information.", "title": "" }, { "docid": "76547fb01f5d9ede8731a3c22a69ec87", "text": "This paper explores the use of monads to structure functional programs. No prior knowledge of monads or category theory is required.\nMonads increase the ease with which programs may be modified. They can mimic the effect of impure features such as exceptions, state, and continuations; and also provide effects not easily achieved with such features. The types of a program reflect which effects occur.\nThe first section is an extended example of the use of monads. A simple interpreter is modified to support various extra features: error messages, state, output, and non-deterministic choice. The second section describes the relation between monads and the continuation-passing style. The third section sketches how monads are used in a compiler for Haskell that is written in Haskell.", "title": "" }, { "docid": "60161ef0c46b4477f0cf35356bc3602c", "text": "Differential privacy is a formal mathematical framework for quantifying and managing privacy risks. It provides provable privacy protection against a wide range of potential attacks, including those
currently unforeseen. Differential privacy is primarily studied in the context of the collection, analysis, and release of aggregate statistics. These range from simple statistical estimations, such as averages, to machine learning. Tools for differentially private analysis are now in early stages of implementation and use across a variety of academic, industry, and government settings. Interest in the concept is growing among potential users of the tools, as well as within legal and policy communities, as it holds promise as a potential approach to satisfying legal requirements for privacy protection when handling personal information. In particular, differential privacy may be seen as a technical solution for analyzing and sharing data while protecting the privacy of individuals in accordance with existing legal or policy requirements for de-identification or disclosure limitation. This primer seeks to introduce the concept of differential privacy and its privacy implications to non-technical audiences. It provides a simplified and informal, but mathematically accurate, description of differential privacy. Using intuitive illustrations and limited mathematical formalism, it discusses the definition of differential privacy, how differential privacy addresses privacy risks, how differentially private analyses are constructed, and how such analyses can be used in practice. A series of illustrations is used to show how practitioners and policymakers can conceptualize the guarantees provided by differential privacy. These illustrations are also used to explain related concepts, such as composition (the accumulation of risk across multiple analyses), privacy loss parameters, and privacy budgets. This primer aims to provide a foundation that can guide future decisions when analyzing and sharing statistical data about individuals, informing individuals about the privacy protection they will be afforded, and designing policies and regulations for robust privacy protection.", "title": "" }, { "docid": "7981acb4e72343a960803761929f4179", "text": "DIBCO 2017 is the international Competition on Document Image Binarization organized in conjunction with the ICDAR 2017 conference. 
The general objective of the contest is to identify current advances in document image binarization of machine-printed and handwritten document images using performance evaluation measures that are motivated by document image analysis and recognition requirements. This paper describes the competition details including the evaluation measures used as well as the performance of the 26 submitted methods along with a brief description of each method.", "title": "" }, { "docid": "f8daf84baa19c438d22a5274f0393f08", "text": "We describe and evaluate methods for learning to forecast forthcoming events of interest from a corpus containing 22 years of news stories. We consider the examples of identifying significant increases in the likelihood of disease outbreaks, deaths, and riots in advance of the occurrence of these events in the world. We provide details of methods and studies, including the automated extraction and generalization of sequences of events from news corpora and multiple web resources. We evaluate the predictive power of the approach on real-world events withheld from the system.", "title": "" }, { "docid": "799c839fad857c1ba90a9905f1b1d544", "text": "Much of the research published in the property discipline consists of work utilising quantitative methods. While research gained using quantitative methods, if appropriately designed and rigorous, leads to results which are typically generalisable and quantifiable, it does not allow for a rich and in-depth understanding of a phenomenon. This is especially so if a researcher’s aim is to uncover the issues or factors underlying that phenomenon. Such an aim would require using a qualitative research methodology, and possibly an interpretive as opposed to a positivist theoretical perspective. The purpose of this paper is to provide a general overview of qualitative methodologies with the aim of encouraging a broadening of methodological approaches to overcome the positivist methodological bias which has the potential of inhibiting property behavioural research.", "title": "" }, { "docid": "e0d040efd131db568d875b80c6adc111", "text": "Familism is a cultural value that emphasizes interdependent family relationships that are warm, close, and supportive. We theorized that familism values can be beneficial for romantic relationships and tested whether (a) familism would be positively associated with romantic relationship quality and (b) this association would be mediated by less attachment avoidance. Evidence indicates that familism is particularly relevant for U.S. Latinos but is also relevant for non-Latinos. Thus, we expected to observe the hypothesized pattern in Latinos and explored whether the pattern extended to non-Latinos of European and East Asian cultural background. A sample of U.S. participants of Latino (n 1⁄4 140), European (n 1⁄4 176), and East Asian (n 1⁄4 199) cultural background currently in a romantic relationship completed measures of familism, attachment, and two indices of romantic relationship quality, namely, partner support and partner closeness. As predicted, higher familism was associated with higher partner support and partner closeness, and these associations were mediated by lower attachment avoidance in the Latino sample. This pattern was not observed in the European or East Asian background samples. The implications of familism for relationships and psychological processes relevant to relationships in Latinos and non-Latinos are discussed. 
", "title": "" }, { "docid": "4ea8351c57e4581bfdab4c7cd357c90a", "text": "Hierarchies have long been used for organization, summarization, and access to information. In this paper we define summarization in terms of a probabilistic language model and use the definition to explore a new technique for automatically generating topic hierarchies by applying a graph-theoretic algorithm, which is an approximation of the Dominating Set Problem. The algorithm efficiently chooses terms according to a language model. We compare the new technique to previous methods proposed for constructing topic hierarchies including subsumption and lexical hierarchies, as well as the top TF.IDF terms. Our results show that the new technique consistently performs as well as or better than these other techniques. They also show the usefulness of hierarchies compared with a list of terms.", "title": "" }, { "docid": "919f42363fed69dc38eba0c46be23612", "text": "Large amounts of heterogeneous medical data have become available in various healthcare organizations (payers, providers, pharmaceuticals). Those data could be an enabling resource for deriving insights for improving care delivery and reducing waste. The enormity and complexity of these datasets present great challenges in analyses and subsequent applications to a practical clinical environment. In this tutorial, we introduce the characteristics and related mining challenges on dealing with big medical data. Many of those insights come from medical informatics community, which is highly related to data mining but focuses on biomedical specifics. We survey various related papers from data mining venues as well as medical informatics venues to share with the audiences key problems and trends in healthcare analytics research, with different applications ranging from clinical text mining, predictive modeling, survival analysis, patient similarity, genetic data analysis, and public health. The tutorial will include several case studies dealing with some of the important healthcare applications.", "title": "" }, { "docid": "9694bc859dd5295c40d36230cf6fd1b9", "text": "In the past two decades, the synthetic style and fashion drug \"crystal meth\" (\"crystal\", \"meth\"), chemically representing the crystalline form of the methamphetamine hydrochloride, has become more and more popular in the United States, in Eastern Europe, and just recently in Central and Western Europe. \"Meth\" is cheap, easy to synthesize and to market, and has an extremely high potential for abuse and dependence. As a strong sympathomimetic, \"meth\" has the potency to switch off hunger, fatigue and, pain while simultaneously increasing physical and mental performance. The most relevant side effects are heart and circulatory complaints, severe psychotic attacks, personality changes, and progressive neurodegeneration. 
Another effect is \"meth mouth\", defined as serious tooth and oral health damage after long-standing \"meth\" abuse; this condition may become increasingly relevant in dentistry and oral- and maxillofacial surgery. There might be an association between general methamphetamine abuse and the development of osteonecrosis, similar to the medication-related osteonecrosis of the jaws (MRONJ). Several case reports concerning \"meth\" patients after tooth extractions or oral surgery have presented clinical pictures similar to MRONJ. This overview summarizes the most relevant aspect concerning \"crystal meth\" abuse and \"meth mouth\".", "title": "" }, { "docid": "d2ccb98fab55a9870a7018df3817337c", "text": "This paper focuses on the design, modelling and hovering control of a tail-sitter with single thrust-vectored propeller which possesses the inherent advantages of both fixed wing and rotary wing unmanned aerial vehicles (UAVs). The developed tail-sitter requires only the same number of actuators as a normal fixed wing aircraft and achieves attitude control through deflections of the thrust-vectored propeller and ailerons. Thrust vectoring is realized by mounting a simple gimbal mechanism beneath the propeller motor. Both the thrust vector model and aerodynamics model are established, which leads to a complete nonlinear model of the tail-sitter in hovering state. Quaternion is applied for attitude description to avoid the singularity problem and improve computation efficiency. Through reasonable assumptions, a simplified model of the tail-sitter is obtained, based on which a backstepping controller is designed using the Lyapunov stability theory. Experimental results are presented to demonstrate the effectiveness of the proposed control scheme.", "title": "" }, { "docid": "faf4eeaaf3e8516ac65543c0bc5e50d6", "text": "Service Oriented Architecture facilitates more feature as compared to legacy architecture which makes this architecture widely accepted by the industry. Service oriented architecture provides feature like reusability, composability, distributed deployment. Service of SOA is governed by SOA governance board in which they provide approval to create the services and also provide space to expose the particular services. Sometime many services are kept in a repository which creates service identification issue. Service identification is one of the most critical aspects in service oriented architecture. The services must be defined or identified keeping reuse and usage in different business contexts in mind. Rigorous review of Identified service should be done prior to development of the services. Identification of the authenticated service is challenging to development teams due to several reasons such as lack of business process documentation, lack of expert analyst, and lack of business executive involvement, lack of reuse of services, lack of right decision to choose the appropriate service. In some of the cases we have replica of same service exist, which creates difficulties in service identification. Existing design approaches of SOA doesn't take full advantage whereas proposed model is compatible more advantageous and increase the performance of the services. This paper proposes a model which will help in clustering the service repository based on service functionality. Service identification will be easy if we follow distributed repository based on functionality for our services. 
Generally in case of web services where service response time should be minimal, searching in whole repository delays response time. The proposed model will reduce the response time of the services and will also helpful in identifying the correct services within the specified time.", "title": "" }, { "docid": "49df721b5115ad7d3f91b6212dbb585e", "text": "We first present a minimal feature set for transition-based dependency parsing, continuing a recent trend started by Kiperwasser and Goldberg (2016a) and Cross and Huang (2016a) of using bi-directional LSTM features. We plug our minimal feature set into the dynamic-programming framework of Huang and Sagae (2010) and Kuhlmann et al. (2011) to produce the first implementation of worst-case Opn3q exact decoders for arc-hybrid and arceager transition systems. With our minimal features, we also present Opn3q global training methods. Finally, using ensembles including our new parsers, we achieve the best unlabeled attachment score reported (to our knowledge) on the Chinese Treebank and the “second-best-in-class” result on the English Penn Treebank. Publication venue: EMNLP 2017", "title": "" }, { "docid": "39828596907746de12a31885c6ce7643", "text": "Hypervelocity (~1000-km/s) impact of a macroscopic particle (macron) has profound influences in high energy density physics and inertial fusion energy researches. As the charge-mass ratio of macrons is too low, the length of an electrostatic accelerator can reach hundreds to thousands of kilometers, rendering macron acceleration impractical. To reduce the accelerator length, a much higher electric field than what the most powerful klystrons can provide is desired. One practical choice may be the high-intensity charged particle beam ldquoblowing-piperdquo approach. In this approach, a high-intensity (~10-kA) medium-energy (0.5-2-MeV) long-pulse (10-1000-mus) positively charged ion beam shots to a heavily charged millimeter-size macron to create a local high-strength electric field (~1010 V/m), accelerating the macron efficiently. We will discuss the physics and challenges involved in this concept and give an illustrative simulation.", "title": "" }, { "docid": "67fdad898361edd4cf63b525b8af8b48", "text": "Traffic data is a fundamental component for applications and researches in transportation systems. However, real traffic data collected from loop detectors or other channels often include missing data which affects the relative applications and researches. This paper proposes an approach based on deep learning to impute the missing traffic data. The proposed approach treats the traffic data including observed data and missing data as a whole data item and restores the complete data with the deep structural network. The deep learning approach can discover the correlations contained in the data structure by a layer-wise pre-training and improve the imputation accuracy by conducting a fine-tuning afterwards. We analyze the imputation patterns that can be realized with the proposed approach and conduct a series of experiments. The results show that the proposed approach can keep a stable error under different traffic data missing rate. Deep learning is promising in the field of traffic data imputation.", "title": "" }, { "docid": "e82681b5140f3a9b283bbd02870f18d5", "text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. 
To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization", "title": "" }, { "docid": "c9ff6e6c47b6362aaba5f827dd1b48f2", "text": "IEC 62056 for upper-layer protocols and IEEE 802.15.4g for communication infrastructure are promising means of advanced metering infrastructure (AMI) in Japan. However, since the characteristics of a communication system based on these combined technologies have yet to be identified, this paper gives the communication failure rates and latency acquired by calculations. In addition, the calculation results suggest some adequate AMI configurations, and show its extensibility in consideration of the usage environment.", "title": "" }, { "docid": "9e10e151b9e032e79296b35d09d45bbf", "text": "PURPOSE\nAutomated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such atlas, template matching, or, edge and surface detection, have been applied to solve this task. However, applicability of these methods is usually limited by the characteristics of the images used in the study datasets, while breast MRI varies with respect to the different MRI protocols used, in addition to the variability in breast shapes. All this variability, in addition to various MRI artifacts, makes it a challenging task to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as \"U-net.\"\n\n\nMATERIALS AND METHODS\nWe used a dataset of 66 breast MRI's randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. 
In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: nonbreast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used Dice Similarity Coefficient (DSC) to measure the performances of the automated methods, with respect to the manual segmentations. Additionally, we computed Pearson's correlation between the breast density values computed based on manual and automated segmentations.\n\n\nRESULTS\nThe average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 obtained from 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. The average DSC values for FGT segmentation obtained from 3C U-net, 2C U-nets, and atlas-based methods were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than 0.957 as obtained from 2C U-nets (P < 0.0001, Steiger's Z-test with Bonferoni correction) and 0.938 as obtained from atlas-based method (P = 0.0016).\n\n\nCONCLUSIONS\nIn conclusion, we applied a deep-learning method, U-net, for segmenting breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results showed that U-net-based methods significantly outperformed the existing algorithms and resulted in significantly more accurate breast density computation.", "title": "" } ]
scidocsrr
b7200ac2ba1ec8aee7baf9428f1837ce
Poster Abstract: A 6LoWPAN Model for OMNeT++
[ { "docid": "64dc61e853f41654dba602c7362546b5", "text": "This paper introduces our work on the communication stack of wireless sensor networks. We present the IPv6 approach for wireless sensor networks called 6LoWPAN in its IETF charter. We then compare the different implementations of 6LoWPAN subsets for several sensor nodes platforms. We present our approach for the 6LoWPAN implementation which aims to preserve the advantages of modularity while keeping a small memory footprint and a good efficiency.", "title": "" }, { "docid": "a231d6254a136a40625728d7e14d7844", "text": "This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract This document describes the frame format for transmission of IPv6 packets and the method of forming IPv6 link-local addresses and statelessly autoconfigured addresses on IEEE 802.15.4 networks. Additional specifications include a simple header compression scheme using shared context and provisions for packet delivery in IEEE 802.15.4 meshes.", "title": "" } ]
[ { "docid": "9af22f6a1bbb4cbb13508b654e5fd7a5", "text": "We present a 3-D correspondence method to match the geometric extremities of two shapes which are partially isometric. We consider the most general setting of the isometric partial shape correspondence problem, in which shapes to be matched may have multiple common parts at arbitrary scales as well as parts that are not similar. Our rank-and-vote-and-combine algorithm identifies and ranks potentially correct matches by exploring the space of all possible partial maps between coarsely sampled extremities. The qualified top-ranked matchings are then subjected to a more detailed analysis at a denser resolution and assigned with confidence values that accumulate into a vote matrix. A minimum weight perfect matching algorithm is finally iterated to combine the accumulated votes into an optimal (partial) mapping between shape extremities, which can further be extended to a denser map. We test the performance of our method on several data sets and benchmarks in comparison with state of the art.", "title": "" }, { "docid": "c252cca4122984aac411a01ce28777f7", "text": "An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight with the camera mounted onboard the vehicle. The target considered consists of a finite set of stationary and disjoint points lying in a plane. Control of the position and orientation dynamics is decoupled using a visual error based on spherical centroid data, along with estimations of the linear velocity and the gravitational inertial direction extracted from image features and an embedded inertial measurement unit. The visual error used compensates for poor conditioning of the image Jacobian matrix by introducing a nonhomogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller, that ensures exponential convergence of the system considered, is derived for the full dynamics of the system using control Lyapunov function design techniques. Experimental results on a quadrotor UAV, developed by the French Atomic Energy Commission, demonstrate the robustness and performance of the proposed control strategy.", "title": "" }, { "docid": "2f5c25f08f360381ea3d46c8d66694f7", "text": "Router syslogs are messages that a router logs to describe a wide range of events observed by it. They are considered one of the most valuable data sources for monitoring network health and for trou- bleshooting network faults and performance anomalies. However, router syslog messages are essentially free-form text with only a minimal structure, and their formats vary among different vendors and router OSes. Furthermore, since router syslogs are aimed for tracking and debugging router software/hardware problems, they are often too low-level from network service management perspectives. Due to their sheer volume (e.g., millions per day in a large ISP network), detailed router syslog messages are typically examined only when required by an on-going troubleshooting investigation or when given a narrow time range and a specific router under suspicion. Automated systems based on router syslogs on the other hand tend to focus on a subset of the mission critical messages (e.g., relating to network fault) to avoid dealing with the full diversity and complexity of syslog messages. 
In this project, we design a Sys-logDigest system that can automatically transform and compress such low-level minimally-structured syslog messages into meaningful and prioritized high-level network events, using powerful data mining techniques tailored to our problem domain. These events are three orders of magnitude fewer in number and have much better usability than raw syslog messages. We demonstrate that they provide critical input to network troubleshooting, and net- work health monitoring and visualization.", "title": "" }, { "docid": "7866c0cdaa038f08112e629580c445cb", "text": "Cumulative exposure to repetitive and forceful activities may lead to musculoskeletal injuries which not only reduce workers’ efficiency and productivity, but also affect their quality of life. Thus, widely accessible techniques for reliable detection of unsafe muscle force exertion levels for human activity is necessary for their well-being. However, measurement of force exertion levels is challenging and the existing techniques pose a great challenge as they are either intrusive, interfere with humanmachine interface, and/or subjective in the nature, thus are not scalable for all workers. In this work, we use face videos and the photoplethysmography (PPG) signals to classify force exertion levels of 0%, 50%, and 100% (representing rest, moderate effort, and high effort), thus providing a non-intrusive and scalable approach. Efficient feature extraction approaches have been investigated, including standard deviation of the movement of different landmarks of the face, distances between peaks and troughs in the PPG signals. We note that the PPG signals can be obtained from the face videos, thus giving an efficient classification algorithm for the force exertion levels using face videos. Based on the data collected from 20 subjects, features extracted from the face videos give 90% accuracy in classification among the 100% and the combination of 0% and 50% datasets. Further combining the PPG signals provide 81.7% accuracy. The approach is also shown to be robust to the correctly identify force level when the person is talking, even though such datasets are not included in the training.", "title": "" }, { "docid": "105fe384f9dfb13aef82f4ff16f87821", "text": "Dengue hemorrhagic fever (DHF), a severe manifestation of dengue viral infection that can cause severe bleeding, organ impairment, and even death, affects between 15,000 and 105,000 people each year in Thailand. While all Thai provinces experience at least one DHF case most years, the distribution of cases shifts regionally from year to year. Accurately forecasting where DHF outbreaks occur before the dengue season could help public health officials prioritize public health activities. We develop statistical models that use biologically plausible covariates, observed by April each year, to forecast the cumulative DHF incidence for the remainder of the year. We perform cross-validation during the training phase (2000-2009) to select the covariates for these models. A parsimonious model based on preseason incidence outperforms the 10-y median for 65% of province-level annual forecasts, reduces the mean absolute error by 19%, and successfully forecasts outbreaks (area under the receiver operating characteristic curve = 0.84) over the testing period (2010-2014). We find that functions of past incidence contribute most strongly to model performance, whereas the importance of environmental covariates varies regionally. 
This work illustrates that accurate forecasts of dengue risk are possible in a policy-relevant timeframe.", "title": "" }, { "docid": "791cc656afc2d36e1f491c5a80b77b97", "text": "With the wide diffusion of smartphones and their usage in a plethora of processes and activities, these devices have been handling an increasing variety of sensitive resources. Attackers are hence producing a large number of malware applications for Android (the most spread mobile platform), often by slightly modifying existing applications, which results in malware being organized in families. Some works in the literature showed that opcodes are informative for detecting malware, not only in the Android platform. In this paper, we investigate if frequencies of ngrams of opcodes are effective in detecting Android malware and if there is some significant malware family for which they are more or less effective. To this end, we designed a method based on state-of-the-art classifiers applied to frequencies of opcodes ngrams. Then, we experimentally evaluated it on a recent dataset composed of 11120 applications, 5560 of which are malware belonging to several different families. Results show that an accuracy of 97% can be obtained on the average, whereas perfect detection rate is achieved for more than one malware family.", "title": "" }, { "docid": "4b68d3c94ef785f80eac9c4c6ca28cfe", "text": "We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2-norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.", "title": "" }, { "docid": "242a2f64fc103af641320c1efe338412", "text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. 
We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.", "title": "" }, { "docid": "7e08a713a97f153cdd3a7728b7e0a37c", "text": "The availability of long circulating, multifunctional polymers is critical to the development of drug delivery systems and bioconjugates. The ease of synthesis and functionalization make linear polymers attractive but their rapid clearance from circulation compared to their branched or cyclic counterparts, and their high solution viscosities restrict their applications in certain settings. Herein, we report the unusual compact nature of high molecular weight (HMW) linear polyglycerols (LPGs) (LPG - 100; M(n) - 104 kg mol(-1), M(w)/M(n) - 1.15) in aqueous solutions and its impact on its solution properties, blood compatibility, cell compatibility, in vivo circulation, biodistribution and renal clearance. The properties of LPG have been compared with hyperbranched polyglycerol (HPG) (HPG-100), linear polyethylene glycol (PEG) with similar MWs. The hydrodynamic size and the intrinsic viscosity of LPG-100 in water were considerably lower compared to PEG. The Mark-Houwink parameter of LPG was almost 10-fold lower than that of PEG. LPG and HPG demonstrated excellent blood and cell compatibilities. Unlike LPG and HPG, HMW PEG showed dose dependent activation of blood coagulation, platelets and complement system, severe red blood cell aggregation and hemolysis, and cell toxicity. The long blood circulation of LPG-100 (t(1/2β,) 31.8 ± 4 h) was demonstrated in mice; however, it was shorter compared to HPG-100 (t(1/2β,) 39.2 ± 8 h). The shorter circulation half life of LPG-100 was correlated with its higher renal clearance and deformability. Relatively lower organ accumulation was observed for LPG-100 and HPG-100 with some influence of on the architecture of the polymers. Since LPG showed better biocompatibility profiles, longer in vivo circulation time compared to PEG and other linear drug carrier polymers, and has multiple functionalities for conjugation, makes it a potential candidate for developing long circulating multifunctional drug delivery systems similar to HPG.", "title": "" }, { "docid": "cbb5856d08a9f8a99b2b6a48ad6fc573", "text": "Programmable Logic Controller (PLC) technology plays an important role in the automation architectures of several critical infrastructures such as Industrial Control Systems (ICS), controlling equipment in contexts such as chemical processes, factory lines, power production plants or power distribution grids, just to mention a few examples. Despite their importance, PLCs constitute one of the weakest links in ICS security, frequently due to reasons such as the absence of secure communication mechanisms, authenticated access or system integrity checks. 
While events such as the Stuxnet worm have raised awareness for this problem, industry has slowly reacted, either due to reliability or cost concerns. This paper introduces the Shadow Security Unit, a low-cost device deployed in parallel with a PLC or Remote Terminal Unit (RTU), being capable of transparently intercepting its communications control channels and physical process I/O lines to continuously assess its security and operational status. The proposed device does not require significant changes to the existing control network, being able to work in standalone or integrated within an ICS protection framework.", "title": "" }, { "docid": "2b1048b3bdb52c006437b18d7b458871", "text": "A road interpretation module is presented! which is part of a real-time vehicle guidance system for autonomous driving. Based on bifocal computer vision, the complete system is able to drive a vehicle on marked or unmarked roads, to detect obstacles, and to react appropriately. The hardware is a network of 23 transputers, organized in modular clusters. Parallel modules performing image analysis, feature extraction, object modelling, sensor data integration and vehicle control, are organized in hierarchical levels. The road interpretation module is based on the principle of recursive state estimation by Kalman filter techniques. Internal 4-D models of the road, vehicle position, and orientation are updated using data produced by the image-processing module. The system has been implemented on two vehicles (VITA and VaMoRs) and demonstrated in the framework of PROMETHEUS, where the ability of autonomous driving through narrow curves and of lane changing were demonstrated. Meanwhile, the system has been tested on public roads in real traffic situations, including travel on a German Autobahn autonomously at speeds up to 85 km/h. Belcastro, C.M., Fischl, R., and M. Kam. “Fusion Techniques Using Distributed Kalman Filtering for Detecting Changes in Systems.” Proceedings of the 1991 American Control Conference. 26-28 June 1991: Boston, MA. American Autom. Control Council, 1991. Vol. 3: (2296-2298).", "title": "" }, { "docid": "a33f862d0b7dfde7b9f18aa193db9acf", "text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor  [email protected] Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 
88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology that uses plants to\" green space \"of heavy metals in the soil through the roots. While vacuum cleaners and you should be able to withstand and survive high levels of heavy metals in the soil unique plants (Baker, 2000). The main result in increasing the population and more industrialization are caused water and soil contamination that is harmful for environment as well as human health. In the whole world, contamination in the soil by heavy metals has become a very serious issue. So, removal of these heavy metals from the soil is very necessary to protect the soil and human health. Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticide and fertilizers are the main source that deteriorate the soil health (Chirakkara et al., 2016). Heavy metals have great role in biological system, so we can divide into two groups’ essentials and non essential. Those heavy metals which play a vital role in biochemical and physiological function in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and cupper (Cu) (Cempel and Nikel, 2006). In some living organisms, heavy metals don’t play any role in biochemical as well as physiological functions are called non essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and Cadmium (Cd) (Dabonne et al., 2010). Cadmium (Cd) is consider as a non essential heavy metal that is more toxic at very low concentration as compare to other non essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in human health through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is very important problem to overcome these issues (Neilson and Rajakaruna, 2015). Several methods are used to remove the Cd from the soil, such as physical, chemical and physiochemical to increase the soil pH (Liu et al., 2015). The main source of Cd contamination in the soil and environment is automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique that is used in removing the heavy metals form the soil (Ma et al., 2011). Plants update the heavy metals through the root and change the soil properties which are helpful in increasing the soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind and rain, groundwater and implementation of pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. 
Once inside the plant and chemicals can be stored in the roots, stems, or leaves. Change of less harmful chemicals within the plant. Or a change in the gases that are released into the air as a candidate plant Agency (US Environmental Protection, 2001). Phytoremediation is the direct use of living green plants and minutes to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater bodies with low concentrations of pollutants a large clean space and shallow depths site offers favorable treatment plant (associated with US Environmental Protection Agency 0.2011) circumstances. Phytoremediation is the use of plants for the treatment of contaminated soil sites and sediments J. Bio. Env. Sci. 2017 90 | Shakoor et al. and water. It is best applied at sites of persistent organic pollution with shallow, nutrient, or metal. Phytoremediation is an emerging technology for contaminated sites is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils on the site using the processing plants. Phytoremediation is a plant that excessive accumulation of metals in contaminated soils in growth (National Research Council, 1997). Phytoremediation to facilitate the concentration of pollutants in contaminated soil, water or air is composed, and plants able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation have several techniques and these techniques depend on different factors, like soil type, contaminant type, soil depth and level of ground water. Special operation situations and specific technology applied at the contaminated site (Hyman and Dupont 2001). Techniques of phytoremediation Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. Phytoextraction Phytoextraction is also called phytoabsorption or phytoaccumulation, in this technique heavy metals are removed by up taking through root form the water and soil environment, and accumulated into the shoot part (Rafati et al., 2011). Phytostabilisation Phytostabilisation is also known as phytoimmobilization. In this technique different type of plants are used for stabilization the contaminants from the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the different contaminants are reduced. So, this technique is help to avoiding their movement into food chain as well as into ground water (Erakhrumen, 2007). Nevertheless, Phytostabilisation is the technique by which movement of heavy metals can be stop but its not permanent solution to remove the contamination from the soil. Basically, phytostabilisation is the management approach for inactivating the potential of toxic heavy metals form the soil environment contaminants (Vangronsveld et al., 2009).", "title": "" }, { "docid": "6cd301f1b6ffe64f95b7d63eb0356a87", "text": "The purpose of this study is to analyze factors affecting on online shopping behavior of consumers that might be one of the most important issues of e-commerce and marketing field. However, there is very limited knowledge about online consumer behavior because it is a complicated socio-technical phenomenon and involves too many factors. 
One of the objectives of this study is to address the shortcomings of previous studies that did not examine the main factors influencing online shopping behavior. This goal was pursued using a model that examines the impact of perceived risks, infrastructural variables and return policy on attitude toward online shopping, and the impact of subjective norms, perceived behavioral control, domain-specific innovativeness and attitude on online shopping behavior, as the hypotheses of the study. To investigate these hypotheses, 200 questionnaires were distributed among online stores in Iran. Respondents to the questionnaire were consumers of online stores in Iran who were randomly selected. Finally, regression analysis was applied to the data in order to test the hypotheses of the study. This study can be considered applied research from a purpose perspective and descriptive-survey with regard to its nature and method (type of correlation). The study identified that financial risk and non-delivery risk negatively affected attitude toward online shopping. Results also indicated that domain-specific innovativeness and subjective norms positively affect online shopping behavior. Furthermore, attitude toward online shopping positively affected the online shopping behavior of consumers.", "title": "" }, { "docid": "00f31f21742a843ce6c4a00f3f6e6259", "text": "Recent developments in digital technologies bring about considerable business opportunities but also impose significant challenges on firms in all industries. While some industries, e.g., newspapers, have already profoundly reorganized the mechanisms of value creation, delivery, and capture during the course of digitalization (Karimi & Walter, 2015, 2016), many process-oriented and asset intensive industries have not yet fully evaluated and exploited the potential applications (Rigby, 2014). Although the process industries have successfully used advancements in technologies to optimize processes in the past (Kim et al., 2011), digitalization poses an unprecedented shift in technology that exceeds conventional technological evolution (Svahn et al., 2017). Driven by augmented processing power, connectivity of devices (IoT), advanced data analytics, and sensor technology, innovation activities in the process industries now break away from established innovation paths (Svahn et al., 2017; Tripsas, 2009). In contrast to prior innovations that were primarily bound to physical devices, new products are increasingly embedded into systems of value creation that span the physical and digital world (Parmar et al., 2014; Rigby, 2014; Yoo et al., 2010a). On this new playing field, firms and researchers are jointly interested in the organizational characteristics and capabilities that are required to gain a competitive advantage (e.g. Fink, 2011). Whereas prior studies cover the effect of digital transformation on innovation in various industries like newspaper (Karimi and Walter, 2015, 2016), automotive (Henfridsson and Yoo, 2014; Svahn et al., 2017), photography (Tripsas, 2009), and manufacturing (Jonsson et al., 2008), there is a relative dearth of studies that cover the impact of digital transformation in the process industries (Westergren and Holmström, 2012).
The process industries are characterized by asset and research intensity, strong integration into physical locations, and often include value chains that are complex and feature aspects of rigidity (Lager Research Paper Digitalization in the process industries – Evidence from the German water industry", "title": "" }, { "docid": "6e76496dbe78bd7ffa9359a41dc91e69", "text": "US Supreme Court rulings concerning sanctions for juvenile offenders have drawn on the science of brain development and concluded that adolescents are inherently less mature than adults in ways that render them less culpable. This conclusion departs from arguments made in cases involving the mature minor doctrine, in which teenagers have been portrayed as comparable to adults in their capacity to make medical decisions. I attempt to reconcile these apparently incompatible views of adolescents' decision-making competence. Adolescents are indeed less mature than adults when making decisions under conditions that are characterized by emotional arousal and peer pressure, but adolescents aged 15 and older are just as mature as adults when emotional arousal is minimized and when they are not under the influence of peers, conditions that typically characterize medical decision-making. The mature minor doctrine, as applied to individuals 15 and older, is thus consistent with recent research on adolescent development.", "title": "" }, { "docid": "6cbcd5288423895c4aeff8524ca5ac6c", "text": "We report a quantitative analysis of the cross-utterance coordination observed in child-directed language, where successive utterances often overlap in a manner that makes their constituent structure more prominent, and describe the application of a recently published unsupervised algorithm for grammar induction to the largest available corpus of such language, producing a grammar capable of accepting and generating novel wellformed sentences. We also introduce a new corpus-based method for assessing the precision and recall of an automatically acquired generative grammar without recourse to human judgment. The present work sets the stage for the eventual development of more powerful unsupervised algorithms for language acquisition, which would make use of the coordination structures present in natural child-directed speech.", "title": "" }, { "docid": "12e2d86add1918393291ea55f99a44a0", "text": "Supervised classification algorithms aim at producing a learning model from a labeled training set. Various successful techniques have been proposed to solve the problem in the binary classification case. The multiclass classification case is more delicate, as many of the algorithms were introduced basically to solve binary classification problems. In this short survey we investigate the various techniques for solving the multiclass classification problem.", "title": "" }, { "docid": "910678cdd552fe5d0d2c288784ca550f", "text": "Livestock production today has become a very complex process since several requirements have to be combined such as: food safety, animal welfare, animal health, environmental impact and sustainability in a wider sense. The consequence is a growing need to balance many of these variables during the production process. In the past farmers were monitoring their animals in their daily work by normal audio-visual observation like ethologists still do in their research. Today however the number of animals per farm has increased so much that this has become impossible. 
Another problem is that visual observation can never be done continuously, 24 hours a day. One of the objectives of Precision Livestock Farming (PLF) is to develop the technology and tools for the on-line monitoring of farm animals, continuously throughout their lives and in a fully automatic way. This technology will never replace the farmer but can support him as a tool that automatically and continuously delivers quantitative information about the status of his animals. Like other living organisms, farm animals respond to their environment through several behavioural and physiological variables. Many sensors and sensing techniques are under development to measure such behavioural and biological responses of farm animals. This can be done by new sensors or by sound analysis, image analysis, etc. A major problem in monitoring animals is the fact that animals themselves are complex systems that are individually different and that are so-called time-varying dynamic systems, since their behaviour and health status can change at any time. Another challenge for PLF is therefore to develop reliable monitoring tools for such Complex Individual Time-varying Dynamic ("CITD") systems. In this paper we discuss what PLF is and why it is important. Next we explain the basic principles. Further, we show examples of PLF monitoring tools, such as an on-line monitor for health status based on continuous analysis of the sound produced by pigs. Another example shows the on-line automatic identification of the behaviour of individual laying hens by continuous analysis of 2D images from a top-view camera. Next we demonstrate the potential of PLF for more efficient control of biological processes. Finally, we discuss how implementation might be realised and what the risks and problems are. The technology that is already available and under development today can be used for efficient and continuous monitoring if an engineering approach is combined with the expertise of ethologists, physiologists and veterinarians who are familiar with the animal as a living organism.", "title": "" }, { "docid": "c7a8cd22ef67abcdeed13b86825e4d7e", "text": "Recent advancements in computer vision, multimedia and the Internet of Things (IoT) have shown that human detection methods are useful for intelligent transportation system applications in smart environments. However, detection of a human in the real world remains a challenging problem. Histogram of oriented gradients (HOG) based human detection places an emphasis on finding an effective solution to the problems of significant changes in viewpoint and fixed-resolution detection, but such detectors are expensive to compute. The proposed algorithm aims to reduce the computations using approximation methods and adapts to varying scale. The features are modeled at different scales for training the classifier. Experiments have been conducted on human datasets to demonstrate the superior performance of the proposed approach in human detection, and discussions are made on integrating it to increase personalization for building smart environments using IoT.", "title": "" } ]
scidocsrr
e12f1dea29965bfcd5908d69671d7e49
Access Control Models for Virtual Object Communication in Cloud-Enabled IoT
[ { "docid": "c2571afd6f2b9e9856c8f8c4eeb60b81", "text": "In the Internet of Things, services can be provisioned using centralized architectures, where central entities acquire, process, and provide information. Alternatively, distributed architectures, where entities at the edge of the network exchange information and collaborate with each other in a dynamic way, can also be used. In order to understand the applicability and viability of this distributed approach, it is necessary to know its advantages and disadvantages – not only in terms of features but also in terms of security and privacy challenges. The purpose of this paper is to show that the distributed approach has various challenges that need to be solved, but also various interesting properties and strengths.", "title": "" }, { "docid": "a08fe0c015f5fc02b7654f3fd00fb599", "text": "Recently, there has been considerable interest in attribute based access control (ABAC) to overcome the limitations of the dominant access control models (i.e, discretionary-DAC, mandatory-MAC and role based-RBAC) while unifying their advantages. Although some proposals for ABAC have been published, and even implemented and standardized, there is no consensus on precisely what is meant by ABAC or the required features of ABAC. There is no widely accepted ABAC model as there are for DAC, MAC and RBAC. This paper takes a step towards this end by constructing an ABAC model that has “just sufficient” features to be “easily and naturally” configured to do DAC, MAC and RBAC. For this purpose we understand DAC to mean owner-controlled access control lists, MAC to mean lattice-based access control with tranquility and RBAC to mean flat and hierarchical RBAC. Our central contribution is to take a first cut at establishing formal connections between the three successful classical models and desired ABAC models.", "title": "" } ]
[ { "docid": "8eb96ae8116a16e24e6a3b60190cc632", "text": "IT professionals are finding that more of their IT investments are being measured against a knowledge management (KM) metric. Those who want to deploy foundation technologies such as groupware, CRM or decision support tools, but fail to justify them on the basis of their contribution to KM, may find it difficult to get funding unless they can frame them within the KM context. Determining KM's pervasiveness and impact is analogous to measuring the contribution of marketing, employee development, or any other management or organizational competency. This paper addresses the problem of developing measurement models for KM metrics and discusses what current KM metrics are in use, and examine their sustainability and soundness in assessing knowledge utilization and retention of generating revenue. The paper will then discuss the use of a Balanced Scorecard approach to determine a business-oriented relationship between strategic KM usage and IT strategy and implementation.", "title": "" }, { "docid": "8a6e062d17ee175e00288dd875603a9c", "text": "Code summarization, aiming to generate succinct natural language description of source code, is extremely useful for code search and code comprehension. It has played an important role in software maintenance and evolution. Previous approaches generate summaries by retrieving summaries from similar code snippets. However, these approaches heavily rely on whether similar code snippets can be retrieved, how similar the snippets are, and fail to capture the API knowledge in the source code, which carries vital information about the functionality of the source code. In this paper, we propose a novel approach, named TL-CodeSum, which successfully uses API knowledge learned in a different but related task to code summarization. Experiments on large-scale real-world industry Java projects indicate that our approach is effective and outperforms the state-of-the-art in code summarization.", "title": "" }, { "docid": "398c791338adf824a81a2bfb8f35c6bb", "text": "Hybrid Reality Environments represent a new kind of visualization spaces that blur the line between virtual environments and high resolution tiled display walls. This paper outlines the design and implementation of the CAVE2 TM Hybrid Reality Environment. CAVE2 is the world’s first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it will enable users to simultaneously view both 2D and 3D information, providing more flexibility for mixed media applications. CAVE2 is a cylindrical system of 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axisoptimized passive stereo LCD panels, creating an approximately 320 degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewingallowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D model, the room can operate like a traditional tiled display wall enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be simultaneously supported. 
The ability to treat immersive work spaces in this Hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE) a system for supporting 2D tiled displays, with Omegalib a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.", "title": "" }, { "docid": "acddf623a4db29f60351f41eb8d0b113", "text": "In an age where people are becoming increasing likely to trust information found through online media, journalists have begun employing techniques to lure readers to articles by using catchy headlines, called clickbait. These headlines entice the user into clicking through the article whilst not providing information relevant to the headline itself. Previous methods of detecting clickbait have explored techniques heavily dependent on feature engineering, with little experimentation having been tried with neural network architectures. We introduce a novel model combining recurrent neural networks, attention layers and image embeddings. Our model uses a combination of distributed word embeddings derived from unannotated corpora, character level embeddings calculated through Convolutional Neural Networks. These representations are passed through a bidirectional LSTM with an attention layer. The image embeddings are also learnt from large data using CNNs. Experimental results show that our model achieves an F1 score of 65.37% beating the previous benchmark of 55.21%.", "title": "" }, { "docid": "fc875b50a03dcae5cbde23fa7f9b16bf", "text": "Although considerable research has shown the importance of social connection for physical health, little is known about the higher-level neurocognitive processes that link experiences of social connection or disconnection with health-relevant physiological responses. Here we review the key physiological systems implicated in the link between social ties and health and the neural mechanisms that may translate social experiences into downstream health-relevant physiological responses. Specifically, we suggest that threats to social connection may tap into the same neural and physiological 'alarm system' that responds to other critical survival threats, such as the threat or experience of physical harm. Similarly, experiences of social connection may tap into basic reward-related mechanisms that have inhibitory relationships with threat-related responding. Indeed, the neurocognitive correlates of social disconnection and connection may be important mediators for understanding the relationships between social ties and health.", "title": "" }, { "docid": "3f5097b33aab695678caca712b649a8f", "text": "I quantitatively measure the nature of the media’s interactions with the stock market using daily content from a popular Wall Street Journal column. I find that high media pessimism predicts downward pressure on market prices followed by a reversion to fundamentals, and unusually high or low pessimism predicts high market trading volume. These results and others are consistent with theoretical models of noise and liquidity traders. However, the evidence is inconsistent with theories of media content as a proxy for new information about fundamental asset values, as a proxy for market volatility, or as a sideshow with no relationship to asset markets. ∗Tetlock is at the McCombs School of Business, University of Texas at Austin. 
I am indebted to Robert Stambaugh (the editor), an anonymous associate editor and an anonymous referee for their suggestions. I am grateful to Aydogan Alti, John Campbell, Lorenzo Garlappi, Xavier Gabaix, Matthew Gentzkow, John Griffin, Seema Jayachandran, David Laibson, Terry Murray, Alvin Roth, Laura Starks, Jeremy Stein, Philip Tetlock, Sheridan Titman and Roberto Wessels for their comments. I thank Philip Stone for providing the General Inquirer software and Nathan Tefft for his technical expertise. I appreciate Robert O’Brien’s help in providing information about the Wall Street Journal. I also acknowledge the National Science Foundation, Harvard University and the University of Texas at Austin for their financial support. All mistakes in this article are my own.", "title": "" }, { "docid": "d6602271d7024f7d894b14da52299ccc", "text": "BACKGROUND\nMost articles on face composite tissue allotransplantation have considered ethical and immunologic aspects. Few have dealt with the technical aspects of graft procurement. The authors report the technical difficulties involved in procuring a lower face graft for allotransplantation.\n\n\nMETHODS\nAfter a preclinical study of 20 fresh cadavers, the authors carried out an allotransplantation of the lower two-thirds of the face on a patient in January of 2007. The graft included all the perioral muscles, the facial nerves (VII, V2, and V3) and, for the first time, the parotid glands.\n\n\nRESULTS\nThe preclinical study and clinical results confirm that complete revascularization of a graft consisting of the lower two-thirds of the face is possible from a single facial pedicle. All dissections were completed within 3 hours. Graft procurement for the clinical study took 4 hours. The authors harvested the soft tissues of the face en bloc to save time and to prevent tissue injury. They restored the donor's face within approximately 4 hours, using a resin mask colored to resemble the donor's skin tone. All nerves were easily reattached. Voluntary activity was detected on clinical examination 5 months postoperatively, and electromyography confirmed nerve regrowth, with activity predominantly on the left side. The patient requested local anesthesia for biopsies performed in month 4.\n\n\nCONCLUSIONS\nPartial facial composite tissue allotransplantation of the lower two-thirds of the face is technically feasible, with a good cosmetic and functional outcome in selected clinical cases. Flaps of this type establish vascular and neurologic connections in a reliable manner and can be procured with a rapid, standardized procedure.", "title": "" }, { "docid": "bba6fad7d1d32683e95e475632c9a9e5", "text": "A great variety of text tasks such as topic or spam identification, user profiling, and sentiment analysis can be posed as a supervised learning problem and tackle using a text classifier. A text classifier consists of several subprocesses, some of them are general enough to be applied to any supervised learning problem, whereas others are specifically designed to tackle a particular task, using complex and computational expensive processes such as lemmatization, syntactic analysis, etc. Contrary to traditional approaches, we propose a minimalistic and wide system able to tackle text classification tasks independent of domain and language, namely μTC. It is composed by some easy to implement text transformations, text representations, and a supervised learning algorithm. These pieces produce a competitive classifier even in the domain of informally written text. 
We provide a detailed description of μTC along with an extensive experimental comparison with relevant state-of-the-art methods. μTC was compared on 30 different datasets. Regarding accuracy, μTC obtained the best performance in 20 datasets while achieving competitive results in the remaining 10. The compared datasets include several problems like topic and polarity classification, spam detection, user profiling and authorship attribution. Furthermore, it is important to state that our approach allows the usage of the technology even without knowledge of machine learning and natural language processing. ∗CONACyT Consejo Nacional de Ciencia y Tecnología, Dirección de Cátedras, Insurgentes Sur 1582, Crédito Constructor 03940, Ciudad de México, México. †INFOTEC Centro de Investigación e Innovación en Tecnologías de la Información y Comunicación, Circuito Tecnopolo Sur No 112, Fracc. Tecnopolo Pocitos II, Aguascalientes 20313, México. ‡Centro de Investigación en Geografía y Geomática “Ing. Jorge L. Tamayo”, A.C., Circuito Tecnopolo Norte No. 117, Col. Tecnopolo Pocitos II, C.P. 20313, Aguascalientes, Ags, México. arXiv:1704.01975v2 [cs.CL] 14 Sep 2017", "title": "" }, { "docid": "07425e53be0f6314d52e3b4de4d1b601", "text": "Delay discounting was investigated in opioid-dependent and non-drug-using control participants. The latter participants were matched to the former on age, gender, education, and IQ. Participants in both groups chose between hypothetical monetary rewards available either immediately or after a delay. Delayed rewards were $1,000, and the immediate-reward amount was adjusted until choices reflected indifference. This procedure was repeated at each of 7 delays (1 week to 25 years). Opioid-dependent participants were given a second series of choices between immediate and delayed heroin, using the same procedures (i.e., the amount of delayed heroin was that which could be purchased with $1,000). Opioid-dependent participants discounted delayed monetary rewards significantly more than did non-drug-using participants. Furthermore, opioid-dependent participants discounted delayed heroin significantly more than delayed money.", "title": "" }, { "docid": "7ca7ec2efe89bc031cc8aa5ce549c7f5", "text": "Conventional reverse vending machines use complex image processing technology to detect bottles, which makes them more expensive. In this paper the design of a Smart Bottle Recycle Machine (SBRM) is presented. It is designed on a Field Programmable Gate Array (FPGA) using an ultrasonic range sensor, which is readily available at a low cost. The sensor was used to calculate the number of bottles and to distinguish between them. The main objective of this project is to build an SBRM at a cheaper production cost. This project was implemented on an Altera DE2-115 board using Verilog HDL. This prototype enables the user to recycle plastic bottles and receive reward points. FPGA was chosen because a hardware-based implementation on an FPGA is usually much faster than a software-based implementation on a microcontroller. The former is also capable of executing concurrent parallel processes at high speed, whereas the latter can only do a limited amount of parallel execution. So, overall, FPGAs are more efficient than microcontrollers for the development of reliable and real-time applications.
The developed project is environment friendly and cost effective.", "title": "" }, { "docid": "61d506905286fc3297622d1ac39534f0", "text": "In this paper we present the setup of an extensive Wizard-of-Oz environment used for the data collection and the development of a dialogue system. The envisioned Perception and Interaction Assistant will act as an independent dialogue partner. Passively observing the dialogue between the two human users with respect to a limited domain, the system should take the initiative and get meaningfully involved in the communication process when required by the conversational situation. The data collection described here involves audio and video data. We aim at building a rich multi-media data corpus to be used as a basis for our research which includes, inter alia, speech and gaze direction recognition, dialogue modelling and proactivity of the system. We further aspire to obtain data with emotional content to perfom research on emotion recognition, psychopysiological and usability analysis.", "title": "" }, { "docid": "6c5a5bc775316efc278285d96107ddc6", "text": "STUDY DESIGN\nRetrospective study of 55 consecutive patients with spinal metastases secondary to breast cancer who underwent surgery.\n\n\nOBJECTIVE\nTo evaluate the predictive value of the Tokuhashi score for life expectancy in patients with breast cancer with spinal metastases.\n\n\nSUMMARY OF BACKGROUND DATA\nThe score, composed of 6 parameters each rated from 0 to 2, has been proposed by Tokuhashi and colleagues for the prognostic assessment of patients with spinal metastases.\n\n\nMETHODS\nA total of 55 patients surgically treated for vertebral metastases secondary to breast cancer were studied. The score was calculated for each patient and, according to Tokuhashi, the patients were divided into 3 groups with different life expectancy according to their total number of scoring points. In a second step, the grouping for prognosis was modified to get a better correlation of the predicted and definitive survival.\n\n\nRESULTS\nApplying the Tokuhashi score for the estimation of life expectancy of patients with breast cancer with vertebral metastases provided very reliable results. However, the original analysis by Tokuhashi showed a limited correlation between predicted and real survival for each prognostic group. Therefore, our patients were divided into modified prognostic groups regarding their total number of scoring points, leading to a higher significance of the predicted prognosis in each group (P < 0.0001), and a better correlation of the predicted and real survival.\n\n\nCONCLUSION\nThe modified Tokuhashi score assists in decision making based on reliable estimators of life expectancy in patients with spinal metastases secondary to breast cancer.", "title": "" }, { "docid": "c61c350d6c7bfe7eaae2cd4b2aa452cf", "text": "It is a well-established finding that the central executive is fractionated in at least three separable component processes: Updating, Shifting, and Inhibition of information (Miyake et al., 2000). However, the fractionation of the central executive among the elderly has been less well explored, and Miyake's et al. latent structure has not yet been integrated with other models that propose additional components, such as access to long-term information. Here we administered a battery of classic and newer neuropsychological tests of executive functions to 122 healthy individuals aged between 48 and 91 years. 
The test scores were subjected to a latent variable analysis (LISREL), and yielded four factors. The factor structure obtained was broadly consistent with Miyake et al.'s three-factor model. However, an additional factor, which was labeled 'efficiency of access to long-term memory', and a mediator factor ('speed of processing') were apparent in our structural equation analysis. Furthermore, the best model that described executive functioning in our sample of healthy elderly adults included a two-factor solution, thus indicating a possible mechanism of dedifferentiation, which involves larger correlations and interdependence of latent variables as a consequence of cognitive ageing. These results are discussed in the light of current models of prefrontal cortex functioning.", "title": "" }, { "docid": "2e66317dfe4005c069ceac2d4f9e3877", "text": "The Semantic Web presents the vision of a distributed, dynamically growing knowledge base founded on formal logic. Common users, however, seem to have problems even with the simplest Boolean expression. As queries from web search engines show, the great majority of users simply do not use Boolean expressions. So how can we help users to query a web of logic that they do not seem to understand? We address this problem by presenting Ginseng, a quasi natural language guided query interface to the Semantic Web. Ginseng relies on a simple question grammar which gets dynamically extended by the structure of an ontology to guide users in formulating queries in a language seemingly akin to English. Based on the grammar Ginseng then translates the queries into a Semantic Web query language (RDQL), which allows their execution. Our evaluation with 20 users shows that Ginseng is extremely simple to use without any training (as opposed to any logic-based querying approach) resulting in very good query performance (precision = 92.8%, recall = 98.4%). We, furthermore, found that even with its simple grammar/approach Ginseng could process over 40% of questions from a query corpus without modification.", "title": "" }, { "docid": "739aaf487d6c5a7b7fe9d0157d530382", "text": "A blockchain framework is presented for addressing the privacy and security challenges associated with the Big Data in smart mobility. It is composed of individuals, companies, government and universities where all the participants collect, own, and control their data. Each participant shares their encrypted data to the blockchain network and can make information transactions with other participants as long as both party agrees to the transaction rules (smart contract) issued by the owner of the data. Data ownership, transparency, auditability and access control are the core principles of the proposed blockchain for smart mobility Big Data.", "title": "" }, { "docid": "a15c94c0ec40cb8633d7174b82b70a16", "text": "Koenigs, Young and colleagues [1] recently tested patients with emotion-related damage in the ventromedial prefrontal cortex (VMPFC) usingmoral dilemmas used in previous neuroimaging studies [2,3]. These patients made unusually utilitarian judgments (endorsing harmful actions that promote the greater good). My collaborators and I have proposed a dual-process theory of moral judgment [2,3] that we claim predicts this result. In a Research Focus article published in this issue of Trends in Cognitive Sciences, Moll and de Oliveira-Souza [4] challenge this interpretation. Our theory aims to explain some puzzling patterns in commonsense moral thought. 
For example, people usually approve of diverting a runaway trolley thatmortally threatens five people onto a side-track, where it will kill only one person. And yet people usually disapprove of pushing someone in front of a runaway trolley, where this will kill the person pushed, but save five others [5]. Our theory, in a nutshell, is this: the thought of pushing someone in front of a trolley elicits a prepotent, negative emotional response (supported in part by the medial prefrontal cortex) that drives moral disapproval [2,3]. People also engage in utilitarian moral reasoning (aggregate cost–benefit analysis), which is likely subserved by the dorsolateral prefrontal cortex (DLPFC) [2,3]. When there is no prepotent emotional response, utilitarian reasoning prevails (as in the first case), but sometimes prepotent emotions and utilitarian reasoning conflict (as in the second case). This conflict is detected by the anterior cingulate cortex, which signals the need for cognitive control, to be implemented in this case by the anterior DLPFC [Brodmann’s Areas (BA) 10/46]. Overriding prepotent emotional responses requires additional cognitive control and, thus, we find increased activity in the anterior DLPFC when people make difficult utilitarian moral judgments [3]. More recent studies support this theory: if negative emotions make people disapprove of pushing the man to his death, then inducing positive emotion might lead to more utilitarian approval, and this is indeed what happens [6]. Likewise, patients with frontotemporal dementia (known for their ‘emotional blunting’) should more readily approve of pushing the man in front of the trolley, and they do [7]. This finding directly foreshadows the hypoemotional VMPFC patients’ utilitarian responses to this and other cases [1]. Finally, we’ve found that cognitive load selectively interferes with utilitarian moral judgment,", "title": "" }, { "docid": "fe25930abd98cba844a6e7a849dae621", "text": "Research in Autonomous Mobile Manipulation critically depends on the availability of adequate experimental platforms. In this paper, we describe an ongoing effort at the University of Massachusetts Amherst to construct a hardware platform with redundant kinematic degrees of freedom, a comprehensive sensor suite, and significant end-effector capabilities for manipulation. In our research, we pursue an end-effector centric view of autonomous mobile manipulation. In support of this view, we are developing a comprehensive software suite to provide a high level of competency in robot control and perception. This software suite is based on a multi-objective, tasklevel motion control framework. We use this control framework to integrate a variety of motion capabilities, including taskbased force or position control of the end-effector, collision-free global motion for the entire mobile manipulator, and mapping and navigation for the mobile base. We also discuss our efforts in developing perception capabilities targeted to problems in autonomous mobile manipulation. Preliminary experiments on our UMass Mobile Manipulator (UMan) are presented.", "title": "" }, { "docid": "c4332dfb8e8117c3deac7d689b8e259b", "text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. 
Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.", "title": "" }, { "docid": "021bed3f2c2f09db1bad7d11108ee430", "text": "This is a review of Introduction to Circle Packing: The Theory of Discrete Analytic Functions, by Kenneth Stephenson, Cambridge University Press, Cambridge UK, 2005, pp. i-xii, 1–356, £42, ISBN-13 978-0-521-82356-2. 1. The Context: A Personal Reminiscence Two important stories in the recent history of mathematics are those of the geometrization of topology and the discretization of geometry. Having come of age during the unfolding of these stories as both observer and practitioner, this reviewer does not hold the detachment of the historian and, perhaps, can be forgiven the personal accounting that follows, along with its idiosyncratic telling. The first story begins at a time when the mathematical world is entrapped by abstraction. Bourbaki reigns and generalization is the cry of the day. Coxeter is a curious doddering uncle, at best tolerated, at worst vilified as a practitioner of the unsophisticated mathematics of the nineteenth century. 1.1. The geometrization of topology. It is 1978 and I have just begun my graduate studies in mathematics. There is some excitement in the air over ideas of Bill Thurston that purport to offer a way to resolve the Poincaré conjecture by using nineteenth century mathematics—specifically, the noneuclidean geometry of Lobachevski and Bolyai—to classify all 3-manifolds. These ideas finally appear in a set of notes from Princeton a couple of years later, and the notes are both fascinating and infuriating—theorems are left unstated and often unproved, chapters are missing never to be seen, the particular dominates—but the notes are bulging with beautiful and exciting ideas, often with but sketches of intricate arguments to support the landscape that Thurston sees as he surveys the topology of 3-manifolds. Thurston’s vision is a throwback to the previous century, having much in common with the highly geometric, highly particular landscape that inspired Felix Klein and Max Dehn. These geometers walked around and within Riemann surfaces, one of the hot topics of the day, knew them intimately, and understood them in their particularity, not from the rarified heights that captured the mathematical world in general, and topology in particular, in the period from the 1930’s until the 1970’s. 
The influence of Thurston’s Princeton notes on the development of topology over the next 30 years would be pervasive, not only in its mathematical content, but AMS SUBJECT CLASSIFICATION: 52C26", "title": "" } ]
scidocsrr
288bb9b51e2d6cf4ee6c7fbcffc650e8
Research Note - Gamification of Technology-Mediated Training: Not All Competitions Are the Same
[ { "docid": "f4641f1aa8c2553bb41e55973be19811", "text": "this paper focuses on employees’ e-learning processes during online job training. A new categorization of self-regulated learning strategies, that is, personal versus social learning strategies, is proposed, and measurement scales are developed. the new measures were tested using data collected from employees in a large company. Our approach provides context-relevant insights into online training providers and employees themselves. the results suggest that learners adopt different self-regulated learning strategies resulting in different e-learning outcomes. Furthermore, the use of self-regulated learning strategies is influenced by individual factors such as virtual competence and goal orientation, and job and contextual factors such as intellectual demand and cooperative norms. the findings can (1) help e-learners obtain better learning outcomes through their active use of varied learning strategies, (2) provide useful information for organizations that are currently using or plan to use e-learning 308 WAN, COMPEAu, AND hAggErty for training, and (3) inform software designers to integrate self-regulated learning strategy support in e-learning system design and development. Key WorDs anD phrases: e-learning, job training, learning outcomes, learning processes, self-regulated learning strategies, social cognitive theory. employee training has beCome an effeCtive Way to enhance organizational productivity. It is even more important today given the fast-changing nature of current work practices. research has shown that 50 percent of all employee skills become outdated within three to five years [67]. the cycle is even shorter for information technology (It) professionals because of the high rate of technology innovation. On the one hand, this phenomenon requires organizations to focus more on building internal capabilities by providing different kinds of job preparation and training. On the other hand, it suggests that a growing number of employees are seeking learning opportunities to regularly upgrade their skills and competencies. Consequently, demand is growing for ongoing research to determine optimal training approaches with real performance impact. unlike traditional courses provided by educational institutions that are focused on fundamental and relatively stable knowledge, corporate training programs must be developed within short time frames because their content quickly becomes outdated. Furthermore, for many large organizations, especially multinationals with constantly growing and changing global workforces, the management of training and learning has become increasingly complex. Difficulties arise due to the wide range of courses, the high volume of course materials, the coordination of training among distributed work locations with the potential for duplicated training services, the need to satisfy varied individual learning requests and competency levels, and above all, the need to contain costs while deriving value from training expenditures. the development of information systems (IS) has contributed immensely to solving workplace training problems. E-learning has emerged as a cost-effective way to deliver training at convenient times to a large number of employees in different locations. E-learning, defined as a virtual learning environment in which learners’ interactions with learning materials, peers, and instructors are mediated through Its, has become the fastest-growing form of education [4]. 
the American Society for training and Development found that even with the challenges of the recent economic crisis, u.S. organizations spent $134.07 billion on employee learning and development in 2008 [74], and earlier evidence suggested that close to 40 percent of training was delivered using e-learning technologies [73]. E-learning has been extended from its original application in It skill training to common business skill training, including management, leadership, communication, customer service, quality management, and human resource skills. Despite heavy investments in e-learning technologies, however, recent research suggests that organizations have not received the level of benefit from e-learning that was E-lEArNINg OutCOMES IN OrgANIZAtIONAl SEttINgS 309 originally anticipated [62]. One credible explanation has emerged from educational psychology showing that learners are neither motivated nor well prepared for the new e-learning environment [14]. Early IS research on e-learning focused on the technology design aspects of e-learning but has subsequently broadened to include all aspects of e-learning inputs (participant characteristics, technology design, instructional strategies), processes (psychological processes, learning behaviors), and outcomes (learning outcomes) [4, 55, 76]. however, less IS research has focused on the psychological processes users engage in that improve or limit their e-learning outcomes [76]. In this research, we contribute to the understanding of e-learning processes by bridging two bodies of literature, that is, self-regulated learning (Srl) in educational psychology and e-learning in IS research. More specifically, we focus on two research questions: RQ1: How do learners’ different e‐learning processes (e.g., using different SRL strategies) influence their learning outcomes? RQ2: How is a learner’s use of SRL strategies influenced by individual and con‐ textual factors salient within a business context? to address the first question, we extend prior research on Srl and propose a new conceptualization that distinguishes two types of Srl strategies: personal Srl strategies, such as self‐evaluation and goal setting and planning, for managing personally directed forms of learning; and social Srl strategies, such as seeking peer assistance and social comparison, for managing social-oriented forms of learning. Prior research (e.g., [64, 88]) suggests that the use of Srl strategies in general can improve learning outcomes. We propose to explore, describe, and measure a new type of Srl strategy—social Srl strategy—and to determine if it has an equally important influence on learning outcomes as the more widely studied personal Srl strategy. We theorize that both types of Srl strategies are influential during the learning process and expect they have different effects on e-learning outcomes. to examine the role of Srl strategies in e-learning, we situated the new constructs in a nomological network based on prior research [76]. this led to our second research question, which also deals more specifically with e-learning in business organizations. While research conducted in educational institutions can definitely inform business training practices, differences in the business context such as job requirements and competitive pressures may affect e-learning outcomes. From prior research we selected four antecedent factors that we hypothesize to be important influences on individual use of Srl strategies (both personal and our newly proposed social strategies). 
the first two are individual factors. learners’ goal orientation refers to the individual’s framing of the activity as either a performance or a mastery activity, where the former is associated with flawless performance and the latter is associated with developing capability [28]. Virtual competence, the second factor, reflects the individual’s capability to function in a virtual environment [78]. We also include two contextual factors that are particularly applicable to organizational settings: the intellectual demands of learners’ jobs and the group norms perceived by learners about cooperation among work group members. 310 WAN, COMPEAu, AND hAggErty In summary, this study contributes to e-learning research by focusing on adult learners’ Srl processes in job training contexts. It expands the nomological network of e-learning by identifying and elaborating social Srl strategy as an additional form of Srl strategy that is distinct from personal Srl strategy. We further test how different types of Srl strategies applied by learners during the e-learning process affect three types of e-learning outcomes. Our results suggest that learners using different Srl strategies achieve different learning outcomes and learners’ attributes and contextual factors do matter. theoretical background Social Cognitive theory and Self-regulation learning is the proCess of aCquiring, enhanCing, or moDifying an individual’s knowledge, skills, and values [39]. In this study, we apply social cognitive theory to investigate e-learning processes in organizational settings. Self-regulation is a distinctive feature of social cognitive theory and plays a central role in the theory’s application [56]. It refers to a set of principles and practices by which people monitor their own behaviors and consciously adjust those behaviors in pursuit of personal goals [8]. Srl is thus a proactive way of learning in which people manage their own learning processes. research has shown that self-regulated learners (i.e., individuals who intentionally manage their learning processes) can learn better than non-selfregulated learners in traditional academic and organizational training settings because they view learning as a systematic and controllable process and are willing to take greater responsibility for their learning [30, 64, 88, 92, 93]. the definition of Srl as the degree to which individuals are metacognitively, motivationally, and behaviorally active participants in their own learning process is an integration of previous research on learning strategies, metacognitive monitoring, self-concept perceptions, volitional strategies, and self-control [86, 89]. According to this conceptualization, Srl is a combination of three subprocesses: metacognitive processes, which include planning and organizing during learning; motivational processes, which include self-evaluation and self-consequences at various stages; and behavioral processes, which include sele", "title": "" } ]
[ { "docid": "eea49870d2ddd24a42b8b245edbb1fc0", "text": "In this paper, we propose a novel encoder-decoder neural network model referred to as DeepBinaryMask for video compressive sensing. In video compressive sensing one frame is acquired using a set of coded masks (sensing matrix) from which a number of video frames, equal to the number of coded masks, is reconstructed. The proposed framework is an end-to-end model where the sensing matrix is trained along with the video reconstruction. The encoder maps a video block to compressive measurements by learning the binary elements of the sensing matrix. The decoder is trained to map the measurements from a video patch back to a video block via several hidden layers of a Multi-Layer Perceptron network. The predicted video blocks are stacked together to recover the unknown video sequence. The reconstruction performance is found to improve when using the trained sensing mask from the network as compared to other mask designs such as random, across a wide variety of compressive sensing reconstruction algorithms. Finally, our analysis and discussion offer insights into understanding the characteristics of the trained mask designs that lead to the improved reconstruction quality.", "title": "" }, { "docid": "e769f52b6e10ea1cf218deb8c95f4803", "text": "To facilitate the task of reading and searching information, it became necessary to find a way to reduce the size of documents without affecting their content. The solution lies in automatic text summarization systems, which allow producing, from an input text, a smaller and more condensed text without losing relevant data or the meaning conveyed by the original text. Research in this area has lately made strong progress, especially for the English language. However, research on Arabic text summarization is still scarce and in its early stages. In this paper we present a literature review of recent techniques and works in the automatic text summarization field, and then focus our discussion on works concerning automatic text summarization in particular languages. We also discuss some of the main problems that affect the quality of automatic text summarization systems.", "title": "" }, { "docid": "22a5c41441519d259d3be70a9413f1f5", "text": "In this paper, a 3-degrees-of-freedom parallel manipulator developed by Tsai and Stamper known as the Maryland manipulator is considered. In order to provide dynamic analysis, three different sequential trajectories are taken into account. Two different control approaches, the classical proportional-integral-derivative (PID) and fractional-order PID control, are used to improve the tracking performance of the examined manipulator. Parameters of the controllers are determined by using a pattern search algorithm and mathematical methods for the classical PID and fractional-order PID controllers, respectively. Design procedures for both controllers are given in detail. Finally, the corresponding results are compared. Performance analysis for both of the proposed controllers is confirmed by simulation results. It is observed that not only transient but also steady-state error values have been reduced with the aid of the PIλDμ controller for tracking control purposes. According to the obtained results, the fractional-order PIλDμ controller is more powerful than the optimally tuned PID for the Maryland manipulator tracking control.
The main contribution of this paper is to determine the control action with the aid of the fractional-order PI λDμ controller different from previously defined controller structures. The determination of correct and accurate control action has great importance when high speed, high acceleration, and high accuracy needed for the trajectory tracking control of parallel mechanisms present unique challenges.", "title": "" }, { "docid": "1c058d6a648b2190500340f762eeff78", "text": "An ever-increasing number of computer vision and image/video processing challenges are being approached using deep convolutional neural networks, obtaining state-of-the-art results in object recognition and detection, semantic segmentation, action recognition, optical flow, and super resolution. Hardware acceleration of these algorithms is essential to adopt these improvements in embedded and mobile computer vision systems. We present a new architecture, design, and implementation, as well as the first reported silicon measurements of such an accelerator, outperforming previous work in terms of power, area, and I/O efficiency. The manufactured device provides up to 196 GOp/s on 3.09 $\\text {mm}^{2}$ of silicon in UMC 65-nm technology and can achieve a power efficiency of 803 GOp/s/W. The massively reduced bandwidth requirements make it the first architecture scalable to TOp/s performance.", "title": "" }, { "docid": "9a842e6c42c1fdd6af3885370d50005f", "text": "Text classification is a fundamental problem in natural language processing. As a popular deep learning model, convolutional neural network(CNN) has demonstrated great success in this task. However, most existing CNN models apply convolution filters of fixed window size, thereby unable to learn variable n-gram features flexibly. In this paper, we present a densely connected CNN with multi-scale feature attention for text classification. The dense connections build short-cut paths between upstream and downstream convolutional blocks, which enable the model to compose features of larger scale from those of smaller scale, and thus produce variable n-gram features. Furthermore, a multi-scale feature attention is developed to adaptively select multi-scale features for classification. Extensive experiments demonstrate that our model obtains competitive performance against state-of-the-art baselines on six benchmark datasets. Attention visualization further reveals the model’s ability to select proper n-gram features for text classification. Our code is available at: https://github.com/wangshy31/DenselyConnected-CNN-with-Multiscale-FeatureAttention.git.", "title": "" }, { "docid": "db9887ea5f96cd4439ca95ad3419407c", "text": "Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photo-consistency measure considers the variance of different views, effectively enforcing point-consistency, i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. 
We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency, which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real world scenarios using the consumer Lytro and Lytro Illum light field cameras.", "title": "" }, { "docid": "26c259c7b6964483d13a85938a11cf53", "text": "In Natural Language Processing (NLP), research results from software engineering and software technology have often been neglected. This paper describes some factors that add complexity to the task of engineering reusable NLP systems (beyond conventional software systems). Current work in the area of design patterns and composition languages is described and claimed relevant for natural language processing. The benefits of NLP componentware and barriers to reuse are outlined, and the dichotomies “system versus experiment” and “toolkit versus framework” are discussed. It is argued that in order to live up to its name language engineering must not neglect component quality and architectural evaluation when reporting new NLP research.", "title": "" }, { "docid": "ef1f5eaa9c6f38bbe791e512a7d89dab", "text": "Lexical-semantic verb classifications have proved useful in supporting various natural language processing (NLP) tasks. The largest and the most widely deployed classification in English is Levin’s (1993) taxonomy of verbs and their classes. While this resource is attractive in being extensive enough for some NLP use, it is not comprehensive. In this paper, we present a substantial extension to Levin’s taxonomy which incorporates 57 novel classes for verbs not covered (comprehensively) by Levin. We also introduce 106 novel diathesis alternations, created as a side product of constructing the new classes. We demonstrate the utility of our novel classes by using them to support automatic subcategorization acquisition and show that the resulting extended classification has extensive coverage over the English verb lexicon.", "title": "" }, { "docid": "7cff04976bf78c5d8a1b4338b2107482", "text": "Classifiers trained on given databases perform poorly when tested on data acquired in different settings. This is explained in domain adaptation through a shift among distributions of the source and target domains. Attempts to align them have traditionally resulted in works reducing the domain shift by introducing appropriate loss terms, measuring the discrepancies between source and target distributions, in the objective function. Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one. Opposite to previous works which define a priori in which layers adaptation should be performed, our method is able to automatically learn the degree of feature alignment required at different levels of the deep network. 
Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.", "title": "" }, { "docid": "13db8fe917d303f942fcfb544440ec24", "text": "In many types of information systems, users face an implicit tradeoff between disclosing personal information and receiving benefits, such as discounts by an electronic commerce service that requires users to divulge some personal information. While these benefits are relatively measurable, the value of privacy involved in disclosing the information is much less tangible, making it hard to design and evaluate information systems that manage personal information. Meanwhile, existing methods to assess and measure the value of privacy, such as self-reported questionnaires, are notoriously unrelated to real-world behavior. To overcome this obstacle, we propose a methodology called VOPE (Value of Privacy Estimator), which relies on behavioral economics' Prospect Theory (Kahneman & Tversky, 1979) and valuates people's privacy preferences in information disclosure scenarios. VOPE is based on an iterative and responsive methodology in which users take or leave a transaction that includes a component of information disclosure. To evaluate the method, we conduct an empirical experiment (n = 195), estimating people's privacy valuations in electronic commerce transactions. We report on the convergence of estimations and validate our results by comparing the values to theoretical projections of existing results (Tsai, Egelman, Cranor, & Acquisti, 2011), and to another independent experiment that required participants to rank the sensitivity of information disclosure transactions. Finally, we discuss how information systems designers and regulators can use VOPE to create and to oversee systems that balance privacy and utility. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "92008a84a80924ec8c0ad1538da2e893", "text": "Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a single GPU is too slow and training on distributed GPUs can be inefficient, due to data movement overheads, GPU stalls, and limited GPU memory. This paper describes a new parameter server, called GeePS, that supports scalable deep learning across GPUs distributed among multiple machines, overcoming these obstacles. We show that GeePS enables a state-of-the-art single-node GPU implementation to scale well, such as to 13 times the number of training images processed per second on 16 machines (relative to the original optimized single-node code). Moreover, GeePS achieves a higher training throughput with just four GPU machines than that a state-of-the-art CPU-only system achieves with 108 machines.", "title": "" }, { "docid": "4f81901c2269cd4561dd04f59a04a473", "text": "The advent of powerful acid-suppressive drugs, such as proton pump inhibitors (PPIs), has revolutionized the management of acid-related diseases and has minimized the role of surgery. The major and universally recognized indications for their use are represented by treatment of gastro-esophageal reflux disease, eradication of Helicobacter pylori infection in combination with antibiotics, therapy of H. 
pylori-negative peptic ulcers, healing and prophylaxis of non-steroidal anti-inflammatory drug-associated gastric ulcers and control of several acid hypersecretory conditions. However, in the last decade, we have witnessed an almost continuous growth of their use and this phenomenon cannot be only explained by the simple substitution of the previous H2-receptor antagonists, but also by an inappropriate prescription of these drugs. This endless increase of PPI utilization has created an important problem for many regulatory authorities in terms of increased costs and greater potential risk of adverse events. The main reasons for this overuse of PPIs are the prevention of gastro-duodenal ulcers in low-risk patients or the stress ulcer prophylaxis in non-intensive care units, steroid therapy alone, anticoagulant treatment without risk factors for gastro-duodenal injury, the overtreatment of functional dyspepsia and a wrong diagnosis of acid-related disorder. The cost for this inappropriate use of PPIs has become alarming and requires to be controlled. We believe that gastroenterologists together with the scientific societies and the regulatory authorities should plan educational initiatives to guide both primary care physicians and specialists to the correct use of PPIs in their daily clinical practice, according to the worldwide published guidelines.", "title": "" }, { "docid": "f5f56d680fbecb94a08d9b8e5925228f", "text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.", "title": "" }, { "docid": "497d6e0bf6f582924745c7aa192579e7", "text": "The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with–and often the small scale of the robot diminishes the influence of factors which would have affected larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable, fully open-source platform in terms of both hardware and software. Being almost 135 cm tall and only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but also easy and safe to operate and does not require a gantry when doing so. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. 
We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.", "title": "" }, { "docid": "54af3c39dba9aafd5b638d284fd04345", "text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).", "title": "" }, { "docid": "318a4af201ed3563443dcbe89c90b6b4", "text": "Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing as its influence is likely to spread the complete IT landscape. Security is one of the major concerns that is of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks are becoming more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand identify and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not be necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems. Keywords—Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation", "title": "" }, { "docid": "f1cfb30b328725121ed232381d43ac3a", "text": "High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. 
To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed tradeoff.1", "title": "" }, { "docid": "47faebfa7d65ebf277e57436cf7c2ca4", "text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable", "title": "" }, { "docid": "0edc89fbf770bbab2fb4d882a589c161", "text": "A calculus is developed in this paper (Part I) and the sequel (Part 11) for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory we develop is different from traditional approaches to analyzing delay because the model we use to describe the entry of data into the network is nonprobabilistic: We suppose that the data stream entered intq the network by any given user satisfies “burstiness constraints.” A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. 
Each type of network element is analyzed by assuming that the traffic entering it satisfies burstiness constraints. Under this assumption bounds are obtained on delay and buffering requirements for the network element, burstiness constraints satisfied by the traffic that exits the element are derived. Index Terms -Queueing networks, burstiness, flow control, packet switching, high speed networks.", "title": "" }, { "docid": "8d7a7bc2b186d819b36a0a8a8ba70e39", "text": "Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use Graph Cuts or Belief Propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF. It is unknown whether to attribute the responsibility for differences in performance to the MRF or the inference algorithm. We address this through controlled experiments by comparing the Belief Propagation algorithm and the Graph Cuts algorithm on the same MRF’s, which have been created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by Graph Cuts have a lower energy than those produced with Belief Propagation, but this does not necessarily lead to increased performance relative to the ground-truth.", "title": "" } ]
scidocsrr
a67dd6f5d3c53ff3f4d03be551c4df47
Self-regulated learning strategies predict learner behavior and goal attainment in Massive Open Online Courses
[ { "docid": "8851824732fff7b160c7479b41cc423f", "text": "The current generation of Massive Open Online Courses (MOOCs) attract a diverse student audience from all age groups and over 196 countries around the world. Researchers, educators, and the general public have recently become interested in how the learning experience in MOOCs differs from that in traditional courses. A major component of the learning experience is how students navigate through course content.\n This paper presents an empirical study of how students navigate through MOOCs, and is, to our knowledge, the first to investigate how navigation strategies differ by demographics such as age and country of origin. We performed data analysis on the activities of 140,546 students in four edX MOOCs and found that certificate earners skip on average 22% of the course content, that they frequently employ non-linear navigation by jumping backward to earlier lecture sequences, and that older students and those from countries with lower student-teacher ratios are more comprehensive and non-linear when navigating through the course.\n From these findings, we suggest design recommendations such as for MOOC platforms to develop more detailed forms of certification that incentivize students to deeply engage with the content rather than just doing the minimum necessary to earn a passing grade. Finally, to enable other researchers to reproduce and build upon our findings, we have made our data set and analysis scripts publicly available.", "title": "" }, { "docid": "090cc7f7e5dbf925e0ded1ca5514c76e", "text": "A general framework is presented to help understand the relationship between motivation and self-regulated learning. According to the framework, self-regulated learning can be facilitated by the adoption of mastery and relative ability goals and hindered by the adoption of extrinsic goals. In addition, positive self-e$cacy and task value beliefs can promote selfregulated behavior. Self-regulated learning is de\"ned as the strategies that students use to regulate their cognition (i.e., use of various cognitive and metacognitive strategies) as well as the use of resource management strategies that students use to control their learning. ( 1999 Published by Elsevier Science Ltd. All rights reserved. Recent models of self-regulated learning stress the importance of integrating both motivational and cognitive components of learning (Garcia & Pintrich, 1994; Pintrich, 1994; Pintrich & Schrauben, 1992). The purpose of this chapter is to describe how di!erent motivational beliefs may help to promote and sustain di!erent aspects of self-regulated learning. In order to accomplish this purpose, a model of self-regulated learning is brie#y sketched and three general motivational beliefs related to a model of self-regulated learning in our research program at the University of Michigan are discussed. Finally, suggestions for future research are o!ered. 1. A model of self-regulated learning Self-regulated learning o!ers an important perspective on academic learning in current research in educational psychology (Schunk & Zimmerman, 1994). Although there are a number of di!erent models derived from a variety of di!erent theoretical perspectives (see Schunk & Zimmerman, 1994; Zimmerman & Schunk, 1989), most models assume that an important aspect of self-regulated learning is the 0883-0355/99/$ see front matter ( 1999 Published by Elsevier Science Ltd. All rights reserved. 
students' use of various cognitive and metacognitive strategies to control and regulate their learning. The model of self-regulated learning described here includes three general categories of strategies: (1) cognitive learning strategies, (2) self-regulatory strategies to control cognition, and (3) resource management strategies (see Garcia & Pintrich, 1994; Pintrich, 1988a,b; Pintrich, 1989; Pintrich & De Groot, 1990; Pintrich & Garcia, 1991; Pintrich, Smith, Garcia, & McKeachie, 1993). 1.1. Cognitive learning strategies In terms of cognitive learning strategies, following the work of Weinstein and Mayer (1986), rehearsal, elaboration, and organizational strategies were identified as important cognitive strategies related to academic performance in the classroom (McKeachie, Pintrich, Lin & Smith, 1986; Pintrich, 1989; Pintrich & De Groot, 1990). These strategies can be applied to simple memory tasks (e.g., recall of information, words, or lists) or to more complex tasks that require comprehension of the information (e.g., understanding a piece of text or a lecture) (Weinstein & Mayer, 1986). Rehearsal strategies involve the recitation of items to be learned or the saying of words aloud as one reads a piece of text. Highlighting or underlining text in a rather passive and unreflective manner also can be more like a rehearsal strategy than an elaborative strategy. These rehearsal strategies are assumed to help the student attend to and select important information from lists or texts and keep this information active in working memory, albeit they may not reflect a very deep level of processing. Elaboration strategies include paraphrasing or summarizing the material to be learned, creating analogies, generative note-taking (where the student actually reorganizes and connects ideas in their notes in contrast to passive, linear note-taking), explaining the ideas in the material to be learned to someone else, and question asking and answering (Weinstein & Mayer, 1986). The other general type of deeper processing strategy, organizational, includes behaviors such as selecting the main idea from text, outlining the text or material to be learned, and using a variety of specific techniques for selecting and organizing the ideas in the material (e.g., sketching a network or map of the important ideas, identifying the prose or expository structures of texts). (See Weinstein & Mayer, 1986.) All of these organizational strategies have been shown to result in a deeper understanding of the material to be learned in contrast to rehearsal strategies (Weinstein & Mayer, 1986). 1.2. Metacognitive and self-regulatory strategies Besides cognitive strategies, students' metacognitive knowledge and use of metacognitive strategies can have an important influence upon their achievement. There are two general aspects of metacognition, knowledge about cognition and self-regulation of cognition (Brown, Bransford, Ferrara & Campione, 1983; Flavell, 1979). Some of the theoretical and empirical confusion over the status of metacognition as a psychological construct has been fostered by the confounding of issues of metacognitive knowledge and awareness with metacognitive control and self-regulation", "title": "" } ]
[ { "docid": "289502f02cf7ef236bb7752b4ca80601", "text": "We examined variation in leaf size and specific leaf area (SLA) in relation to the distribution of 22 chaparral shrub species on small-scale gradients of aspect and elevation. Potential incident solar radiation (insolation) was estimated from a geographic information system to quantify microclimate affinities of these species across north- and south-facing slopes. At the community level, leaf size and SLA both declined with increasing insolation, based on average trait values for the species found in plots along the gradient. However, leaf size and SLA were not significantly correlated across species, suggesting that these two traits are decoupled and associated with different aspects of performance along this environmental gradient. For individual species, SLA was negatively correlated with species distributions along the insolation gradient, and was significantly lower in evergreen versus deciduous species. Leaf size exhibited a negative but non-significant trend in relation to insolation distribution of individual species. At the community level, variance in leaf size increased with increasing insolation. For individual species, there was a greater range of leaf size on south-facing slopes, while there was an absence of small-leaved species on north-facing slopes. These results demonstrate that analyses of plant functional traits along environmental gradients based on community level averages may obscure important aspects of trait variation and distribution among the constituent species.", "title": "" }, { "docid": "04d190daef0abb78f3c4d85e23297fbc", "text": "Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. Accordingly, a range of additional methods are needed to yield good results (Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.", "title": "" }, { "docid": "dffb192cda5fd68fbea2eb15a6b00434", "text": "For AI systems to reason about real world situations, they need to recognize which processes are at play and which entities play key roles in them. Our goal is to extract this kind of rolebased knowledge about processes, from multiple sentence-level descriptions. This knowledge is hard to acquire; while semantic role labeling (SRL) systems can extract sentence level role information about individual mentions of a process, their results are often noisy and they do not attempt create a globally consistent characterization of a process. To overcome this, we extend standard within sentence joint inference to inference across multiple sentences. This cross sentence inference promotes role assignments that are compatible across different descriptions of the same process. When formulated as an Integer Linear Program, this leads to improvements over within-sentence inference by nearly 3% in F1. 
The resulting role-based knowledge is of high quality (with a F1 of nearly 82).", "title": "" }, { "docid": "f78e430994e9eeccd034df76d2b5316a", "text": "An externally leveraged circular resonant piezoelectric actuator with haptic natural frequency and fast response time was developed within the volume of 10 mm diameter and 3.4 mm thickness for application in mobile phones. An efficient displacement-amplifying mechanism was developed using a piezoelectric bimorph, a lever system, and a mass-spring system. The proposed displacement-amplifying mechanism utilizes both internally and externally leveraged structures. The former generates bending by means of bending deformation of the piezoelectric bimorph, and the latter transforms the bending to radial displacement of the lever system, which is transformed to a large axial displacement of the spring. The piezoelectric bimorph, lever system, and spring were designed to maximize static displacement and the mass-spring system was designed to have a haptic natural frequency. The static displacement, natural frequency, maximum output displacement, and response time of the resonant piezoelectric actuator were calculated by means of finite-element analyses. The proposed resonant piezoelectric actuator was prototyped and the simulated results were verified experimentally. The prototyped piezoelectric actuator generated the maximum output displacement of 290 μm at the haptic natural frequency of 242 Hz. Owing to the proposed efficient displacement-amplifying mechanism, the proposed resonant piezoelectric actuator had the fast response time of 14 ms, approximately one-fifth of a conventional resonant piezoelectric actuator of the same size.", "title": "" }, { "docid": "caf6537362b79cad5f631c0227e7d141", "text": "In this paper, we present POSTECH Situation-Based Dialogue Manager (POSSDM) for a spoken dialogue system using both example- and rule-based dialogue management techniques for effective generation of appropriate system responses. A spoken dialogue system should generate cooperative responses to smoothly control dialogue flow with the users. We introduce a new dialogue management technique incorporating dialogue examples and situation-based rules for the electronic program guide (EPG) domain. For the system response generation, we automatically construct and index a dialogue example database from the dialogue corpus, and the proper system response is determined by retrieving the best dialogue example for the current dialogue situation, which includes a current user utterance, dialogue act, semantic frame and discourse history. When the dialogue corpus is not enough to cover the domain, we also apply manually constructed situation-based rules mainly for meta-level dialogue management. Experiments show that our example-based dialogue modeling is very useful and effective in domain-oriented dialogue processing", "title": "" }, { "docid": "23ffed5fcb708ad4f95a70f5b0fe4793", "text": "INTRODUCTION\nHead tremor is a common feature in cervical dystonia (CD) and often less responsive to botulinum neurotoxin (BoNT) treatment than dystonic posturing. Ultrasound allows accurate targeting of deeper neck muscles.\n\n\nMETHODS\nIn 35 CD patients with dystonic head tremor the depth and thickness of the splenius capitis (SPL), semispinalis capitis and obliquus capitis inferior muscles (OCI) were assessed using ultrasound. 
Ultrasound guided EMG recordings were performed from the SPL and OCI.\n\n\nRESULTS\nBurst-like tremor activity was present in both OCI in 25 and in one in 10 patients. In 18 patients, tremor activity was present in one SPL and in 2 in both SPL. Depth and thickness of OCI, SPL and semispinalis capitis muscles were very variable.\n\n\nCONCLUSION\nMuscular activity underlying tremulous CD is most commonly present in OCI. Due to the variability of muscle thickness, we suggest ultrasound guided BoNT injections into OCI.", "title": "" }, { "docid": "dfb3af39b0cf47540c1eda10eb4b35d9", "text": "Activation likelihood estimation (ALE) meta-analyses were used to examine the neural correlates of prediction error in reinforcement learning. The findings are interpreted in the light of current computational models of learning and action selection. In this context, particular consideration is given to the comparison of activation patterns from studies using instrumental and Pavlovian conditioning, and where reinforcement involved rewarding or punishing feedback. The striatum was the key brain area encoding for prediction error, with activity encompassing dorsal and ventral regions for instrumental and Pavlovian reinforcement alike, a finding which challenges the functional separation of the striatum into a dorsal 'actor' and a ventral 'critic'. Prediction error activity was further observed in diverse areas of predominantly anterior cerebral cortex including medial prefrontal cortex and anterior cingulate cortex. Distinct patterns of prediction error activity were found for studies using rewarding and aversive reinforcers; reward prediction errors were observed primarily in the striatum while aversive prediction errors were found more widely including insula and habenula.", "title": "" }, { "docid": "92c6e4ec2497c467eaa31546e2e2be0e", "text": "The subjective sense of future time plays an essential role in human motivation. Gradually, time left becomes a better predictor than chronological age for a range of cognitive, emotional, and motivational variables. Socioemotional selectivity theory maintains that constraints on time horizons shift motivational priorities in such a way that the regulation of emotional states becomes more important than other types of goals. This motivational shift occurs with age but also appears in other contexts (for example, geographical relocations, illnesses, and war) that limit subjective future time.", "title": "" }, { "docid": "f5960b6997d1b481353b50a15e80c844", "text": "In this paper we introduce a dynamic GUI test generator that incorporates ant colony optimization. We created two ant systems for generating tests. Our first ant system implements the normal ant colony optimization algorithm in order to traverse the GUI and find good event sequences. Our second ant system, called AntQ, implements the antq algorithm that incorporates Q-Learning, which is a behavioral reinforcement learning technique. Both systems use the same fitness function in order to determine good paths through the GUI. Our fitness function looks at the amount of change in the GUI state that each event causes. Events that have a larger impact on the GUI state will be favored in future tests. We compared our two ant systems to random selection. 
We ran experiments on six subject applications and report on the code coverage and fault finding abilities of all three algorithms.", "title": "" }, { "docid": "3f53b5e2143364506c4f2de4c8d98979", "text": "In this paper, a different method for designing an ultra-wideband (UWB) microstrip monopole antenna with dual band-notched characteristic has been presented. The main novelty of the proposed structure is the using of protruded strips as resonators to design an UWB antenna with dual band-stop property. In the proposed design, by cutting the rectangular slot with a pair of protruded T-shaped strips in the ground plane, additional resonance is excited and much wider impedance bandwidth can be produced. To generate a single band-notched function, we convert the square radiating patch to the square-ring structure with a pair of protruded step-shaped strips. By cutting a rectangular slot with the protruded Γ-shaped strip at the feed line, a dual band-notched function is achieved. The measured results reveal that the presented dual band-notched antenna offers a very wide bandwidth from 2.8 to 11.6 GHz, with two notched bands, around of 3.3-3.7 GHz and 5-6 GHz covering all WiMAX and WLAN bands.", "title": "" }, { "docid": "822a1487cbdeba5b8b3b35dd3593c4eb", "text": "Microsoft's series of Windows operating systems represents some of the most commonly encountered technologies in the field of digital forensics. It is then fair to say that Microsoft's design decisions greatly affect forensic efforts. Because of this, it is exceptionally important for the forensics community to keep abreast of new developments in the Windows product line. With each new release, the Windows operating system may present investigators with significant new artifacts to explore. Described by some as the heart of the Windows operating system, the Windows registry has been proven to contain many of these forensically interesting artifacts. Given the weight of Microsoft's influence on digital forensics and the role of the registry within Windows operating systems, this thesis delves into the Windows 8 registry in the hopes of developing new Windows forensics utilities.", "title": "" }, { "docid": "3378680ac3eddfde464e1be5ee6986e6", "text": "Boundaries between formal and informal learning settings are shaped by influences beyond learners’ control. This can lead to the proscription of some familiar technologies that learners may like to use from some learning settings. This contested demarcation is not well documented. In this paper, we introduce the term ‘digital dissonance’ to describe this tension with respect to learners’ appropriation of Web 2.0 technologies in formal contexts. We present the results of a study that explores learners’ inand out-of-school use of Web 2.0 and related technologies. The study comprises two data sources: a questionnaire and a mapping activity. The contexts within which learners felt their technologies were appropriate or able to be used are also explored. Results of the study show that a sense of ‘digital dissonance’ occurs around learners’ experience of Web 2.0 activity in and out of school. Many learners routinely cross institutionally demarcated boundaries, but the implications of this activity are not well understood by institutions or indeed by learners themselves. 
More needs to be understood about the transferability of Web 2.0 skill sets and ways in which these can be used to support formal learning.", "title": "" }, { "docid": "fdd4c5fc773aa001da927ab3776559ae", "text": "We treated a 65-year-old Japanese man with a giant penile lymphedema due to chronic penile strangulation with a rubber band. He was referred to our hospital with progressive penile swelling that had developed over a period of 2 years from chronic use of a rubber band placed around the penile base for prevention of urinary incontinence. Under a diagnosis of giant penile lymphedema, we performed resection of abnormal penile skin weighing 4.8 kg, followed by a penile plasty procedure. To the best of our knowledge, this is only the seventh report of such a case worldwide, with the present giant penile lymphedema the most reported.", "title": "" }, { "docid": "237ae26179780269fd814f0e2406f2c0", "text": "There is a growing trend of applying machine learning techniques in time series prediction tasks. In the meanwhile, the classic autoregression models has been widely used in time series prediction for decades. In this paper, experiments are conducted to compare the performances of multiple popular machine learning algorithms including two major types of deep learning approaches, with the classic autoregression with exogenous inputs (ARX) model on this particular Blood Glucose Level Prediction (BGLP) Challenge. We tried two types of methods to perform multi-step prediction: recursive method and direct method. The recursive method needs future input feature information. The results show there is no significant difference between the machine learning models and the classic ARX model. In fact, the ARX model achieved the lowest average Root Mean Square Error (RMSE) across subjects in the test data when recursive method was used for multi-step prediction.", "title": "" }, { "docid": "6aab23ee181e8db06cc4ca3f7f7367be", "text": "In their original article, Ericsson, Krampe, and Tesch-Römer (1993) reviewed the evidence concerning the conditions of optimal learning and found that individualized practice with training tasks (selected by a supervising teacher) with a clear performance goal and immediate informative feedback was associated with marked improvement. We found that this type of deliberate practice was prevalent when advanced musicians practice alone and found its accumulated duration related to attained music performance. In contrast, Macnamara, Moreau, and Hambrick's (2016, this issue) main meta-analysis examines the use of the term deliberate practice to refer to a much broader and less defined concept including virtually any type of sport-specific activity, such as group activities, watching games on television, and even play and competitions. Summing up every hour of any type of practice during an individual's career implies that the impact of all types of practice activity on performance is equal-an assumption that I show is inconsistent with the evidence. 
Future research should collect objective measures of representative performance with a longitudinal description of all the changes in different aspects of the performance so that any proximal conditions of deliberate practice related to effective improvements can be identified and analyzed experimentally.", "title": "" }, { "docid": "a677c1d46b9d2ad2588841eea8e3856c", "text": "In evolutionary multiobjective optimization, maintaining a good balance between convergence and diversity is particularly crucial to the performance of the evolutionary algorithms (EAs). In addition, it becomes increasingly important to incorporate user preferences because it will be less likely to achieve a representative subset of the Pareto-optimal solutions using a limited population size as the number of objectives increases. This paper proposes a reference vector-guided EA for many-objective optimization. The reference vectors can be used not only to decompose the original multiobjective optimization problem into a number of single-objective subproblems, but also to elucidate user preferences to target a preferred subset of the whole Pareto front (PF). In the proposed algorithm, a scalarization approach, termed angle-penalized distance, is adopted to balance convergence and diversity of the solutions in the high-dimensional objective space. An adaptation strategy is proposed to dynamically adjust the distribution of the reference vectors according to the scales of the objective functions. Our experimental results on a variety of benchmark test problems show that the proposed algorithm is highly competitive in comparison with five state-of-the-art EAs for many-objective optimization. In addition, we show that reference vectors are effective and cost-efficient for preference articulation, which is particularly desirable for many-objective optimization. Furthermore, a reference vector regeneration strategy is proposed for handling irregular PFs. Finally, the proposed algorithm is extended for solving constrained many-objective optimization problems.", "title": "" }, { "docid": "38ec5d33e0a24c9dc16854086bb069d7", "text": "The management of the medium and small scale industries feel burden to treat waste if the cost involvement is high. Hence there is a board scope for cheaper and compact unit processes or ideal solutions for such issues. Rotating biological contactor is most popular due to its simplicity, low energy less land requirement. The rotating biological contactors are fixed film moving bed aerobic treatment processes, which able to sustain shock loadings. Unlike activated sludge processes (ASP), trickling filter etc. Rotating biological contactor does not require recirculation of secondary sludge and also hydraulic retention time is low. This review paper focuses on works done by various investigators at different operating parameters using various kinds of industrial wastewater.", "title": "" }, { "docid": "c0f5abdba3aa843f4419f59c92ed14ea", "text": "ROC and DET curves are often used in the field of person authentication to assess the quality of a model or even to compare several models. We argue in this paper that this measure can be misleading as it compares performance measures that cannot be reached simultaneously by all systems. We propose instead new curves, called Expected Performance Curves (EPC). These curves enable the comparison between several systems according to a criterion, decided by the application, which is used to set thresholds according to a separate validation set. 
Free software is available to compute these curves. A real case study is used throughout the paper to illustrate it. Finally, note that while this study was done on an authentication problem, it also applies to most 2-class classification tasks.", "title": "" }, { "docid": "b229aa8b39b3df3fec941ce4791a2fe9", "text": "Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation begun to generate plausible images using datasets of specific categories like birds and flowers. We've even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-categories images using MSCOCO than the state-of-the-art. We also demonstrate that I2T2I can achieve transfer learning by using a pre-trained image captioning module to generate human images on the MPII Human Pose dataset (MHP) without using sentence annotation.", "title": "" } ]
scidocsrr
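Note: every record in this dump follows the same schema — query_id, query, positive_passages, negative_passages (each passage a { docid, text, title } object), and subset. The sketch below is only an illustration of how rows with this shape might be loaded and inspected; it assumes the rows are stored as JSON Lines, and the file name and helper name are hypothetical rather than part of the dataset.

```python
# Illustrative sketch only: assumes one JSON object per line (JSON Lines).
# The file name "scidocsrr.jsonl" and the helper name are hypothetical.
import json

def iter_rows(path):
    """Yield one retrieval row (dict) per non-empty line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

for row in iter_rows("scidocsrr.jsonl"):
    # Field names mirror the records shown above.
    print(row["query_id"], "-", row["query"])
    print("  positives:", len(row["positive_passages"]),
          "| negatives:", len(row["negative_passages"]),
          "| subset:", row["subset"])
```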
4895de258fec5e9af707cd6437d58bc6
Media Effects: Theory and Research.
[ { "docid": "b8800cebbb3e94de68b45ab7d4a5a5ab", "text": "Objective: To explore effects of the technological interface on reading comprehension in a Norwegian school context. Participants: 72 tenth graders from two different primary schools in Norway. Method: The students were randomized into two groups, where the first group read two texts (1400–2000 words) in print, and the other group read the same texts as PDF on a computer screen. In addition pretests in reading comprehension, word reading and vocabulary were administered. A multiple regression analysis was carried out to investigate to what extent reading modality would influence the students’ scores on the reading comprehension measure. Conclusion: Main findings show that students who read texts in print scored significantly better on the reading comprehension test than students who read the texts digitally. Implications of these findings for policymaking and test development are discussed. 2012 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "11af505fd6448ec1e232c02a838a70bf", "text": "A great interest is recently paid to Electric Vehicles (EV) and their integration into electricity grids. EV can potentially play an important role in power system operation, however, the EV charging infrastructures have been only partly defined, considering them as limited to individual charging points, randomly distributed into the networks. This paper addresses the planning of public central charging stations (CCS) that can be integrated in low-voltage (LV) networks for EV parallel charging. The concepts of AC and DC architectures of CCS are proposed and a comparison is given on their investment cost. Investigation on location and size of CCS is conducted, analyzing two LV grids of different capacity. The results enlighten that a public CCS should be preferably located in the range of 100 m from the transformer. The AC charging levels of 11 kW and 22 kW have the highest potential in LV grids. The option of DC fast-charging is only possible in the larger capacity grids, withstanding the parallel charge of one or two vehicles.", "title": "" }, { "docid": "b922460e2a1d8b6dff6cc1c8c8c459ed", "text": "This paper presents a new dynamic latched comparator which shows lower input-referred latch offset voltage and higher load drivability than the conventional dynamic latched comparators. With two additional inverters inserted between the input- and output-stage of the conventional double-tail dynamic comparator, the gain preceding the regenerative latch stage was improved and the complementary version of the output-latch stage, which has bigger output drive current capability at the same area, was implemented. As a result, the circuit shows up to 25% less input-referred latch offset voltage and 44% less sensitivity of the delay versus the input voltage difference (delay/log(ΔVin)), which is about 17.2ps/decade, than the conventional double-tail latched comparator at approximately the same area and power consumption.", "title": "" }, { "docid": "137c65365ac3a9bb84a17f4a5bb8cfda", "text": "One-hot CNN (convolutional neural network) has been shown to be effective for text categorization (Johnson & Zhang, 2015a;b). We view it as a special case of a general framework which jointly trains a linear model with a non-linear feature generator consisting of ‘text region embedding + pooling’. Under this framework, we explore a more sophisticated region embedding method using Long Short-Term Memory (LSTM). LSTM can embed text regions of variable (and possibly large) sizes, whereas the region size needs to be fixed in a CNN. We seek effective and efficient use of LSTM for this purpose in the supervised and semi-supervised settings. The best results were obtained by combining region embeddings in the form of LSTM and convolution layers trained on unlabeled data. The results indicate that on this task, embeddings of text regions, which can convey complex concepts, are more useful than embeddings of single words in isolation. We report performances exceeding the previous best results on four benchmark datasets.", "title": "" }, { "docid": "8fc34b4785df3aa898e496e572591cce", "text": "The demand for ubiquitous information processing over the Web has called for the development of context-aware recommender systems capable of dealing with the problems of information overload and information filtering. 
Contemporary recommender systems harness context-awareness with the personalization to offer the most accurate recommendations about different products, services, and resources. However, such systems come across the issues, such as sparsity, cold start, and scalability that lead to imprecise recommendations. Computational Intelligence (CI) techniques not only improve recommendation accuracy but also substantially mitigate the aforementioned issues. Large numbers of context-aware recommender systems are based on the CI techniques, such as: (a) fuzzy sets, (b) artificial neural networks, (c) evolutionary computing, (d) swarm intelligence, and (e) artificial immune systems. This survey aims to encompass the state-of-the-art context-aware recommender systems based on the CI techniques. Taxonomy of the CI techniques is presented and challenges particular to the context-aware recommender systems are also discussed. Moreover, the ability of each of the CI techniques to deal with the aforesaid challenges is also highlighted. Furthermore, the strengths and weaknesses of each of the CI techniques used in context-aware recommender systems are discussed and a comparison of the techniques is also presented.", "title": "" }, { "docid": "0618529a20e00174369a05077294de5b", "text": "In this paper we present a case study of the steps leading up to the extraction of the spam bot payload found within a backdoor rootkit known as Backdoor.Rustock.B or Spam-Mailbot.c. Following the extraction of the spam module we focus our analysis on the steps necessary to decrypt the communications between the command and control server and infected hosts. Part of the discussion involves a method to extract the encryption key from within the malware binary and use that to decrypt the communications. The result is a better understanding of an advanced botnet communications scheme.", "title": "" }, { "docid": "3e488dee95be442e63d5e5edbb1e77b1", "text": "We propose a comparison between various supervised machine learning methods to predict and detect humor in dialogues. We retrieve our humorous dialogues from a very popular TV sitcom: “The Big Bang Theory”. We build a corpus where punchlines are annotated using the canned laughter embedded in the audio track. Our comparative study involves a linear-chain Conditional Random Field over a Recurrent Neural Network and a Convolutional Neural Network. Using a combination of word-level and audio frame-level features, the CNN outperforms the other methods, obtaining the best F-score of 68.5% over 66.5% by CRF and 52.9% by RNN. Our work is a starting point to developing more effective machine learning and neural network models on the humor prediction task, as well as developing machines capable in understanding humor in general.", "title": "" }, { "docid": "a9a9e3a2707d677c256695e71b42d086", "text": "Image warping is a transformation which maps all positions in one image plane to positions in a second plane. It arises in many image analysis problems, whether in order to remove optical distortions introduced by a camera or a particular viewing perspective, to register an image with a map or template, or to align two or more images. The choice of warp is a compromise between a smooth distortion and one which achieves a good match. Smoothness can be ensured by assuming a parametric form for the warp or by constraining it using di erential equations. 
Matching can be speci ed by points to be brought into alignment, by local measures of correlation between images, or by the coincidence of edges. Parametric and nonparametric approaches to warping, and matching criteria, are reviewed.", "title": "" }, { "docid": "7492f19ad89bab0fb9bc0c935e75543e", "text": "One of the most important questions in biological science is how a protein functions. When a protein performs its function, it undergoes regulated structural transitions. In this regard, to better understand the underlying principle of a protein function, it is desirable to monitor the dynamic evolution of the protein structure in real time. To probe fast and subtle motions of a protein in physiological conditions demands an experimental tool that is not only equipped with superb spatiotemporal resolution but also applicable to samples in solution phase. Time-resolved X-ray solution scattering (TRXSS), discussed in this Account, fits all of those requirements needed for probing the movements of proteins in aqueous solution. The technique utilizes a pump-probe scheme employing an optical pump pulse to initiate photoreactions of proteins and an X-ray probe pulse to monitor ensuing structural changes. The technical advances in ultrafast lasers and X-ray sources allow us to achieve superb temporal resolution down to femtoseconds. Because X-rays scatter off all atomic pairs in a protein, an X-ray scattering pattern provides information on the global structure of the protein with subangstrom spatial resolution. Importantly, TRXSS is readily applicable to aqueous solution samples of proteins with the aid of theoretical models and therefore is well suited for investigating structural dynamics of protein transitions in physiological conditions. In this Account, we demonstrate that TRXSS can be used to probe real-time structural dynamics of proteins in solution ranging from subtle helix movement to global conformational change. Specifically, we discuss the photoreactions of photoactive yellow protein (PYP) and homodimeric hemoglobin (HbI). For PYP, we revealed the kinetics of structural transitions among four transient intermediates comprising a photocycle and, by applying structural analysis based on ab initio shape reconstruction, showed that the signaling of PYP involves the protrusion of the N-terminus with significant increase of the overall protein size. For HbI, we elucidated the dynamics of complex allosteric transitions among transient intermediates. In particular, by applying structural refinement analysis based on rigid-body modeling, we found that the allosteric transition of HbI accompanies the rotation of quaternary structure and the contraction between two heme domains. By making use of the experimental and analysis methods presented in this Account, we envision that the TRXSS can be used to probe the structural dynamics of various proteins, allowing us to decipher the working mechanisms of their functions. Furthermore, when combined with femtosecond X-ray pulses generated from X-ray free electron lasers, TRXSS will gain access to ultrafast protein dynamics on sub-picosecond time scales.", "title": "" }, { "docid": "e64ca3fbdb3acd1ffe0fff9557ce8541", "text": "With the explosive growth of video data, content-based video analysis and management technologies such as indexing, browsing and retrieval have drawn much attention. Video shot boundary detection (SBD) is usually the first and important step for those technologies. 
Great efforts have been made to improve the accuracy of SBD algorithms. However, most works are based on signal rather than interpretable features of frames. In this paper, we propose a novel video shot boundary detection framework based on interpretable TAGs learned by Convolutional Neural Networks (CNNs). Firstly, we adopt a candidate segment selection to predict the positions of shot boundaries and discard most non-boundary frames. This preprocessing method can help to improve both accuracy and speed of the SBD algorithm. Then, cut transition and gradual transition detections which are based on the interpretable TAGs are conducted to identify the shot boundaries in the candidate segments. Afterwards, we synthesize the features of frames in a shot and get semantic labels for the shot. Experiments on TRECVID 2001 test data show that the proposed scheme can achieve a better performance compared with the state-of-the-art schemes. Besides, the semantic labels obtained by the framework can be used to depict the content of a shot.", "title": "" }, { "docid": "bffddca72c7e9d6e5a8c760758a98de0", "text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results, our findings are not contridictory with existing work.", "title": "" }, { "docid": "1e8e4364427d18406594af9ad3a73a28", "text": "The Internet Addiction Scale (IAS) is a self-report instrument based on the 7 Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) substance dependence criteria and 2 additional criteria recommended by Griffiths (1998). The IAS was administered to 233 undergraduates along with 4 measures pertaining to loneliness and boredom proneness. An item reliability analysis reduced the initial scale from 36 to 31 items (with a Cronbach's alpha of .95). A principal-components analysis indicated that the IAS consisted mainly of one factor. Multiple regression analyses revealed that Family and Social Loneliness and Boredom Proneness were significantly correlated with the IAS; Family and Social Loneliness uniquely predicted IAS scores. No evidence for widespread Internet addiction was found.", "title": "" }, { "docid": "93bca110f5551d8e62dc09328de83d4f", "text": "It is well established that emotion plays a key role in human social and economic decision making. The recent literature on emotion regulation (ER), however, highlights that humans typically make efforts to control emotion experiences. This leaves open the possibility that decision effects previously attributed to acute emotion may be a consequence of acute ER strategies such as cognitive reappraisal and expressive suppression. In Study 1, we manipulated ER of laboratory-induced fear and disgust, and found that the cognitive reappraisal of these negative emotions promotes risky decisions (reduces risk aversion) in the Balloon Analogue Risk Task and is associated with increased performance in the prehunch/hunch period of the Iowa Gambling Task. In Study 2, we found that naturally occurring negative emotions also increase risk aversion in Balloon Analogue Risk Task, but the incidental use of cognitive reappraisal of emotions impedes this effect. 
We offer evidence that the increased effectiveness of cognitive reappraisal in reducing the experience of emotions underlies its beneficial effects on decision making.", "title": "" }, { "docid": "c47fde74be75b5e909d7657bb64bf23d", "text": "As the primary stakeholder for the Enterprise Architecture, the Chief Information Officer (CIO) is responsible for the evolution of the enterprise IT system. An important part of the CIO role is therefore to make decisions about strategic and complex IT matters. This paper presents a cost effective and scenariobased approach for providing the CIO with an accurate basis for decision making. Scenarios are analyzed and compared against each other by using a number of problem-specific easily measured system properties identified in literature. In order to test the usefulness of the approach, a case study has been carried out. A CIO needed guidance on how to assign functionality and data within four overlapping systems. The results are quantifiable and can be presented graphically, thus providing a cost-efficient and easily understood basis for decision making. The study shows that the scenario-based approach can make complex Enterprise Architecture decisions understandable for CIOs and other business-orientated stakeholders", "title": "" }, { "docid": "b56a6ce08cf00fefa1a1b303ebf21de9", "text": "Freesound is an online collaborative sound database where people with diverse interests share recorded sound samples under Creative Commons licenses. It was started in 2005 and it is being maintained to support diverse research projects and as a service to the overall research and artistic community. In this demo we want to introduce Freesound to the multimedia community and show its potential as a research resource. We begin by describing some general aspects of Freesound, its architecture and functionalities, and then explain potential usages that this framework has for research applications.", "title": "" }, { "docid": "da0d17860604269378c8649e7353ba83", "text": "Responsive, implantable stimulation devices to treat epilepsy are now in clinical trials. New evidence suggests that these devices may be more effective when they deliver therapy before seizure onset. Despite years of effort, prospective seizure prediction, which could improve device performance, remains elusive. In large part, this is explained by lack of agreement on a statistical framework for modeling seizure generation and a method for validating algorithm performance. We present a novel stochastic framework based on a three-state hidden Markov model (HMM) (representing interictal, preictal, and seizure states) with the feature that periods of increased seizure probability can transition back to the interictal state. This notion reflects clinical experience and may enhance interpretation of published seizure prediction studies. Our model accommodates clipped EEG segments and formalizes intuitive notions regarding statistical validation. We derive equations for type I and type II errors as a function of the number of seizures, duration of interictal data, and prediction horizon length and we demonstrate the model's utility with a novel seizure detection algorithm that appeared to predicted seizure onset. 
We propose this framework as a vital tool for designing and validating prediction algorithms and for facilitating collaborative research in this area.", "title": "" }, { "docid": "7e58396148d8e8c8ca7d3439c6b5c872", "text": "The traditional inductor-based buck converter has been the dominant design for step-down switched-mode voltage regulators for decades. Switched-capacitor (SC) DC-DC converters, on the other hand, have traditionally been used in low-power (<10mW) and low-conversion-ratio (<4:1) applications where neither regulation nor efficiency is critical. However, a number of SC converter topologies are very effective in their utilization of switches and passive elements, especially in relation to the ever-popular buck converters [1,2,5]. This work encompasses the complete design, fabrication, and test of a CMOS-based switched-capacitor DC-DC converter, addressing the ubiquitous 12 to 1.5V board-mounted point-of-load application. In particular, the circuit developed in this work attains higher efficiency (92% peak, and >80% over a load range of 5mA to 1A) than surveyed competitive buck converters, while requiring less board area and less costly passive components. The topology and controller enable a wide input voltage (VIN) range of 7.5 to 13.5V with an output voltage (VOUT) of 1.5V. Control techniques based on feedback and feedforward provide tight regulation (30mVpp) under worst-case load-step (1A) conditions. This work shows that SC converters can outperform buck converters, and thus the scope of SC converter applications can and should be expanded.", "title": "" }, { "docid": "11ce5da16cf0c0c6cfb85e0d0bbdc13e", "text": "Recently, fully-connected and convolutional neural networks have been trained to reach state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics data. For classification tasks, much of these “deep learning” models employ the softmax activation functions to learn output labels in 1-of-K format. In this paper, we demonstrate a small but consistent advantage of replacing softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross-entropy loss. In almost all of the previous works, hidden representation of deep networks are first learned using supervised or unsupervised techniques, and then are fed into SVMs as inputs. In contrast to those models, we are proposing to train all layers of the deep networks by backpropagating gradients through the top level SVM, learning features of all layers. Our experiments show that simply replacing softmax with linear SVMs gives significant gains on datasets MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop’s face expression recognition challenge.", "title": "" }, { "docid": "25346cdef3e97173dab5b5499c4d4567", "text": "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.", "title": "" }, { "docid": "2a58426989cbfab0be9e18b7ee272b0a", "text": "Potholes are a nuisance, especially in the developing world, and can often result in vehicle damage or physical harm to the vehicle occupants. Drivers can be warned to take evasive action if potholes are detected in real-time.
Moreover, their location can be logged and shared to aid other drivers and road maintenance agencies. This paper proposes a vehicle-based computer vision approach to identify potholes using a window-mounted camera. Existing literature on pothole detection uses either theoretically constructed pothole models or footage taken from advantageous vantage points at low speed, rather than footage taken from within a vehicle at speed. A distinguishing feature of the work presented in this paper is that a thorough exercise was performed to create an image library of actual and representative potholes under different conditions, and results are obtained using a part of this library. A model of potholes is constructed using the image library, which is used in an algorithmic approach that combines a road colour model with simple image processing techniques such as a Canny filter and contour detection. Using this approach, it was possible to detect potholes with a precision of 81.8% and recall of 74.4.%.", "title": "" }, { "docid": "f120d34996b155a413247add6adc6628", "text": "The storage and computation requirements of Convolutional Neural Networks (CNNs) can be prohibitive for exploiting these models over low-power or embedded devices. This paper reduces the computational complexity of the CNNs by minimizing an objective function, including the recognition loss that is augmented with a sparsity-promoting penalty term. The sparsity structure of the network is identified using the Alternating Direction Method of Multipliers (ADMM), which is widely used in large optimization problems. This method alternates between promoting the sparsity of the network and optimizing the recognition performance, which allows us to exploit the two-part structure of the corresponding objective functions. In particular, we take advantage of the separability of the sparsity-inducing penalty functions to decompose the minimization problem into sub-problems that can be solved sequentially. Applying our method to a variety of state-of-the-art CNN models, our proposed method is able to simplify the original model, generating models with less computation and fewer parameters, while maintaining and often improving generalization performance. Accomplishments on a variety of models strongly verify that our proposed ADMM-based method can be a very useful tool for simplifying and improving deep CNNs.", "title": "" } ]
scidocsrr
d75cf922e9d16103f54658fa33352c86
Distributed Data Streams
[ { "docid": "872f556cb441d9c8976e2bf03ebd62ee", "text": "Monitoring is an issue of primary concern in current and next generation networked systems. For ex, the objective of sensor networks is to monitor their surroundings for a variety of different applications like atmospheric conditions, wildlife behavior, and troop movements among others. Similarly, monitoring in data networks is critical not only for accounting and management, but also for detecting anomalies and attacks. Such monitoring applications are inherently continuous and distributed, and must be designed to minimize the communication overhead that they introduce. In this context we introduce and study a fundamental class of problems called \"thresholded counts\" where we must return the aggregate frequency count of an event that is continuously monitored by distributed nodes with a user-specified accuracy whenever the actual count exceeds a given threshold value.In this paper we propose to address the problem of thresholded counts by setting local thresholds at each monitoring node and initiating communication only when the locally observed data exceeds these local thresholds. We explore algorithms in two categories: static and adaptive thresholds. In the static case, we consider thresholds based on a linear combination of two alternate strategies, and show that there exists an optimal blend of the two strategies that results in minimum communication overhead. We further show that this optimal blend can be found using a steepest descent search. In the adaptive case, we propose algorithms that adjust the local thresholds based on the observed distributions of updated information. We use extensive simulations not only to verify the accuracy of our algorithms and validate our theoretical results, but also to evaluate the performance of our algorithms. We find that both approaches yield significant savings over the naive approach of centralized processing.", "title": "" }, { "docid": "7bdc7740124adab60c726710a003eb87", "text": "We have developed Gigascope, a stream database for network applications including traffic analysis, intrusion detection, router configuration analysis, network research, network monitoring, and performance monitoring and debugging. Gigascope is undergoing installation at many sites within the AT&T network, including at OC48 routers, for detailed monitoring. In this paper we describe our motivation for and constraints in developing Gigascope, the Gigascope architecture and query language, and performance issues. We conclude with a discussion of stream database research problems we have found in our application.", "title": "" } ]
[ { "docid": "5f5960cf7621f95687cbbac48dfdb0c5", "text": "We present the first controller that allows our small hexapod robot, RHex, to descend a wide variety of regular sized, “real-world” stairs. After selecting one of two sets of trajectories, depending on the slope of the stairs, our open-loop, clock-driven controllers require no further operator input nor task level feedback. Energetics for stair descent is captured via specific resistance values and compared to stair ascent and other behaviors. Even though the algorithms developed and validated in this paper were developed for a particular robot, the basic motion strategies, and the phase relationships between the contralateral leg pairs are likely applicable to other hexapod robots of similar size as well.", "title": "" }, { "docid": "476c1e503065f3d1638f6f2302dc6bbb", "text": "The increasing popularity and ubiquity of various large graph datasets has caused renewed interest for graph partitioning. Existing graph partitioners either scale poorly against large graphs or disregard the impact of the underlying hardware topology. A few solutions have shown that the nonuniform network communication costs may affect the performance greatly. However, none of them considers the impact of resource contention on the memory subsystems (e.g., LLC and Memory Controller) of modern multicore clusters. They all neglect the fact that the bandwidth of modern high-speed networks (e.g., Infiniband) has become comparable to that of the memory subsystems. In this paper, we provide an in-depth analysis, both theoretically and experimentally, on the contention issue for distributed workloads. We found that the slowdown caused by the contention can be as high as 11x. We then design an architecture-aware graph partitioner, Argo, to allow the full use of all cores of multicore machines without suffering from either the contention or the communication heterogeneity issue. Our experimental study showed (1) the effectiveness of Argo, achieving up to 12x speedups on three classic workloads: Breadth First Search, Single Source Shortest Path, and PageRank; and (2) the scalability of Argo in terms of both graph size and the number of partitions on two billion-edge real-world graphs.", "title": "" }, { "docid": "0186c053103d06a8ddd054c3c05c021b", "text": "The brain-gut axis is a bidirectional communication system between the central nervous system and the gastrointestinal tract. Serotonin functions as a key neurotransmitter at both terminals of this network. Accumulating evidence points to a critical role for the gut microbiome in regulating normal functioning of this axis. In particular, it is becoming clear that the microbial influence on tryptophan metabolism and the serotonergic system may be an important node in such regulation. There is also substantial overlap between behaviours influenced by the gut microbiota and those which rely on intact serotonergic neurotransmission. The developing serotonergic system may be vulnerable to differential microbial colonisation patterns prior to the emergence of a stable adult-like gut microbiota. At the other extreme of life, the decreased diversity and stability of the gut microbiota may dictate serotonin-related health problems in the elderly. 
The mechanisms underpinning this crosstalk require further elaboration but may be related to the ability of the gut microbiota to control host tryptophan metabolism along the kynurenine pathway, thereby simultaneously reducing the fraction available for serotonin synthesis and increasing the production of neuroactive metabolites. The enzymes of this pathway are immune and stress-responsive, both systems which buttress the brain-gut axis. In addition, there are neural processes in the gastrointestinal tract which can be influenced by local alterations in serotonin concentrations with subsequent relay of signals along the scaffolding of the brain-gut axis to influence CNS neurotransmission. Therapeutic targeting of the gut microbiota might be a viable treatment strategy for serotonin-related brain-gut axis disorders.", "title": "" }, { "docid": "6087e066b04b9c3ac874f3c58979f89a", "text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.", "title": "" }, { "docid": "8a9603a10e5e02f6edfbd965ee11bbb9", "text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.", "title": "" }, { "docid": "029cca0b7e62f9b52e3d35422c11cea4", "text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. 
Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.", "title": "" }, { "docid": "6fb48ddc2f14cdb9371aad67e9c8abe0", "text": "Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Previous approaches are not high-throughput, are not generalizable or scalable, or lack sufficient data to be effective. We describe single mechanistic reactions as concerted electron movements from an electron orbital source to an electron orbital sink. We use an existing rule-based expert system to derive a dataset consisting of 2,989 productive mechanistic steps and 6.14 million non-productive mechanistic steps. We then pose identifying productive mechanistic steps as a ranking problem: rank potential orbital interactions such that the top ranked interactions yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.0% of non-productive reactions with less than a 0.1% false negative rate. Then, we train an ensemble of ranking models on pairs of interacting orbitals to learn a relative productivity function over single mechanistic reactions in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanisms at the top 89.1% of the time, rising to 99.9% of the time when top ranked lists with at most four non-productive reactions are considered. The final system allows multi-step reaction prediction. Furthermore, it is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert system does not handle.", "title": "" }, { "docid": "5ddcfb5404ceaffd6957fc53b4b2c0d8", "text": "A router's main function is to allow communication between different networks as quickly as possible and in efficient manner. The communication can be between LAN or between LAN and WAN. A firewall's function is to restrict unwanted traffic. In big networks, routers and firewall tasks are performed by different network devices. But in small networks, we want both functions on same device i.e. one single device performing both routing and firewalling. We call these devices as routing firewall. In Traditional networks, the devices are already available. But the next generation networks will be powered by Software Defined Networks. For wide adoption of SDN, we need northbound SDN applications such as routers, load balancers, firewalls, proxy servers, Deep packet inspection devices, routing firewalls running on OpenFlow based physical and virtual switches. But the SDN is still in early stage, so still there is very less availability of these applications.
There already exist simple L3 Learning application which provides very elementary router function and also simple stateful firewalls providing basic access control. In this paper, we are implementing one SDN Routing Firewall Application which will perform both the routing and firewall function.", "title": "" }, { "docid": "6b1adc1da6c75f6cc0cb17820add8ef1", "text": "Many different classification tasks need to manage structured data, which are usually modeled as graphs. Moreover, these graphs can be dynamic, meaning that the vertices/edges of each graph may change during time. Our goal is to jointly exploit structured data and temporal information through the use of a neural network model. To the best of our knowledge, this task has not been addressed using these kind of architectures. For this reason, we propose two novel approaches, which combine Long Short-Term Memory networks and Graph Convolutional Networks to learn long short-term dependencies together with graph structure. The quality of our methods is confirmed by the promising results achieved.", "title": "" }, { "docid": "e0f0ccb0e1c2f006c5932f6b373fb081", "text": "This paper proposes a methodology to be used in the segmentation of infrared thermography images for the detection of bearing faults in induction motors. The proposed methodology can be a helpful tool for preventive and predictive maintenance of the induction motor. This methodology is based on manual threshold image processing to obtain a segmentation of an infrared thermal image, which is used for the detection of critical points known as hot spots on the system under test. From these hot spots, the parameters of interest that describe the thermal behavior of the induction motor were obtained. With the segmented image, it is possible to compare and analyze the thermal conditions of the system.", "title": "" }, { "docid": "e95541d0401a196b03b94dd51dd63a4b", "text": "In the information age, computer applications have become part of modern life and this has in turn encouraged the expectations of friendly interaction with them. Speech, as “the” communication mode, has seen the successful development of quite a number of applications using automatic speech recognition (ASR), including command and control, dictation, dialog systems for people with impairments, translation, etc. But the actual challenge goes beyond the use of speech in control applications or to access information. The goal is to use speech as an information source, competing, for example, with text online. Since the technology supporting computer applications is highly dependent on the performance of the ASR system, research into ASR is still an active topic, as is shown by the range of research directions suggested in (Baker et al., 2009a, 2009b). Automatic speech recognition – the recognition of the information embedded in a speech signal and its transcription in terms of a set of characters, (Junqua & Haton, 1996) – has been object of intensive research for more than four decades, achieving notable results. It is only to be expected that speech recognition advances make spoken language as convenient and accessible as online text when the recognizers reach error rates near zero. But while digit recognition has already reached a rate of 99.6%, (Li, 2008), the same cannot be said of phone recognition, for which the best rates are still under 80% 1,(Mohamed et al., 2011; Siniscalchi et al., 2007). Speech recognition based on phones is very attractive since it is inherently free from vocabulary limitations. 
Large Vocabulary ASR (LVASR) systems’ performance depends on the quality of the phone recognizer. That is why research teams continue developing phone recognizers, in order to enhance their performance as much as possible. Phone recognition is, in fact, a recurrent problem for the speech recognition community. Phone recognition can be found in a wide range of applications. In addition to typical LVASR systems like (Morris & Fosler-Lussier, 2008; Scanlon et al., 2007; Schwarz, 2008), it can be found in applications related to keyword detection, (Schwarz, 2008), language recognition, (Matejka, 2009; Schwarz, 2008), speaker identification, (Furui, 2005) and applications for music identification and translation, (Fujihara & Goto, 2008; Gruhne et al., 2007). The challenge of building robust acoustic models involves applying good training algorithms to a suitable set of data. The database defines the units that can be trained and", "title": "" }, { "docid": "e59b203f3b104553a84603240ea467eb", "text": "Experimental art deployed in the Augmented Reality (AR) medium is contributing to a reconfiguration of traditional perceptions of interface, audience participation, and perceptual experience. Artists, critical engineers, and programmers, have developed AR in an experimental topology that diverges from both industrial and commercial uses of the medium. In a general technical sense, AR is considered as primarily an information overlay, a datafied window that situates virtual information in the physical world. In contradistinction, AR as experimental art practice activates critical inquiry, collective participation, and multimodal perception. As an emergent hybrid form that challenges and extends already established 'fine art' categories, augmented reality art deployed on Portable Media Devices (PMD’s) such as tablets & smartphones fundamentally eschews models found in the conventional 'art world.' It should not, however, be considered as inscribing a new 'model:' rather, this paper posits that the unique hybrids advanced by mobile augmented reality art–– also known as AR(t)–– are closely related to the notion of the 'machinic assemblage' ( Deleuze & Guattari 1987), where a deep capacity to re-assemble marks each new artevent. This paper develops a new formulation, the 'software assemblage,’ to explore some of the unique mixed reality situations that AR(t) has set in motion.", "title": "" }, { "docid": "b71197073ea33bb8c61973e8cd7d2775", "text": "This paper discusses the latest developments in the optimization and fabrication of 3.3kV SiC vertical DMOSFETs. The devices show superior on-state and switching losses compared to the even the latest generation of 3.3kV fast Si IGBTs and promise to extend the upper switching frequency of high-voltage power conversion systems beyond several tens of kHz without the need to increase part count with 3-level converter stacks of faster 1.7kV IGBTs.", "title": "" }, { "docid": "2a1d77e0c5fe71c3c5eab995828ef113", "text": "Local modular control (LMC) is an approach to the supervisory control theory (SCT) of discrete-event systems that exploits the modularity of plant and specifications. Recently, distinguishers and approximations have been associated with SCT to simplify modeling and reduce synthesis effort. This paper shows how advantages from LMC, distinguishers, and approximations can be combined. 
Sufficient conditions are presented to guarantee that local supervisors computed by our approach lead to the same global closed-loop behavior as the solution obtained with the original LMC, in which the modeling is entirely handled without distinguishers. A further contribution presents a modular way to design distinguishers and a straightforward way to construct approximations to be used in local synthesis. An example of manufacturing system illustrates our approach. Note to Practitioners—Distinguishers and approximations are alternatives to simplify modeling and reduce synthesis cost in SCT, grounded on the idea of event-refinements. However, this approach may entangle the modular structure of a plant, so that LMC does not keep the same efficiency. This paper shows how distinguishers and approximations can be locally combined such that synthesis cost is reduced and LMC advantages are preserved.", "title": "" }, { "docid": "9b0114697dc6c260610d0badc1d7a2a4", "text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stems from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.", "title": "" }, { "docid": "7bfbcf62f9ff94e80913c73e069ace26", "text": "This paper presents an online highly accurate system for automatic number plate recognition (ANPR) that can be used as a basis for many real-world ITS applications. The system is designed to deal with unclear vehicle plates, variations in weather and lighting conditions, different traffic situations, and high-speed vehicles. This paper addresses various issues by presenting proper hardware platforms along with real-time, robust, and innovative algorithms. We have collected huge and highly inclusive data sets of Persian license plates for evaluations, comparisons, and improvement of various involved algorithms. The data sets include images that were captured from crossroads, streets, and highways, in day and night, various weather conditions, and different plate clarities. Over these data sets, our system achieves 98.7%, 99.2%, and 97.6% accuracies for plate detection, character segmentation, and plate recognition, respectively. 
The false alarm rate in plate detection is less than 0.5%. The overall accuracy on the dirty plates portion of our data sets is 91.4%. Our ANPR system has been installed in several locations and has been tested extensively for more than a year. The proposed algorithms for each part of the system are highly robust to lighting changes, size variations, plate clarity, and plate skewness. The system is also independent of the number of plates in captured images. This system has been also tested on three other Iranian data sets and has achieved 100% accuracy in both detection and recognition parts. To show that our ANPR is not language dependent, we have tested our system on available English plates data set and achieved 97% overall accuracy.", "title": "" }, { "docid": "90d9360a3e769311a8d7611d8c8845d9", "text": "We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.", "title": "" }, { "docid": "6ddb475ef1529ab496ab9f40dc51cb99", "text": "While inexpensive depth sensors are becoming increasingly ubiquitous, field of view and self-occlusion constraints limit the information a single sensor can provide. For many applications one may instead require a network of depth sensors, registered to a common world frame and synchronized in time. Historically such a setup has required a tedious manual calibration procedure, making it infeasible to deploy these networks in the wild, where spatial and temporal drift are common. In this work, we propose an entirely unsupervised procedure for calibrating the relative pose and time offsets of a pair of depth sensors. So doing, we make no use of an explicit calibration target, or any intentional activity on the part of a user. Rather, we use the unstructured motion of objects in the scene to find potential correspondences between the sensor pair. This yields a rough transform which is then refined with an occlusion-aware energy minimization. We compare our results against the standard checkerboard technique, and provide qualitative examples for scenes in which such a technique would be impossible.", "title": "" }, { "docid": "9d5c258e4a2d315d3e462ab333f3a6df", "text": "The modern smart phone and car concepts provide a fertile ground for new location-aware applications, ranging from traffic management to social services. While the functionality is partly implemented at the mobile terminal, there is a rising need for efficient backend processing of high-volume, high update rate location streams. It is in this environment that geofencing, the detection of objects traversing virtual fences, is becoming a universal primitive required by an ever-growing number of applications. 
To satisfy the functionality and performance requirements of large-scale geofencing applications, we present in this work a backend system for indexing massive quantities of mobile objects and geofences. Our system runs on a cluster of servers, achieving a throughput of location updates that scales linearly with number of machines. The key ingredients to achieve a high performance are a specialized spatial index, a dynamic caching mechanism, and a load-sharing principle that reduces communication overhead to a minimum and enables a shared-nothing architecture. The throughput of the spatial index as well as the performance of the overall system are demonstrated by experiments using simulations of large-scale geofencing applications.", "title": "" } ]
scidocsrr
9bb02d8f26d1a73a2e11ef6a8c6fe2b9
A CPPS Architecture approach for Industry 4.0
[ { "docid": "13c0f622205a67e2d026e9eb097df0e3", "text": "This paper presents an approach to how existing production systems that are not Industry 4.0-ready can be expanded to participate in an Industry 4.0 factory. Within this paper, a concept is presented how production systems can be discovered and included into an Industry 4.0 (I4.0) environment, even though they did not have I4.0interfaces when they have been manufactured. The concept is based on a communication gateway and an information server. Besides the concept itself, this paper presents a validation that demonstrates applicability of the developed concept.", "title": "" } ]
[ { "docid": "45a92ab90fabd875a50229921e99dfac", "text": "This paper describes an empirical study of the problems encountered by 32 blind users on the Web. Task-based user evaluations were undertaken on 16 websites, yielding 1383 instances of user problems. The results showed that only 50.4% of the problems encountered by users were covered by Success Criteria in the Web Content Accessibility Guidelines 2.0 (WCAG 2.0). For user problems that were covered by WCAG 2.0, 16.7% of websites implemented techniques recommended in WCAG 2.0 but the techniques did not solve the problems. These results show that few developers are implementing the current version of WCAG, and even when the guidelines are implemented on websites there is little indication that people with disabilities will encounter fewer problems. The paper closes by discussing the implications of this study for future research and practice. In particular, it discusses the need to move away from a problem-based approach towards a design principle approach for web accessibility.", "title": "" }, { "docid": "59db435e906db2c198afdc5cc7c7de2c", "text": "Although the recent advances in the sparse representations of images have achieved outstanding denosing results, removing real, structured noise in digital videos remains a challenging problem. We show the utility of reliable motion estimation to establish temporal correspondence across frames in order to achieve high-quality video denoising. In this paper, we propose an adaptive video denosing framework that integrates robust optical flow into a non-local means (NLM) framework with noise level estimation. The spatial regularization in optical flow is the key to ensure temporal coherence in removing structured noise. Furthermore, we introduce approximate K-nearest neighbor matching to significantly reduce the complexity of classical NLM methods. Experimental results show that our system is comparable with the state of the art in removing AWGN, and significantly outperforms the state of the art in removing real, structured noise.", "title": "" }, { "docid": "1b29aa20e82dba0992634d3a178ad0c5", "text": "This paper presents the approach developed for the partial MASPS level document DO-344 “Operational and Functional Requirements and Safety Objectives” for the UAS standards. Previous RTCA1 work led to the production of an Operational Services Environment Description document, from which operational requirements were extracted and refined. Following the principles described in the Department of Defense Architecture Framework, the overall UAS architecture and major interfaces were defined. Interacting elements included the unmanned aircraft (airborne component), the ground control station (ground component), the Air Traffic Control (ATC), the Air Traffic Service besides ATC, other traffic in the NAS, and the UAS ground support. Furthering the level of details, a functional decomposition was produced prior to the allocation onto the UAS architecture. These functions cover domains including communication, control, navigation, surveillance, and health monitoring. The communication function addressed all elements in the UAS connected with external interfaces: the airborne component, the ground component, the ATC, the other traffic and the ground support. The control function addressed the interface between the ground control station and the unmanned aircraft for the purpose of flying in the NAS. 
The navigation function covered the capability to determine and fly a trajectory using conventional and satellite based navigation means. The surveillance function addressed the capability to detect and avoid collisions with hazards, including other traffic, terrain and obstacles, and weather. Finally, the health monitoring function addressed the capability to oversee UAS systems, probe for their status and feedback issues related to degradation or loss of performance. An additional function denoted `manage' was added to the functional decomposition to complement the heath monitoring coverage and included manual modes for the operation of the UAS.", "title": "" }, { "docid": "f8c6906f4d0deb812e42aaaff457a6d9", "text": "By the early 1900s, Euro-Americans had extirpated gray wolves (Canis lupus) from most of the contiguous United States. Yellowstone National Park was not immune to wolf persecution and by the mid-1920s they were gone. After seven decades of absence in the park, gray wolves were reintroduced in 1995–1996, again completing the large predator guild (Smith et al. 2003). Yellowstone’s ‘‘experiment in time’’ thus provides a rare opportunity for studying potential cascading effects associated with the extirpation and subsequent reintroduction of an apex predator. Wolves represent a particularly important predator of large mammalian prey in northern hemisphere ecosystems by virtue of their group hunting and year-round activity (Peterson et al. 2003) and can have broad top-down effects on the structure and functioning of these systems (Miller et al. 2001, Soulé et al. 2003, Ray et al. 2005). If a tri-trophic cascade involving wolves–elk (Cervus elaphus)–plants is again underway in northern Yellowstone, theory would suggest two primary mechanisms: (1) density mediation through prey mortality and (2) trait mediation involving changes in prey vigilance, habitat use, and other behaviors (Brown et al. 1999, Berger 2010). Both predator-caused reductions in prey numbers and fear responses they elicit in prey can lead to cascading trophic-level effects across a wide range of biomes (Beschta and Ripple 2009, Laundré et al. 2010, Terborgh and Estes 2010). Thus, the occurrence of a trophic cascade could have important implications not only to the future structure and functioning of northern Yellowstone’s ecosystems but also for other portions of the western United States where wolves have been reintroduced, are expanding their range, or remain absent. However, attempting to identify the occurrence of a trophic cascade in systems with large mammalian predators, as well as the relative importance of density and behavioral mediation, represents a continuing scientific challenge. In Yellowstone today, there is an ongoing effort by various researchers to evaluate ecosystem processes in the park’s two northern ungulate winter ranges: (1) the ‘‘Northern Range’’ along the northern edge of the park (NRC 2002, Barmore 2003) and (2) the ‘‘Upper Gallatin Winter Range’’ along the northwestern corner of the park (Ripple and Beschta 2004b). Previous studies in northern Yellowstone have generally found that elk, in the absence of wolves, caused a decrease in aspen (Populus tremuloides) recruitment (i.e., the growth of seedlings or root sprouts above the browse level of elk). Within this context, Kauffman et al. 
(2010) initiated a study to provide additional understanding of factors such as elk density, elk behavior, and climate upon historical and contemporary patterns of aspen recruitment in the park’s Northern Range. Like previous studies, Kauffman et al. (2010) concluded that, irrespective of historical climatic conditions, elk have had a major impact on long-term aspen communities after the extirpation of wolves. But, unlike other studies that have seen improvement in the growth or recruitment of young aspen and other browse species in recent years, Kauffman et al. (2010) concluded in their Abstract: ‘‘. . . our estimates of relative survivorship of young browsable aspen indicate that aspen are not currently recovering in Yellowstone, even in the presence of a large wolf population.’’ In the interest of clarifying the potential role of wolves on woody plant community dynamics in Yellowstone’s northern winter ranges, we offer several counterpoints to the conclusions of Kauffman et al. (2010). We do so by readdressing several tasks identified in their Introduction (p. 2744): (1) the history of aspen recruitment failure, (2) contemporary aspen recruitment, and (3) aspen recruitment and predation risk. Task 1 covers the period when wolves were absent from Yellowstone and tasks 2 and 3 focus on the period when wolves were again present. We also include some closing comments regarding trophic cascades and ecosystem recovery. 1. History of aspen recruitment failure.—Although records of wolf and elk populations in northern Yellowstone are fragmentary for the early 1900s, the Northern Range elk population averaged ;10 900 animals (7.3 elk/km; Fig. 1A) as the last wolves were being removed in the mid 1920s. Soon thereafter increased browsing by elk of aspen and other woody species was noted in northern Yellowstone’s winter ranges (e.g., Rush 1932, Lovaas 1970). In an attempt to reduce the effects this large herbivore was having on vegetation, soils, and wildlife habitat in the Northern Range", "title": "" }, { "docid": "2d822e022363b371f62a803d79029f09", "text": "AIM\nTo explore the relationship between sources of stress and psychological burn-out and to consider the moderating and mediating role played sources of stress and different coping resources on burn-out.\n\n\nBACKGROUND\nMost research exploring sources of stress and coping in nursing students construes stress as psychological distress. Little research has considered those sources of stress likely to enhance well-being and, by implication, learning.\n\n\nMETHOD\nA questionnaire was administered to 171 final year nursing students. Questions were asked which measured sources of stress when rated as likely to contribute to distress (a hassle) and rated as likely to help one achieve (an uplift). Support, control, self-efficacy and coping style were also measured, along with their potential moderating and mediating effect on burn-out.\n\n\nFINDINGS\nThe sources of stress likely to lead to distress were more often predictors of well-being than sources of stress likely to lead to positive, eustress states. However, placement experience was an important source of stress likely to lead to eustress. Self-efficacy, dispositional control and support were other important predictors.
Avoidance coping was the strongest predictor of burn-out and, even if used only occasionally, it can have an adverse effect on burn-out. Initiatives to promote support and self-efficacy are likely to have the more immediate benefits in enhancing student well-being.\n\n\nCONCLUSION\nNurse educators need to consider how course experiences contribute not just to potential distress but to eustress. How educators interact with their students and how they give feedback offers important opportunities to promote self-efficacy and provide valuable support. Peer support is a critical coping resource and can be bolstered through induction and through learning and teaching initiatives.", "title": "" }, { "docid": "14b7c4f8a3fa7089247f1d4a26186c5d", "text": "System Dynamics is often used for dealing with dynamically complex issues that are also uncertain. This paper reviews how uncertainty is dealt with in System Dynamics modeling, where uncertainties are located in models, which types of uncertainties are dealt with, and which levels of uncertainty could be handled. Shortcomings of System Dynamics and its practice in dealing with uncertainty are distilled from this review and reframed as opportunities. Potential opportunities for dealing with uncertainty in System Dynamics that are discussed here include (i) dealing explicitly with difficult sorts of uncertainties, (ii) using multi-model approaches for dealing with alternative assumptions and multiple perspectives, (iii) clearly distinguishing sensitivity analysis from uncertainty analysis and using them for different purposes, (iv) moving beyond invariant model boundaries, (v) using multi-method approaches, advanced techniques and new tools, and (vi) further developing and using System Dynamics strands for dealing with deep uncertainty.", "title": "" }, { "docid": "8582c4a040e4dec8fd141b00eaa45898", "text": "Emerging airborne networks require domainspecific routing protocols to cope with the challenges faced by the highly-dynamic aeronautical environment. We present an ns-3 based performance comparison of the AeroRP protocol with conventional MANET routing protocols. To simulate a highly-dynamic airborne network, accurate mobility models are needed for the physical movement of nodes. The fundamental problem with many synthetic mobility models is their random, memoryless behavior. Airborne ad hoc networks require a flexible memory-based 3-dimensional mobility model. Therefore, we have implemented a 3-dimensional Gauss-Markov mobility model in ns-3 that appears to be more realistic than memoryless models such as random waypoint and random walk. Using this model, we are able to simulate the airborne networking environment with greater realism than was previously possible and show that AeroRP has several advantages over other MANET routing protocols.", "title": "" }, { "docid": "dc2c952b5864a167c19b34be6db52389", "text": "Data mining is popularly used to combat frauds because of its effectiveness. It is a well-defined procedure that takes data as input and produces models or patterns as output. Neural network, a data mining technique was used in this study. The design of the neural network (NN) architecture for the credit card detection system was based on unsupervised method, which was applied to the transactions data to generate four clusters of low, high, risky and high-risk clusters. 
The self-organizing map neural network (SOMNN) technique was used for solving the problem of carrying out optimal classification of each transaction into its associated group, since a prior output is unknown. The receiver-operating curve (ROC) for credit card fraud (CCF) detection watch detected over 95% of fraud cases without causing false alarms unlike other statistical models and the two-stage clusters. This shows that the performance of CCF detection watch is in agreement with other detection software, but performs better.", "title": "" }, { "docid": "aaf1aac789547c1bf2f918368b43c955", "text": "Music is full of structure, including sections, sequences of distinct musical textures, and the repetition of phrases or entire sections. The analysis of music audio relies upon feature vectors that convey information about music texture or pitch content. Texture generally refers to the average spectral shape and statistical fluctuation, often reflecting the set of sounding instruments, e.g. strings, vocal, or drums. Pitch content reflects melody and harmony, which is often independent of texture. Structure is found in several ways. Segment boundaries can be detected by observing marked changes in locally averaged texture. Similar sections of music can be detected by clustering segments with similar average textures. The repetition of a sequence of music often marks a logical segment. Repeated phrases and hierarchical structures can be discovered by finding similar sequences of feature vectors within a piece of music. Structure analysis can be used to construct music summaries and to assist music browsing. Introduction Probably everyone would agree that music has structure, but most of the interesting musical information that we perceive lies hidden below the complex surface of the audio signal. From this signal, human listeners perceive vocal and instrumental lines, orchestration, rhythm, harmony, bass lines, and other features. Unfortunately, music audio signals have resisted our attempts to extract this kind of information. Researchers are making progress, but so far, computers have not come near to human levels of performance in detecting notes, processing rhythms, or identifying instruments in a typical (polyphonic) music audio texture. On a longer time scale, listeners can hear structure including the chorus and verse in songs, sections in other types of music, repetition, and other patterns. One might think that without the reliable detection and identification of short-term features such as notes and their sources, that it would be impossible to deduce any information whatsoever about even higher levels of abstraction. Surprisingly, it is possible to automatically detect a great deal of information concerning music structure. For example, it is possible to label the structure of a song as AABA, meaning that opening material (the “A” part) is repeated once, then contrasting material (the “B” part) is played, and then the opening material is played again at the end. This structural description may be deduced from low-level audio signals. Consequently, a computer might locate the “chorus” of a song without having any representation of the melody or rhythm that characterizes the chorus. Underlying almost all work in this area is the concept that structure is induced by the repetition of similar material. This is in contrast to, say, speech recognition, where there is a common understanding of words, their structure, and their meaning. 
A string of unique words can be understood using prior knowledge of the language. Music, however, has no language or dictionary (although there are certainly known forms and conventions). In general, structure can only arise in music through repetition or systematic transformations of some kind. Repetition implies there is some notion of similarity. Similarity can exist between two points in time (or at least two very short time intervals), similarity can exist between two sequences over longer time intervals, and similarity can exist between the longer-term statistical behaviors of acoustical features. Different approaches to similarity will be described. Similarity can be used to segment music: contiguous regions of similar music can be grouped together into segments. Segments can then be grouped into clusters. The segmentation of a musical work and the grouping of these segments into clusters is a form of analysis or “explanation” of the music. R. Dannenberg and M. Goto Music Structure 16 April 2005 2 Features and Similarity Measures A variety of approaches are used to measure similarity, but it should be clear that a direct comparison of the waveform data or individual samples will not be useful. Large differences in waveforms can be imperceptible, so we need to derive features of waveform data that are more perceptually meaningful and compare these features with an appropriate measure of similarity. Feature Vectors for Spectrum, Texture, and Pitch Different features emphasize different aspects of the music. For example, mel-frequency cepstral coefficients (MFCCs) seem to work well when the general shape of the spectrum but not necessarily pitch information is important. MFCCs generally capture overall “texture” or timbral information (what instruments are playing in what general pitch range), but some pitch information is captured, and results depend upon the number of coefficients used as well as the underlying musical signal. When pitch is important, e.g. when searching for similar harmonic sequences, the chromagram is effective. The chromagram is based on the idea that tones separated by octaves have the same perceived value of chroma (Shepard 1964). Just as we can describe the chroma aspect of pitch, the short term frequency spectrum can be restructured into the chroma spectrum by combining energy at different octaves into just one octave. The chroma vector is a discretized version of the chroma spectrum where energy is summed into 12 log-spaced divisions of the octave corresponding to pitch classes (C, C#, D, ... B). By analogy to the spectrogram, the discrete chromagram is a sequence of chroma vectors. It should be noted that there are several variations of the chromagram. The computation typically begins with a short-term Fourier transform (STFT) which is used to compute the magnitude spectrum. There are different ways to “project” this onto the 12-element chroma vector. Each STFT bin can be mapped directly to the most appropriate chroma vector element (Bartsch and Wakefield 2001), or the STFT bin data can be interpolated or windowed to divide the bin value among two neighboring vector elements (Goto 2003a). Log magnitude values can be used to emphasize the presence of low-energy harmonics. Values can also be averaged, summed, or the vector can be computed to conserve the total energy. The chromagram can also be computed by using the Wavelet transform. 
Regardless of the exact details, the primary attraction of the chroma vector is that, by ignoring octaves, the vector is relatively insensitive to overall spectral energy distribution and thus to timbral variations. However, since fundamental frequencies and lower harmonics of tones feature prominently in the calculation of the chroma vector, it is quite sensitive to pitch class content, making it ideal for the detection of similar harmonic sequences in music. While MFCCs and chroma vectors can be calculated from a single short term Fourier transform, features can also be obtained from longer sequences of spectral frames. Tzanetakis and Cook (1999) use means and variances of a variety of features in a one second window. The features include the spectral centroid, spectral rolloff, spectral flux, and RMS energy. Peeters, La Burthe, and Rodet (2002) describe “dynamic” features, which model the variation of the short term spectrum over windows of about one second. In this approach, the audio signal is passed through a bank of Mel filters. The time-varying magnitudes of these filter outputs are each analyzed by a short term Fourier transform. The resulting set of features, the Fourier coefficients from each Mel filter output, is large, so a supervised learning scheme is used to find features that maximize the mutual information between feature values and hand-labeled music structures. Measures of Similarity Given a feature vector such as the MFCC or chroma vector, some measure of similarity is needed. One possibility is to compute the (dis)similarity using the Euclidean distance between feature vectors. Euclidean distance will be dependent upon feature magnitude, which is often a measure of the overall R. Dannenberg and M. Goto Music Structure 16 April 2005 3 music signal energy. To avoid giving more weight to the louder moments of music, feature vectors can be normalized, for example, to a mean of zero and a standard deviation of one or to a maximum element of one. Alternatively, similarity can be measured using the scalar (dot) product of the feature vectors. This measure will be larger when feature vectors have a similar direction. As with Euclidean distance, the scalar product will also vary as a function of the overall magnitude of the feature vectors. If the dot product is normalized by the feature vector magnitudes, the result is equal to the cosine of the angle between the vectors. If the feature vectors are first normalized to have a mean of zero, the cosine angle is equivalent to the correlation, another measure that has been used with success. Lu, Wang, and Zhang (Lu, Wang, and Zhang 2004) use a constant-Q transform (CQT), and found that CQT outperforms chroma and MFCC features using a cosine distance measure. They also introduce a “structure-based” distance measure that takes into account the harmonic structure of spectra to emphasize pitch similarity over timbral similarity, resulting in additional improvement in a music structure analysis task. Similarity can be calculated between individual feature vectors, as suggested above, but similarity can also be computed over a window of feature vectors. The measure suggested by Foote (1999) is vector correlation:", "title": "" }, { "docid": "fe012505cc7a2ea36de01fc92924a01a", "text": "The wide usage of Machine Learning (ML) has lead to research on the attack vectors and vulnerability of these systems. The defenses in this area are however still an open problem, and often lead to an arms race. 
We define a naive, secure classifier at test time and show that a Gaussian Process (GP) is an instance of this classifier given two assumptions: one concerns the distances in the training data, the other rejection at test time. Using these assumptions, we are able to show that a classifier is either secure, or generalizes and thus learns. Our analysis also points towards another factor influencing robustness, the curvature of the classifier. This connection is not unknown for linear models, but GP offer an ideal framework to study this relationship for nonlinear classifiers. We evaluate on five security and two computer vision datasets applying test and training time attacks and membership inference. We show that we only change which attacks are needed to succeed, instead of alleviating the threat. Only for membership inference, there is a setting in which attacks are unsuccessful (< 10% increase in accuracy over random guess). Given these results, we define a classification scheme based on voting, ParGP. This allows us to decide how many points vote and how large the agreement on a class has to be. This ensures a classification output only in cases when there is evidence for a decision, where evidence is parametrized. We evaluate this scheme and obtain promising results.", "title": "" }, { "docid": "1fa6ee7cf37d60c182aa7281bd333649", "text": "To cope with the explosion of information in mathematics and physics, we need a unified mathematical language to integrate ideas and results from diverse fields. Clifford Algebra provides the key to a unifled Geometric Calculus for expressing, developing, integrating and applying the large body of geometrical ideas running through mathematics and physics.", "title": "" }, { "docid": "1b4019d0f2eb9e392b5dfeea8370b625", "text": "Intellectual capital is becoming the preeminent resource for creating economic wealth. Tangible assets such as property, plant, and equipment continue to be important factors in the production of both goods and services. However, their relative importance has decreased through time as the importance of intangible, knowledge-based assets has increased. This shift in importance has raised a number of accounting questions critical for managing assets such as brand names, trade secrets, production processes, distribution channels, and work-related competencies. This paper develops a working definition of intellectual capital and a framework for identifying and classifying the various components of intellectual capital. In addition, methods of measuring intellectual capital at both the individual-component and organization levels are presented. This provides an exploratory foundation for accounting systems and processes useful for meaningful management of intellectual assets. INTELLECTUAL CAPITAL AND ITS MEASUREMENT", "title": "" }, { "docid": "fcd30a667cb2f4e89d9174cc37ac698c", "text": "v TABLE OF CONTENTS vii", "title": "" }, { "docid": "4d91ac570bec700f78521754c7e5d0ce", "text": "Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. The basic concept of CAD is to provide a computer output as a second opinion to assist radiologists' image interpretation by improving the accuracy and consistency of radiological diagnosis and also by reducing the image reading time. In this article, a number of CAD schemes are presented, with emphasis on potential clinical applications. 
These schemes include: (1) detection and classification of lung nodules on digital chest radiographs; (2) detection of nodules in low dose CT; (3) distinction between benign and malignant nodules on high resolution CT; (4) usefulness of similar images for distinction between benign and malignant lesions; (5) quantitative analysis of diffuse lung diseases on high resolution CT; and (6) detection of intracranial aneurysms in magnetic resonance angiography. Because CAD can be applied to all imaging modalities, all body parts and all kinds of examinations, it is likely that CAD will have a major impact on medical imaging and diagnostic radiology in the 21st century.", "title": "" }, { "docid": "6d882c210047b3851cb0514083cf448e", "text": "Child sexual abuse is a serious global problem and has gained public attention in recent years. Due to the popularity of digital cameras, many perpetrators take images of their sexual activities with child victims. Traditionally, it was difficult to use cutaneous vascular patterns for forensic identification, because they were nearly invisible in color images. Recently, this limitation was overcome using a computational method based on an optical model to uncover vein patterns from color images for forensic verification. This optical-based vein uncovering (OBVU) method is sensitive to the power of the illuminant and does not utilize skin color in images to obtain training parameters to optimize the vein uncovering performance. Prior publications have not included an automatic vein matching algorithm for forensic identification. As a result, the OBVU method only supported manual verification. In this paper, we propose two new schemes to overcome limitations in the OBVU method. Specifically, a color optimization scheme is used to derive the range of biophysical parameters to obtain training parameters and an automatic intensity adjustment scheme is used to enhance the robustness of the vein uncovering algorithm. We also developed an automatic matching algorithm for vein identification. This algorithm can handle rigid and non-rigid deformations and has an explicit pruning function to remove outliers in vein patterns. The proposed algorithms were examined on a database with 300 pairs of color and near infrared (NIR) images collected from the forearms of 150 subjects. The experimental results are encouraging and indicate that the proposed vein uncovering algorithm performs better than the OBVU method and that the uncovered patterns can potentially be used for automatic criminal and victim identification.", "title": "" }, { "docid": "8f7d2c365f6272a7e681a48b500299c7", "text": "In today's world, opinions and reviews accessible to us are one of the most critical factors in formulating our views and influencing the success of a brand, product or service. With the advent and growth of social media in the world, stakeholders often take to expressing their opinions on popular social media, namely Twitter. While Twitter data is extremely informative, it presents a challenge for analysis because of its humongous and disorganized nature. This paper is a thorough effort to dive into the novel domain of performing sentiment analysis of people's opinions regarding top colleges in India. Besides taking additional preprocessing measures like the expansion of net lingo and removal of duplicate tweets, a probabilistic model based on Bayes' theorem was used for spelling correction, which is overlooked in other research studies. 
This paper also highlights a comparison between the results obtained by exploiting the following machine learning algorithms: Naïve Bayes and Support Vector Machine and an Artificial Neural Network model: Multilayer Perceptron. Furthermore, a contrast has been presented between four different kernels of SVM: RBF, linear, polynomial and sigmoid.", "title": "" }, { "docid": "98ca1c0100115646bb14a00f19c611a5", "text": "The interconnected nature of graphs often results in difficult to interpret clutter. Typically techniques focus on either decluttering by clustering nodes with similar properties or grouping edges with similar relationship. We propose using mapper, a powerful topological data analysis tool, to summarize the structure of a graph in a way that both clusters data with similar properties and preserves relationships. Typically, mapper operates on a given data by utilizing a scalar function defined on every point in the data and a cover for scalar function codomain. The output of mapper is a graph that summarize the shape of the space. In this paper, we outline how to use this mapper construction on an input graphs, outline three filter functions that capture important structures of the input graph, and provide an interface for interactively modifying the cover. To validate our approach, we conduct several case studies on synthetic and real world data sets and demonstrate how our method can give meaningful summaries for graphs with various", "title": "" }, { "docid": "8410b8b76ab690ed4389efae15608d13", "text": "The most natural way to speed-up the training of large networks is to use dataparallelism on multiple GPUs. To scale Stochastic Gradient (SG) based methods to more processors, one need to increase the batch size to make full use of the computational power of each GPU. However, keeping the accuracy of network with increase of batch size is not trivial. Currently, the state-of-the art method is to increase Learning Rate (LR) proportional to the batch size, and use special learning rate with \"warm-up\" policy to overcome initial optimization difficulty. By controlling the LR during the training process, one can efficiently use largebatch in ImageNet training. For example, Batch-1024 for AlexNet and Batch-8192 for ResNet-50 are successful applications. However, for ImageNet-1k training, state-of-the-art AlexNet only scales the batch size to 1024 and ResNet50 only scales it to 8192. The reason is that we can not scale the learning rate to a large value. To enable large-batch training to general networks or datasets, we propose Layer-wise Adaptive Rate Scaling (LARS). LARS LR uses different LRs for different layers based on the norm of the weights (||w||) and the norm of the gradients (||∇w||). By using LARS algoirithm, we can scale the batch size to 32768 for ResNet50 and 8192 for AlexNet. Large batch can make full use of the system’s computational power. For example, batch-4096 can achieve 3× speedup over batch-512 for ImageNet training by AlexNet model on a DGX-1 station (8 P100 GPUs).", "title": "" }, { "docid": "fc12ac921348a77714bff6ec39b0e052", "text": "For decades, nurses (RNs) have identified barriers to providing the optimal pain management that children deserve; yet no studies were found in the literature that assessed these barriers over time or across multiple pediatric hospitals. 
The purpose of this study was to reassess barriers that pediatric RNs perceive, and how they describe optimal pain management, 3 years after our initial assessment, collect quantitative data regarding barriers identified through comments during our initial assessment, and describe any changes over time. The Modified Barriers to Optimal Pain Management survey was used to measure barriers in both studies. RNs were invited via e-mail to complete an electronic survey. Descriptive and inferential statistics were used to compare results over time. Four hundred forty-two RNs responded, representing a 38% response rate. RNs continue to describe optimal pain management most often in terms of patient comfort and level of functioning. While small changes were seen for several of the barriers, the most significant barriers continued to involve delays in the availability of medications, insufficient physician medication orders, and insufficient orders and time allowed to pre-medicate patients before procedures. To our knowledge, this is the first study to reassess RNs' perceptions of barriers to pediatric pain management over time. While little change was seen in RNs' descriptions of optimal pain management or in RNs' perceptions of barriers, no single item was rated as more than a moderate barrier to pain management. The implications of these findings are discussed in the context of improvement strategies.", "title": "" } ]
scidocsrr
28ce6219b0284ea5fe22f5219f92a165
Competitive Data Trading in Wireless-Powered Internet of Things (IoT) Crowdsensing Systems with Blockchain
[ { "docid": "87b7b05c6af2fddb00f7b1d3a60413c1", "text": "Mobile crowdsensing (MCS) is a human-driven Internet of Things service empowering citizens to observe the phenomena of individual, community, or even societal value by sharing sensor data about their environment while on the move. Typical MCS service implementations utilize cloud-based centralized architectures, which consume a lot of computational resources and generate significant network traffic, both in mobile networks and toward cloud-based MCS services. Mobile edge computing (MEC) is a natural choice to distribute MCS solutions by moving computation to network edge, since an MEC-based architecture enables significant performance improvements due to the partitioning of problem space based on location, where real-time data processing and aggregation is performed close to data sources. This in turn reduces the associated traffic in mobile core and will facilitate MCS deployments of massive scale. This paper proposes an edge computing architecture adequate for massive scale MCS services by placing key MCS features within the reference MEC architecture. In addition to improved performance, the proposed architecture decreases privacy threats and permits citizens to control the flow of contributed sensor data. It is adequate for both data analytics and real-time MCS scenarios, in line with the 5G vision to integrate a huge number of devices and enable innovative applications requiring low network latency. Our analysis of service overhead introduced by distributed architecture and service reconfiguration at network edge performed on real user traces shows that this overhead is controllable and small compared with the aforementioned benefits. When enhanced by interoperability concepts, the proposed architecture creates an environment for the establishment of an MCS marketplace for bartering and trading of both raw sensor data and aggregated/processed information.", "title": "" }, { "docid": "2c226c7be6acf725190c72a64bfcdf91", "text": "The past decade has witnessed the rapid evolution in blockchain technologies, which has attracted tremendous interests from both the research communities and industries. The blockchain network was originated from the Internet financial sector as a decentralized, immutable ledger system for transactional data ordering. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and datadriven self-organization in flat, open-access networks. In particular, the plausible characteristics of decentralization, immutability and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of the state-ofthe-art consensus protocols is focused on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review on the strategy adoption for self-organization by the individual nodes in the blockchain backbone networks. Consequently, we provide a comprehensive survey on the emerging applications of the blockchain networks in a wide range of areas. 
We highlight our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions.", "title": "" } ]
[ { "docid": "bd44d77e255837497d5026e87a46548d", "text": "Social media technologies let people connect by creating and sharing content. We examine the use of Twitter by famous people to conceptualize celebrity as a practice. On Twitter, celebrity is practiced through the appearance and performance of ‘backstage’ access. Celebrity practitioners reveal what appears to be personal information to create a sense of intimacy between participant and follower, publicly acknowledge fans, and use language and cultural references to create affiliations with followers. Interactions with other celebrity practitioners and personalities give the impression of candid, uncensored looks at the people behind the personas. But the indeterminate ‘authenticity’ of these performances appeals to some audiences, who enjoy the game playing intrinsic to gossip consumption. While celebrity practice is theoretically open to all, it is not an equalizer or democratizing discourse. Indeed, in order to successfully practice celebrity, fans must recognize the power differentials intrinsic to the relationship.", "title": "" }, { "docid": "36a538b833de4415d12cd3aa5103cf9b", "text": "Big data is an opportunity in the emergence of novel business applications such as “Big Data Analytics” (BDA). However, these data with non-traditional volumes create a real problem given the capacity constraints of traditional systems. The aim of this paper is to deal with the impact of big data in a decision-support environment and more particularly in the data integration phase. In this context, we developed a platform, called P-ETL (Parallel-ETL) for extracting (E), transforming (T) and loading (L) very large data in a data warehouse (DW). To cope with very large data, ETL processes under our P-ETL platform run on a cluster of computers in parallel way with MapReduce paradigm. The conducted experiment shows mainly that increasing tasks dealing with large data speeds-up the ETL process.", "title": "" }, { "docid": "51c82ab631167a61e553e1ab8e34a385", "text": "The social and political context of sexual identity development in the United States has changed dramatically since the mid twentieth century. Same-sex attracted individuals have long needed to reconcile their desire with policies of exclusion, ranging from explicit outlaws on same-sex activity to exclusion from major social institutions such as marriage. This paper focuses on the implications of political exclusion for the life course of individuals with same-sex desire through the analytic lens of narrative. Using illustrative evidence from a study of autobiographies of gay men spanning a 60-year period and a study of the life stories of contemporary same-sex attracted youth, we detail the implications of historic silence, exclusion, and subordination for the life course.", "title": "" }, { "docid": "ee9730fa0fde945d70130bcf33960608", "text": "An operational definition offered in this paper posits learning as a multi-dimensional and multi-phase phenomenon occurring when individuals attempt to solve what they view as a problem. To model someone’s learning accordingly to the definition, it suffices to characterize a particular sequence of that person’s disequilibrium–equilibrium phases in terms of products of a particular mental act, the characteristics of the mental act inferred from the products, and intellectual and psychological needs that instigate or result from these phases. 
The definition is illustrated by analysis of change occurring in three thinking-aloud interviews with one middle-school teacher. The interviews were about the same task: “Make up a word problem whose solution may be found by computing 4/5 divided by 2/3.” © 2010 Elsevier Inc. All rights reserved. An operational definition is a showing of something—such as a variable, term, or object—in terms of the specific process or set of validation tests used to determine its presence and quantity. Properties described in this manner must be publicly accessible so that persons other than the definer can independently measure or test for them at will. An operational definition is generally designed to model a conceptual definition (Wikipedia)", "title": "" }, { "docid": "e3b7d2c4cd3e3d860db8d4751c9eed25", "text": "While recommender systems tell users what items they might like, explanations of recommendations reveal why they might like them. Explanations provide many benefits, from improving user satisfaction to helping users make better decisions. This paper introduces tagsplanations, which are explanations based on community tags. Tagsplanations have two key components: tag relevance, the degree to which a tag describes an item, and tag preference, the user's sentiment toward a tag. We develop novel algorithms for estimating tag relevance and tag preference, and we conduct a user study exploring the roles of tag relevance and tag preference in promoting effective tagsplanations. We also examine which types of tags are most useful for tagsplanations.", "title": "" }, { "docid": "5fe43f0b23b0cfd82b414608e60db211", "text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.", "title": "" }, { "docid": "7252835fc4cc75ed0dd74a6b12da822a", "text": "Mammalian physiology and behavior are regulated by an internal time-keeping system, referred to as circadian rhythm. The circadian timing system has a hierarchical organization composed of the master clock in the suprachiasmatic nucleus (SCN) and local clocks in extra-SCN brain regions and peripheral organs. The circadian clock molecular mechanism involves a network of transcription-translation feedback loops. In addition to the clinical association between circadian rhythm disruption and mood disorders, recent studies have suggested a molecular link between mood regulation and circadian rhythm. Specifically, genetic deletion of the circadian nuclear receptor Rev-erbα induces mania-like behavior caused by increased midbrain dopaminergic (DAergic) tone at dusk. The association between circadian rhythm and emotion-related behaviors can be applied to pathological conditions, including neurodegenerative diseases. In Parkinson's disease (PD), DAergic neurons in the substantia nigra pars compacta progressively degenerate leading to motor dysfunction. 
Patients with PD also exhibit non-motor symptoms, including sleep disorder and neuropsychiatric disorders. Thus, it is important to understand the mechanisms that link the molecular circadian clock and brain machinery in the regulation of emotional behaviors and related midbrain DAergic neuronal circuits in healthy and pathological states. This review summarizes the current literature regarding the association between circadian rhythm and mood regulation from a chronobiological perspective, and may provide insight into therapeutic approaches to target psychiatric symptoms in neurodegenerative diseases involving circadian rhythm dysfunction.", "title": "" }, { "docid": "112026af056b3350eceed0c6d0035260", "text": "This paper presents a short-baseline real-time stereo vision system that is capable of the simultaneous and robust estimation of the ego-motion and of the 3D structure and the independent motion of thousands of points of the environment. Kalman filters estimate the position and velocity of world points in 3D Euclidean space. The six degrees of freedom of the ego-motion are obtained by minimizing the projection error of the current and previous clouds of static points. Experimental results with real data in indoor and outdoor environments demonstrate the robustness, accuracy and efficiency of our approach. Since the baseline is as short as 13cm, the device is head-mountable, and can be used by a visually impaired person. Our proposed system can be used to augment the perception of the user in complex dynamic environments.", "title": "" }, { "docid": "4e3f56861c288cca8191a11d2125ede0", "text": "A top-hat monopole Yagi antenna is presented to produce an end-fire radiation beam. The antenna has an extremely low profile and wide operating bandwidth. It consists of a folded top-hat monopole as the driven element and four short-circuited top-hat monopoles as parasitic elements. A broad bandwidth can be achieved by adjusting the different resonances introduced by the driven and parasitic elements. A prototype operating at the UHF band (f0 = 550 MHz) is fabricated and tested. Measured results show that a fractional bandwidth (|S11| <; -10 dB) of 20.5% is obtained while the antenna height is only λ0/28 at the center frequency.", "title": "" }, { "docid": "70fea2037a5ca55718512c2f2243d387", "text": "Malicious modification of hardware during design or fabrication has emerged as a major security concern. Such tampering (also referred to as Hardware Trojan) causes an integrated circuit (IC) to have altered functional behavior, potentially with disastrous consequences in safety-critical applications. Conventional design-time verification and post-manufacturing testing cannot be readily extended to detect hardware Trojans due to their stealthy nature, inordinately large number of possible instances and large variety in structure and operating mode. In this paper, we analyze the threat posed by hardware Trojans and the methods of deterring them. We present a Trojan taxonomy, models of Trojan operations and a review of the state-of-the-art Trojan prevention and detection techniques. Next, we discuss the major challenges associated with this security concern and future research needs to address them.", "title": "" }, { "docid": "1db450f3e28907d6940c87d828fc1566", "text": "The task of colorizing black and white images has previously been explored for natural images. In this paper we look at the task of colorization on a different domain: webtoons. 
To our knowledge this type of dataset hasn't been used before. Webtoons are usually produced in color thus they make a good dataset for analyzing different colorization models. Comics like webtoons also present some additional challenges over natural images, such as occlusion by speech bubbles and text. First we look at some of the previously introduced models' performance on this task and suggest modifications to address their problems. We propose a new model composed of two networks; one network generates sparse color information and a second network uses this generated color information as input to apply color to the whole image. These two networks are trained end-to-end. Our proposed model solves some of the problems observed with other architectures, resulting in better colorizations.", "title": "" }, { "docid": "f1755e987da9d915eb9969e7b1eeb8dc", "text": "Recent advances in distant-talking ASR research have confirmed that speech enhancement is an essential technique for improving the ASR performance, especially in the multichannel scenario. However, speech enhancement inevitably distorts speech signals, which can cause significant degradation when enhanced signals are used as training data. Thus, distant-talking ASR systems often resort to using the original noisy signals as training data and the enhanced signals only at test time, and give up on taking advantage of enhancement techniques in the training stage. This paper proposes to make use of enhanced features in the student-teacher learning paradigm. The enhanced features are used as input to a teacher network to obtain soft targets, while a student network tries to mimic the teacher network's outputs using the original noisy features as input, so that speech enhancement is implicitly performed within the student network. Compared with conventional student-teacher learning, which uses a better network as teacher, the proposed self-supervised method uses better (enhanced) inputs to a teacher. This setup matches the above scenario of making use of enhanced features in network training. Experiments with the CHiME-4 challenge real dataset show significant ASR improvements with an error reduction rate of 12% in the single-channel track and 15% in the 2-channel track, respectively, by using 6-channel beamformed features for the teacher model.", "title": "" }, { "docid": "cebdedb344f2ba7efb95c2933470e738", "text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. 
Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks", "title": "" }, { "docid": "141ecc1fe0c33bfd647e4d62956f0212", "text": "a Emerging Markets Research Centre (EMaRC), School of Management, Swansea University Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK b Section of Information & Communication Technology, Faculty of Technology, Policy, and Management, Delft University of Technology, The Netherlands c Nottingham Business School, Nottingham Trent University, UK d School of Management, Swansea University Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK e School of Management, Swansea University Bay Campus, Fabian Way, Crymlyn Burrows, Swansea, SA1 8EN, Wales, UK", "title": "" }, { "docid": "dc7262a2e046bd5f633e9f5fbb5f1830", "text": "We investigate a dual-annular-ring CMUT array configuration for forward-looking intravascular ultrasound (FL-IVUS) imaging. The array consists of separate, concentric transmit and receive ring arrays built on the same silicon substrate. This configuration has the potential for independent optimization of each array and uses the silicon area more effectively without any particular drawback. We designed and fabricated a 1 mm diameter test array which consists of 24 transmit and 32 receive elements. We investigated synthetic phased array beamforming with a non-redundant subset of transmit-receive element pairs of the dual-annular-ring array. For imaging experiments, we designed and constructed a programmable FPGA-based data acquisition and phased array beamforming system. Pulse-echo measurements along with imaging simulations suggest that dual-ring-annular array should provide performance suitable for real-time FL-IVUS applications", "title": "" }, { "docid": "39539ad490065e2a81b6c07dd11643e5", "text": "Stock prices are formed based on short and/or long-term commercial and trading activities that reflect different frequencies of trading patterns. However, these patterns are often elusive as they are affected by many uncertain political-economic factors in the real world, such as corporate performances, government policies, and even breaking news circulated across markets. Moreover, time series of stock prices are non-stationary and non-linear, making the prediction of future price trends much challenging. To address them, we propose a novel State Frequency Memory (SFM) recurrent network to capture the multi-frequency trading patterns from past market data to make long and short term predictions over time. Inspired by Discrete Fourier Transform (DFT), the SFM decomposes the hidden states of memory cells into multiple frequency components, each of which models a particular frequency of latent trading pattern underlying the fluctuation of stock price. Then the future stock prices are predicted as a nonlinear mapping of the combination of these components in an Inverse Fourier Transform (IFT) fashion. Modeling multi-frequency trading patterns can enable more accurate predictions for various time ranges: while a short-term prediction usually depends on high frequency trading patterns, a long-term prediction should focus more on the low frequency trading patterns targeting at long-term return. Unfortunately, no existing model explicitly distinguishes between various frequencies of trading patterns to make dynamic predictions in literature. 
The experiments on the real market data also demonstrate more competitive performance by the SFM as compared with the state-of-the-art methods.", "title": "" }, { "docid": "7e846a58cbf49231c41789d1190bce67", "text": "We study the problem of zero-shot classification in which we don't have labeled data in target domain. Existing approaches learn a model from source domain and apply it without adaptation to target domain, which is prone to domain shift problem. To solve the problem, we propose a novel Learning Discriminative Instance Attribute(LDIA) method. Specifically, we learn a projection matrix for both the source and target domain jointly and also use prototype in the attribute space to regularise the learned projection matrix. Therefore, the information of the source domain can be effectively transferred to the target domain. The experimental results on two benchmark datasets demonstrate that the proposed LDIA method exceeds competitive approaches for zero-shot classification task.", "title": "" }, { "docid": "a2f062482157efb491ca841cc68b7fd3", "text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.", "title": "" }, { "docid": "1e0eade3cc92eb79160aeac35a3a26d1", "text": "Global environmental concerns and the escalating demand for energy, coupled with steady progress in renewable energy technologies, are opening up new opportunities for utilization of renewable energy vailable online 12 January 2011", "title": "" }, { "docid": "9b575699e010919b334ac3c6bc429264", "text": "Over the last decade, keyword search over relational data has attracted considerable attention. A possible approach to face this issue is to transform keyword queries into one or more SQL queries to be executed by the relational DBMS. Finding these queries is a challenging task since the information they represent may be modeled across different elements where the data of interest is stored, but also to find out how these elements are interconnected. All the approaches that have been proposed so far provide a monolithic solution. In this work, we, instead, divide the problem into three steps: the first one, driven by the user's point of view, takes into account what the user has in mind when formulating keyword queries, the second one, driven by the database perspective, considers how the data is represented in the database schema. Finally, the third step combines these two processes. 
We present the theory behind our approach, and its implementation into a system called QUEST (QUEry generator for STructured sources), which has been deeply tested to show the efficiency and effectiveness of our approach. Furthermore, we report on the outcomes of a number of experimental results that we", "title": "" } ]
scidocsrr
e8bbe717500b0fb201be13a68456ecd4
Understanding the Digital Marketing Environment with KPIs and Web Analytics
[ { "docid": "0994065c757a88373a4d97e5facfee85", "text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.", "title": "" } ]
[ { "docid": "76efa42a492d8eb36b82397e09159c30", "text": "attempt to foster AI and intelligent robotics research by providing a standard problem where a wide range of technologies can be integrated and examined. The first RoboCup competition will be held at the Fifteenth International Joint Conference on Artificial Intelligence in Nagoya, Japan. A robot team must actually perform a soccer game, incorporating various technologies, including design principles of autonomous agents, multiagent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup’s final target is a world cup with real robots, RoboCup offers a software platform for research on the software aspects of RoboCup. This article describes technical challenges involved in RoboCup, rules, and the simulation environment.", "title": "" }, { "docid": "1d26fc3a5f07e7ea678753e7171846c4", "text": "Data uncertainty is an inherent property in various applications due to reasons such as outdated sources or imprecise measurement. When data mining techniques are applied to these data, their uncertainty has to be considered to obtain high quality results. We present UK-means clustering, an algorithm that enhances the K-means algorithm to handle data uncertainty. We apply UKmeans to the particular pattern of moving-object uncertainty. Experimental results show that by considering uncertainty, a clustering algorithm can produce more accurate results.", "title": "" }, { "docid": "711daac04e27d0a413c99dd20f6f82e1", "text": "The gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently most systems only classify dataset with a couple of dozens different actions. Moreover, feature extraction from the data is often computational complex. In this paper, we propose a novel system to recognize the actions from skeleton data with simple, but effective, features using deep neural networks. Features are extracted for each frame based on the relative positions of joints (PO), temporal differences (TD), and normalized trajectories of motion (NT). Given these features a hybrid multi-layer perceptron is trained, which simultaneously classifies and reconstructs input data. We use deep autoencoder to visualize learnt features. The experiments show that deep neural networks can capture more discriminative information than, for instance, principal component analysis can. We test our system on a public database with 65 classes and more than 2,000 motion sequences. We obtain an accuracy above 95% which is, to our knowledge, the state of the art result for such a large dataset.", "title": "" }, { "docid": "b93455e6b023910bf7711d56d16f62a2", "text": "Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict what drugs are likely to target proteins involved with both diseases X and Y?—a query that requires reasoning about all possible proteins that might interact with diseases X and Y. 
Here we introduce a framework to efficiently make predictions about conjunctive logical queries—a flexible but tractable subset of first-order logic—on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.", "title": "" }, { "docid": "6a8afd6713425e7dc047da08d7c4c773", "text": "We present the first linear time (1 + /spl epsiv/)-approximation algorithm for the k-means problem for fixed k and /spl epsiv/. Our algorithm runs in O(nd) time, which is linear in the size of the input. Another feature of our algorithm is its simplicity - the only technique involved is random sampling.", "title": "" }, { "docid": "93133be6094bba6e939cef14a72fa610", "text": "We systematically searched available databases. We reviewed 6,143 studies published from 1833 to 2017. Reports in English, French, German, Italian, and Spanish were considered, as were publications in other languages if definitive treatment and recurrence at specific follow-up times were described in an English abstract. We assessed data in the manner of a meta-analysis of RCTs; further we assessed non-RCTs in the manner of a merged data analysis. In the RCT analysis including 11,730 patients, Limberg & Dufourmentel operations were associated with low recurrence of 0.6% (95%CI 0.3–0.9%) 12 months and 1.8% (95%CI 1.1–2.4%) respectively 24 months postoperatively. Analysing 89,583 patients from RCTs and non-RCTs, the Karydakis & Bascom approaches were associated with recurrence of only 0.2% (95%CI 0.1–0.3%) 12 months and 0.6% (95%CI 0.5–0.8%) 24 months postoperatively. Primary midline closure exhibited long-term recurrence up to 67.9% (95%CI 53.3–82.4%) 240 months post-surgery. For most procedures, only a few RCTs without long term follow up data exist, but substitute data from numerous non-RCTs are available. Recurrence in PSD is highly dependent on surgical procedure and by follow-up time; both must be considered when drawing conclusions regarding the efficacy of a procedure.", "title": "" }, { "docid": "3688c987419daade77c44912fbc72ecf", "text": "We propose a visual food recognition framework that integrates the inherent semantic relationships among fine-grained classes. Our method learns semantics-aware features by formulating a multi-task loss function on top of a convolutional neural network (CNN) architecture. It then refines the CNN predictions using a random walk based smoothing procedure, which further exploits the rich semantic information. We evaluate our algorithm on a large \"food-in-the-wild\" benchmark, as well as a challenging dataset of restaurant food dishes with very few training images. The proposed method achieves higher classification accuracy than a baseline which directly fine-tunes a deep learning network on the target dataset. 
Furthermore, we analyze the consistency of the learned model with the inherent semantic relationships among food categories. Results show that the proposed approach provides more semantically meaningful results than the baseline method, even in cases of mispredictions.", "title": "" }, { "docid": "566a2b2ff835d10e0660fb89fd6ae618", "text": "We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).", "title": "" }, { "docid": "72345bf404d21d0f7aa1e54a5710674c", "text": "Many real-world data sets exhibit skewed class distributions in which almost all cases are allotted to a class and far fewer cases to a smaller, usually more interesting class. A classifier induced from an imbalanced data set has, typically, a low error rate for the majority class and an unacceptable error rate for the minority class. This paper firstly provides a systematic study on the various methodologies that have tried to handle this problem. Finally, it presents an experimental study of these methodologies with a proposed mixture of expert agents and it concludes that such a framework can be a more effective solution to the problem. Our method seems to allow improved identification of difficult small classes in predictive analysis, while keeping the classification ability of the other classes in an acceptable level.", "title": "" }, { "docid": "b23d73e29fc205df97f073eb571a2b47", "text": "In this paper, we study two different trajectory planning problems for robotmanipulators. In the first case, the end-effector of the robot is constrained to move along a prescribed path in the workspace, whereas in the second case, the trajectory of the end-effector has to be determined in the presence of obstacles. Constraints of this type are called holonomic constraints. Both problems have been solved as optimal control problems. Given the dynamicmodel of the robotmanipulator, the initial state of the system, some specifications about the final state and a set of holonomic constraints, one has to find the trajectory and the actuator torques that minimize the energy consumption during the motion. The presence of holonomic constraints makes the optimal control problem particularly difficult to solve. Our method involves a numerical resolution of a reformulation of the constrained optimal control problem into an unconstrained calculus of variations problem in which the state space constraints and the dynamic equations, also regarded as constraints, are treated by means of special derivative multipliers. 
We solve the resulting calculus of variations problem using a numerical approach based on the Euler–Lagrange necessary condition in the integral form in which time is discretized and admissible variations for each variable are approximated using a linear combination of piecewise continuous basis functions of time. The use of the Euler–Lagrange necessary condition in integral form avoids the need for numerical corner conditions and thenecessity of patching together solutions between corners. In thisway, a generalmethod for the solution of constrained optimal control problems is obtained inwhich holonomic constraints can be easily treated. Numerical results of the application of thismethod to trajectory planning of planar horizontal robot manipulators with two revolute joints are reported. © 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5cd726f49dd0cb94fe7d2d724da9f215", "text": "We implement pedestrian dead reckoning (PDR) for indoor localization. With a waist-mounted PDR based system on a smart-phone, we estimate the user's step length that utilizes the height change of the waist based on the Pythagorean Theorem. We propose a zero velocity update (ZUPT) method to address sensor drift error: Simple harmonic motion and a low-pass filtering mechanism combined with the analysis of gait characteristics. This method does not require training to develop the step length model. Exploiting the geometric similarity between the user trajectory and the floor map, our map matching algorithm includes three different filters to calibrate the direction errors from the gyro using building floor plans. A sliding-window-based algorithm detects corners. The system achieved 98% accuracy in estimating user walking distance with a waist-mounted phone and 97% accuracy when the phone is in the user's pocket. ZUPT improves sensor drift error (the accuracy drops from 98% to 84% without ZUPT) using 8 Hz as the cut-off frequency to filter out sensor noise. Corner length impacted the corner detection algorithm. In our experiments, the overall location error is about 0.48 meter.", "title": "" }, { "docid": "dc18c0e5737b3d641418e5b33dd3f0e7", "text": "Millimeter wave (mmWave) communications have recently attracted large research interest, since the huge available bandwidth can potentially lead to the rates of multiple gigabit per second per user. Though mmWave can be readily used in stationary scenarios, such as indoor hotspots or backhaul, it is challenging to use mmWave in mobile networks, where the transmitting/receiving nodes may be moving, channels may have a complicated structure, and the coordination among multiple nodes is difficult. To fully exploit the high potential rates of mmWave in mobile networks, lots of technical problems must be addressed. This paper presents a comprehensive survey of mmWave communications for future mobile networks (5G and beyond). We first summarize the recent channel measurement campaigns and modeling results. Then, we discuss in detail recent progresses in multiple input multiple output transceiver design for mmWave communications. After that, we provide an overview of the solution for multiple access and backhauling, followed by the analysis of coverage and connectivity. 
Finally, the progresses in the standardization and deployment of mmWave for mobile networks are discussed.", "title": "" }, { "docid": "b5b8ae3b7b307810e1fe39630bc96937", "text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.", "title": "" }, { "docid": "3e7941e6d2e5c2991030950d2a13d48f", "text": "Mobile edge cloud (MEC) is a model for enabling on-demand elastic access to, or an interaction with a shared pool of reconfigurable computing resources such as servers, storage, peer devices, applications, and services, at the edge of the wireless network in close proximity to mobile users. It overcomes some obstacles of traditional central clouds by offering wireless network information and local context awareness as well as low latency and bandwidth conservation. This paper presents a comprehensive survey of MEC systems, including the concept, architectures, and technical enablers. First, the MEC applications are explored and classified based on different criteria, the service models and deployment scenarios are reviewed and categorized, and the factors influencing the MEC system design are discussed. Then, the architectures and designs of MEC systems are surveyed, and the technical issues, existing solutions, and approaches are presented. The open challenges and future research directions of MEC are further discussed.", "title": "" }, { "docid": "8c662416784ddaf8dae387926ba0b17c", "text": "Autoimmune reactions to vaccinations may rarely be induced in predisposed individuals by molecular mimicry or bystander activation mechanisms. 
Autoimmune reactions reliably considered vaccine-associated, include Guillain-Barré syndrome after 1976 swine influenza vaccine, immune thrombocytopenic purpura after measles/mumps/rubella vaccine, and myopericarditis after smallpox vaccination, whereas the suspected association between hepatitis B vaccine and multiple sclerosis has not been further confirmed, even though it has been recently reconsidered, and the one between childhood immunization and type 1 diabetes seems by now to be definitively gone down. Larger epidemiological studies are needed to obtain more reliable data in most suggested associations.", "title": "" }, { "docid": "9f40a57159a06ecd9d658b4d07a326b5", "text": "_____________________________________________________________________________ The aim of the present study was to investigate a cytotoxic oxidative cell stress related and the antioxidant profile of kaempferol, quercetin, and isoquercitrin. The flavonol compounds were able to act as scavengers of superoxide anion (but not hydrogen peroxide), hypochlorous acid, chloramine and nitric oxide. Although flavonoids are widely described as antioxidants and this activity is generally related to beneficial effects on human health, here we show important cytotoxic actions of three well known flavonoids. They were able to promote hemolysis which one was exacerbated on the presence of hypochlorous acid but not by AAPH radical. Therefore, WWW.SCIELO.BR/EQ VOLUME 36, NÚMERO 2, 2011", "title": "" }, { "docid": "4129d2906d3d3d96363ff0812c8be692", "text": "In this paper, we propose a picture recommendation system built on Instagram, which facilitates users to query correlated pictures by keying in hashtags or clicking images. Users can access the value-added information (or pictures) on Instagram through the recommendation platform. In addition to collecting available hashtags using the Instagram API, the system also uses the Free Dictionary to build the relationships between all the hashtags in a knowledge base. Thus, two kinds of correlations can be provided for a query in the system; i.e., user-defined correlation and system-defined correlation. Finally, the experimental results show that users have good satisfaction degrees with both user-defined correlation and system-defined correlation methods.", "title": "" }, { "docid": "8e28f1561b3a362b2892d7afa8f2164c", "text": "Inference based techniques are one of the major approaches to analyze DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an indepth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. 
Our scheme avoids the pitfall of naive approaches that rely on weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improving inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only minor negative impact of detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.", "title": "" }, { "docid": "acfdfe2de61ec2697ef865b1e5a42721", "text": "Artificial Immune System (AIS) algorithm is a novel and vibrant computational paradigm, enthused by the biological immune system. Over the last few years, the artificial immune system has been sprouting to solve numerous computational and combinatorial optimization problems. In this paper, we introduce the restricted MAX-kSAT as a constraint optimization problem that can be solved by a robust computational technique. Hence, we will implement the artificial immune system algorithm incorporated with the Hopfield neural network to solve the restricted MAX-kSAT problem. The proposed paradigm will be compared with the traditional method, Brute force search algorithm integrated with Hopfield neural network. The results demonstrate that the artificial immune system integrated with Hopfield network outperforms the conventional Hopfield network in solving restricted MAX-kSAT. All in all, the result has provided a concrete evidence of the effectiveness of our proposed paradigm to be applied in other constraint optimization problem. The work presented here has many profound implications for future studies to counter the variety of satisfiability problem.", "title": "" } ]
scidocsrr
957bd9c647fc04f4bec7e4ecf3b6f048
Distributed Federated Learning for Ultra-Reliable Low-Latency Vehicular Communications
[ { "docid": "f69e0ee2fa795e020c36dd3389ce93da", "text": "Ensuring ultrareliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay, and average response time) is no longer an option but a necessity. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this paper is a first step to filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a wide variety of techniques and methodologies pertaining to the requirements of URLLC, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and high-reliability wireless networks.", "title": "" }, { "docid": "244b583ff4ac48127edfce77bc39e768", "text": "We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users’ mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have extremely large number of devices in the network — as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.", "title": "" }, { "docid": "ea87bfc0d6086e367e8950b445529409", "text": "• Queue stability (Chapter 2.1) • Scheduling for stability, capacity regions (Chapter 2.3) • Linear programs (Chapter 2.3, Chapter 3) • Energy optimality (Chapter 3.2) • Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6) • Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3) • Inequality constraints and virtual queues (Chapter 4.4) • Drift-plus-penalty algorithm (Chapter 4.5) • Performance and delay tradeoffs (Chapter 3.2, 4.5) • Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)", "title": "" } ]
[ { "docid": "9fd2ec184fa051070466f61845e6df60", "text": "Buildings across the world contribute significantly to the overall energy consumption and are thus stakeholders in grid operations. Towards the development of a smart grid, utilities and governments across the world are encouraging smart meter deployments. High resolution (often at every 15 minutes) data from these smart meters can be used to understand and optimize energy consumptions in buildings. In addition to smart meters, buildings are also increasingly managed with Building Management Systems (BMS) which control different sub-systems such as lighting and heating, ventilation, and air conditioning (HVAC). With the advent of these smart meters, increased usage of BMS and easy availability and widespread installation of ambient sensors, there is a deluge of building energy data. This data has been leveraged for a variety of applications such as demand response, appliance fault detection and optimizing HVAC schedules. Beyond the traditional use of such data sets, they can be put to effective use towards making buildings smarter and hence driving every possible bit of energy efficiency. Effective use of this data entails several critical areas from sensing to decision making and participatory involvement of occupants. Picking from wide literature in building energy efficiency, we identify five crust areas (also referred to as 5 Is) for realizing data driven energy efficiency in buildings : i) instrument optimally; ii) interconnect sub-systems; iii) inferred decision making; iv) involve occupants and v) intelligent operations. We classify prior work as per these 5 Is and discuss challenges, opportunities and applications across them. Building upon these 5 Is we discuss a well studied problem in building energy efficiency non-intrusive load monitoring (NILM) and how research in this area spans across the 5 Is.", "title": "" }, { "docid": "c8e5257c2ed0023dc10786a3071c6e6a", "text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.", "title": "" }, { "docid": "ccb6da03ae9520de4843082ac0583978", "text": "Zero-shot learning (ZSL) aims to recognize unseen image categories by learning an embedding space between image and semantic representations. 
For years, among existing works, it has been the center task to learn the proper mapping matrices aligning the visual and semantic space, whilst the importance to learn discriminative representations for ZSL is ignored. In this work, we retrospect existing methods and demonstrate the necessity to learn discriminative representations for both visual and semantic instances of ZSL. We propose an end-to-end network that is capable of 1) automatically discovering discriminative regions by a zoom network; and 2) learning discriminative semantic representations in an augmented space introduced for both user-defined and latent attributes. Our proposed method is tested extensively on two challenging ZSL datasets, and the experiment results show that the proposed method significantly outperforms state-of-the-art methods.", "title": "" }, { "docid": "27b2f82780c4113bb8a234cac0cf38f9", "text": "Conventional robot manipulators have singularities in their workspaces and constrained spatial movements. Flexible and soft robots provide a unique solution to overcome this limitation. Flexible robot arms have biologically inspired characteristics as flexible limbs and redundant degrees of freedom. From these special characteristics, flexible manipulators are able to develop abilities such as bend, stretch and adjusting stiffness to traverse a complex maze. Many researchers are working to improve capabilities of flexible arms by improving the number of degrees of freedoms and their methodologies. The proposed flexible robot arm is composed of multiple sections and each section contains three similar segments and a base segment. These segments act as the backbone of the basic structure and each section can be controlled by changing the length of three control wires. These control wires pass through each segment and are held in place by springs. This design provides each segment with 2 DOF. The proposed system single section can be bent 90° with respective to its centre axis. Kinematics of the flexible robot is derived with respect to the base segment.", "title": "" }, { "docid": "ba391ddf37a4757bc9b8d9f4465a66dc", "text": "Adverse childhood experiences (ACEs) have been linked with risky health behaviors and the development of chronic diseases in adulthood. This study examined associations between ACEs, chronic diseases, and risky behaviors in adults living in Riyadh, Saudi Arabia in 2012 using the ACE International Questionnaire (ACE-IQ). A cross-sectional design was used, and adults who were at least 18 years of age were eligible to participate. ACEs event scores were measured for neglect, household dysfunction, abuse (physical, sexual, and emotional), and peer and community violence. The ACE-IQ was supplemented with questions on risky health behaviors, chronic diseases, and mood. A total of 931 subjects completed the questionnaire (a completion rate of 88%); 57% of the sample was female, 90% was younger than 45 years, 86% had at least a college education, 80% were Saudi nationals, and 58% were married. One-third of the participants (32%) had been exposed to 4 or more ACEs, and 10%, 17%, and 23% had been exposed to 3, 2, or 1 ACEs respectively. Only 18% did not have an ACE. The prevalence of risky health behaviors ranged between 4% and 22%. The prevalence of self-reported chronic diseases ranged between 6% and 17%. Being exposed to 4 or more ACEs increased the risk of having chronic diseases by 2-11 fold, and increased risky health behaviors by 8-21 fold. 
The findings of this study will contribute to the planning and development of programs to prevent child maltreatment and to alleviate the burden of chronic diseases in adults.", "title": "" }, { "docid": "38a74fff83d3784c892230255943ee23", "text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.", "title": "" }, { "docid": "9341757e2403b6fd63738f8ec0d33a15", "text": "The objective of this study was to review the literature with respect to the root and canal systems in the maxillary first molar. Root anatomy studies were divided into laboratory studies (in vitro), clinical root canal system anatomy studies (in vivo) and clinical case reports of anomalies. Over 95% (95.9%) of maxillary first molars had three roots and 3.9% had two roots. The incidence of fusion of any two or three roots was approximately 5.2%. Conical and C-shaped roots and canals were rarely found (0.12%). This review contained the most data on the canal morphology of the mesiobuccal root with a total of 8399 teeth from 34 studies. The incidence of two canals in the mesiobuccal root was 56.8% and of one canal was 43.1% in a weighted average of all reported studies. The incidence of two canals in the mesiobuccal root was higher in laboratory studies (60.5%) compared to clinical studies (54.7%). Less variation was found in the distobuccal and palatal roots and the results were reported from fourteen studies consisting of 2576 teeth. One canal was found in the distobuccal root in 98.3% of teeth whereas the palatal root had one canal in over 99% of the teeth studied.", "title": "" }, { "docid": "0caac54baab8117c8b25b04bd7460f48", "text": "ÐThis paper presents a new variational framework for detecting and tracking multiple moving objects in image sequences. Motion detection is performed using a statistical framework for which the observed interframe difference density function is approximated using a mixture model. This model is composed of two components, namely, the static (background) and the mobile (moving objects) one. Both components are zero-mean and obey Laplacian or Gaussian law. This statistical framework is used to provide the motion detection boundaries. 
Additionally, the original frame is used to provide the moving object boundaries. Then, the detection and the tracking problem are addressed in a common framework that employs a geodesic active contour objective function. This function is minimized using a gradient descent method, where a flow deforms the initial curve towards the minimum of the objective function, under the influence of internal and external image dependent forces. Using the level set formulation scheme, complex curves can be detected and tracked while topological changes for the evolving curves are naturally managed. To reduce the computational cost required by a direct implementation of the level set formulation scheme, a new approach named Hermes is proposed. Hermes exploits aspects from the well-known front propagation algorithms (Narrow Band, Fast Marching) and compares favorably to them. Very promising experimental results are provided using real video sequences. Index TermsÐFront propagation, geodesic active contours, level set theory, motion detection, tracking.", "title": "" }, { "docid": "5637bed8be75d7e79a2c2adb95d4c28e", "text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). 
The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.", "title": "" }, { "docid": "1667c7e872bac649051bb45fc85e9921", "text": "Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biométrie identification is possible because a person's movements form a unique signature and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether (or not) the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions-all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.", "title": "" }, { "docid": "12819e1ad6ca9b546e39ed286fe54d23", "text": "This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct 3D facial model for animation from two orthogonal pictures taken from front and side views or from range data obtained from any available resources. 
It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then the fine modifications follow if range data is available. Automatic texture mapping is employed using a composed image from the two images. The reconstructed 3Dface can be animated immediately with given expression parameters. Several faces by one methodology applied to different input data to get a final animatable face are illustrated.", "title": "" }, { "docid": "3f2aa3cde019d56240efba61d52592a4", "text": "Drivers like global competition, advances in technology, and new attractive market opportunities foster a process of servitization and thus the search for innovative service business models. To facilitate this process, different methods and tools for the development of new business models have emerged. Nevertheless, business model approaches are missing that enable the representation of cocreation as one of the most important service-characteristics. Rooted in a cumulative research design that seeks to advance extant business model representations, this goal is to be closed by the Service Business Model Canvas (SBMC). This contribution comprises the application of thinking-aloud protocols for the formative evaluation of the SBMC. With help of industry experts and academics with experience in the service sector and business models, the usability is tested and implications for its further development derived. Furthermore, this study provides empirically based insights for the design of service business model representation that can facilitate the development of future business models.", "title": "" }, { "docid": "612416cb82559f94d8d4b888bad17ba1", "text": "Future plastic materials will be very different from those that are used today. The increasing importance of sustainability promotes the development of bio-based and biodegradable polymers, sometimes misleadingly referred to as 'bioplastics'. Because both terms imply \"green\" sources and \"clean\" removal, this paper aims at critically discussing the sometimes-conflicting terminology as well as renewable sources with a special focus on the degradation of these polymers in natural environments. With regard to the former we review innovations in feedstock development (e.g. microalgae and food wastes). In terms of the latter, we highlight the effects that polymer structure, additives, and environmental variables have on plastic biodegradability. We argue that the 'biodegradable' end-product does not necessarily degrade once emitted to the environment because chemical additives used to make them fit for purpose will increase the longevity. In the future, this trend may continue as the plastics industry also is expected to be a major user of nanocomposites. Overall, there is a need to assess the performance of polymer innovations in terms of their biodegradability especially under realistic waste management and environmental conditions, to avoid the unwanted release of plastic degradation products in receiving environments.", "title": "" }, { "docid": "61ffc67f0e242afd8979d944cbe2bff4", "text": "Diprosopus is a rare congenital malformation associated with high mortality. Here, we describe a patient with diprosopus, multiple life-threatening anomalies, and genetic mutations. 
Prenatal diagnosis and counseling made a beneficial impact on the family and medical providers in the care of this case.", "title": "" }, { "docid": "a0cba009ac41ab57bdea75c1676715a6", "text": "These notes provide a brief introduction to the theory of noncooperative differential games. After the Introduction, Section 2 reviews the theory of static games. Different concepts of solution are discussed, including Pareto optima, Nash and Stackelberg equilibria, and the co-co (cooperative-competitive) solutions. Section 3 introduces the basic framework of differential games for two players. Open-loop solutions, where the controls implemented by the players depend only on time, are considered in Section 4. It is shown that Nash and Stackelberg solutions can be computed by solving a two-point boundary value problem for a system of ODEs, derived from the Pontryagin maximum principle. Section 5 deals with solutions in feedback form, where the controls are allowed to depend on time and also on the current state of the system. In this case, the search for Nash equilibrium solutions usually leads to a highly nonlinear system of HamiltonJacobi PDEs. In dimension higher than one, this system is generically not hyperbolic and the Cauchy problem is thus ill posed. Due to this instability, closed-loop solutions to differential games are mainly considered in the special case with linear dynamics and quadratic costs. In Section 6, a game in continuous time is approximated by a finite sequence of static games, by a time discretization. Depending of the type of solution adopted in each static game, one obtains different concept of solutions for the original differential game. Section 7 deals with differential games in infinite time horizon, with exponentially discounted payoffs. In this case, the search for Nash solutions in feedback form leads to a system of time-independent H-J equations. Section 8 contains a simple example of a game with infinitely many players. This is intended to convey a flavor of the newly emerging theory of mean field games. Modeling issues, and directions of current research, are briefly discussed in Section 9. Finally, the Appendix collects background material on multivalued functions, selections and fixed point theorems, optimal control theory, and hyperbolic PDEs.", "title": "" }, { "docid": "c576c08aa746ea30a528e104932047a6", "text": "Despite tremendous progress achieved in temporal action localization, state-of-the-art methods still struggle to train accurate models when annotated data is scarce. In this paper, we introduce a novel active learning framework for temporal localization that aims to mitigate this data dependency issue. We equip our framework with active selection functions that can reuse knowledge from previously annotated datasets. We study the performance of two state-of-the-art active selection functions as well as two widely used active learning baselines. To validate the effectiveness of each one of these selection functions, we conduct simulated experiments on ActivityNet. We find that using previously acquired knowledge as a bootstrapping source is crucial for active learners aiming to localize actions. When equipped with the right selection function, our proposed framework exhibits significantly better performance than standard active learning strategies, such as uncertainty sampling. Finally, we employ our framework to augment the newly compiled Kinetics action dataset with ground-truth temporal annotations. 
As a result, we collect Kinetics-Localization, a novel large-scale dataset for temporal action localization, which contains more than 15K YouTube videos.", "title": "" }, { "docid": "946c6b2dc7bd102597bd96a0d4a4f46e", "text": "Due to the non-stationarity nature and poor signal-to-noise ratio (SNR) of brain signals, repeated time-consuming calibration is one of the biggest problems for today's brain-computer interfaces (BCIs). In order to reduce calibration time, many transfer learning methods have been proposed to extract discriminative or stationary information from other subjects or prior sessions for target classification task. In this paper, we review the existing transfer learning methods used for BCI classification problems and organize them into three cases based on different transfer strategies. Besides, we list the datasets used in these BCI studies.", "title": "" }, { "docid": "7b5f0c88eaf8c23b8e2489e140d0022f", "text": "Deep learning has been integrated into several existing left ventricle (LV) endocardium segmentation methods to yield impressive accuracy improvements. However, challenges remain for segmentation of LV epicardium due to its fuzzier appearance and complications from the right ventricular insertion points. Segmenting the myocardium collectively (i.e., endocardium and epicardium together) confers the potential for better segmentation results. In this work, we develop a computational platform based on deep learning to segment the whole LV myocardium simultaneously from a cardiac magnetic resonance (CMR) image. The deep convolutional network is constructed using Caffe platform, which consists of 6 convolutional layers, 2 pooling layers, and 1 de-convolutional layer. A preliminary result with Dice metric of 0.75±0.04 is reported on York MR dataset. While in its current form, our proposed one-step deep learning method cannot compete with state-of-art myocardium segmentation methods, it delivers promising first pass segmentation results.", "title": "" }, { "docid": "cbbb2c0a9d2895c47c488bed46d8f468", "text": "We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.", "title": "" }, { "docid": "ba0726778e194159d916c70f5f4cedc9", "text": "We present a system for multimedia event detection. The developed system characterizes complex multimedia events based on a large array of multimodal features, and classifies unseen videos by effectively fusing diverse responses. We present three major technical innovations. First, we explore novel visual and audio features across multiple semantic granularities, including building, often in an unsupervised manner, mid-level and high-level features upon low-level features to enable semantic understanding. Second, we show a novel Latent SVM model which learns and localizes discriminative high-level concepts in cluttered video sequences. 
In addition to improving detection accuracy beyond existing approaches, it enables a unique summary for every retrieval by its use of high-level concepts and temporal evidence localization. The resulting summary provides some transparency into why the system classified the video as it did. Finally, we present novel fusion learning algorithms and our methodology to improve fusion learning under limited training data condition. Thorough evaluation on a large TRECVID MED 2011 dataset showcases the benefits of the presented system.", "title": "" } ]
scidocsrr
fd09feceaa6b93f1012b29372557d155
A Quality Framework for Agile Requirements: A Practitioner's Perspective
[ { "docid": "b2af36852b94260f692241eef651cc88", "text": "This paper describes empirical research into agile requirements engineering (RE) practices. Based on an analysis of data collected in 16 US software development organizations, we identify six agile practices. We also identify seven challenges that are created by the use of these practices. We further analyse how this collection of practices helps mitigate some, while exacerbating other risks in RE. We provide a framework for evaluating the impact and appropriateness of agile RE practices by relating them to RE risks. Two risks that are intractable by agile RE practices emerge from the analysis. First, problems with customer inability and a lack of concurrence among customers significantly impact agile development. Second, risks associated with neglecting non-functional requirements such as security and scalability are a serious concern. Developers should carefully evaluate the risk factors in their project environment to understand whether the benefits of agile RE practices outweigh the costs imposed by the challenges.", "title": "" } ]
[ { "docid": "f9a9ed5f618e11ed2d10083954ac5e9f", "text": "This study utilized a mixed methods approach to examine the feasibility and acceptability of group compassion focused therapy for adults with intellectual disabilities (CFT-ID). Six participants with mild ID participated in six sessions of group CFT, specifically adapted for adults with ID. Session-by-session feasibility and acceptability measures suggested that participants understood the group content and process and experienced group sessions and experiential practices as helpful and enjoyable. Thematic analysis of focus groups identified three themes relating to (1) direct experiences of the group, (2) initial difficulties in being self-compassionate and (3) positive emotional changes. Pre- and post-group outcome measures indicated significant reductions in both self-criticism and unfavourable social comparisons. Results suggest that CFT can be adapted for individuals with ID and provide preliminary evidence that people with ID and psychological difficulties may experience a number of benefits from this group intervention.", "title": "" }, { "docid": "62c000009e8b50ece91049f8276c7323", "text": "Mike Thelwall, Kevan Buckley Statistical Cybermetrics Research Group, School of Technology, University of Wolverhampton, Wulfruna Street, Wolverhampton WV1 1SB, UK. E-mail: [email protected], [email protected] Tel: +44 1902 321470 Fax: +44 1902 321478 General sentiment analysis for the social web has become increasingly useful to shed light on the role of emotion in online communication and offline events in both academic research and data journalism. Nevertheless, existing general purpose social web sentiment analysis algorithms may not be optimal for texts focussed around specific topics. This article introduces two new methods, mood setting and lexicon extension, to improve the accuracy of topic-specific lexical sentiment strength detection for the social web. Mood setting allows the topic mood to determine the default polarity for ostensibly neutral expressive text. Topic-specific lexicon extension involves adding topic-specific words to the default general sentiment lexicon. Experiments with eight data sets show that both methods can improve sentiment analysis performance in corpora and are recommended when the topic focus is tightest.", "title": "" }, { "docid": "3da0597ce369afdec1716b1fedbce7d1", "text": "We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. 
Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness.", "title": "" }, { "docid": "24880289ca2b6c31810d28c8363473b3", "text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.", "title": "" }, { "docid": "05edb3594eab114a5015557d4260e3db", "text": "In CMOS circuits, the reduction of the threshold voltage due to voltage scaling leads to increase in subthreshold leakage current and hence static power dissipation. We propose a novel technique called LECTOR for designing CMOS gates which significantly cuts down the leakage current without increasing the dynamic power dissipation. In the proposed technique, we introduce two leakage control transistors (a p-type and a n-type) within the logic gate for which the gate terminal of each leakage control transistor (LCT) is controlled by the source of the other. In this arrangement, one of the LCTs is always \"near its cutoff voltage\" for any input combination. This increases the resistance of the path from V/sub dd/ to ground, leading to significant decrease in leakage currents. The gate-level netlist of the given circuit is first converted into a static CMOS complex gate implementation and then LCTs are introduced to obtain a leakage-controlled circuit. The significant feature of LECTOR is that it works effectively in both active and idle states of the circuit, resulting in better leakage reduction compared to other techniques. Further, the proposed technique overcomes the limitations posed by other existing methods for leakage reduction. 
Experimental results indicate an average leakage reduction of 79.4% for MCNC'91 benchmark circuits.", "title": "" }, { "docid": "9c9e3261c293aedea006becd2177a6d5", "text": "This paper proposes a motion-focusing method to extract key frames and generate summarization synchronously for surveillance videos. Within each pre-segmented video shot, the proposed method focuses on one constant-speed motion and aligns the video frames by fixing this focused motion into a static situation. According to the relative motion theory, the other objects in the video are moving relatively to the selected kind of motion. This method finally generates a summary image containing all moving objects and embedded with spatial and motional information, together with key frames to provide details corresponding to the regions of interest in the summary image. We apply this method to the lane surveillance system and the results provide us a new way to understand the video efficiently.", "title": "" }, { "docid": "3b1addbef50c5020b88ae2e55c197085", "text": "In this paper, we present a novel wide-band envelope detector comprising a fully-differential operational transconductance amplifier (OTA), a full-wave rectifier and a peak detector. To enhance the frequency performance of the envelop detector, we utilize a gyrator-C active inductor load in the OTA for wider bandwidth. Additionally, it is shown that the high-speed rectifier of the envelope detector requires high bias current instead of the sub-threshold bias condition. The experimental results show that the proposed envelope detector can work from 100-Hz to 1.6-GHz with an input dynamic range of 50-dB at 100-Hz and 40-dB at 1.6-GHz, respectively. The envelope detector was fabricated on the TSMC 0.18-um CMOS process with an active area of 0.652 mm2.", "title": "" }, { "docid": "20d1cb8d2f416c1dc07e5a34c2ec43ba", "text": "Significant research and development of algorithms in intelligent transportation has grabbed more attention in recent years. An automated, fast, accurate and robust vehicle plate recognition system has become need for traffic control and law enforcement of traffic regulations; and the solution is ANPR. This paper is dedicated on an improved technique of OCR based license plate recognition using neural network trained dataset of object features. A blended algorithm for recognition of license plate is proposed and is compared with existing methods for improve accuracy. The whole system can be categorized under three major modules, namely License Plate Localization, Plate Character Segmentation, and Plate Character Recognition. The system is simulated on 300 national and international motor vehicle LP images and results obtained justifies the main requirement.", "title": "" }, { "docid": "24d2ad857f66f9bd32405bf1de7cadcf", "text": "Evidence linked exposure to internet appearance-related sites to weight dissatisfaction, drive for thinness, increased internalisation of thin ideals, and body surveillance with Facebook users having significantly higher scores on body image concern measures (Tiggemann & Miller, 2010, Tiggemann & Slater, 2013). This study explored the impacts of social media on the body image of young adults aged 18-25 years. A total of 300 students from a Victorian university completed a survey including questions about the use of social media and 2 measures of body image: The Objectified Body Consciousness and both female and male version of the Sociocultural Attitudes towards Appearance Questionnaire 3. 
Results showed participants mostly used Facebook to keep in touch with friends and family. While using social media, they felt pressure to lose weight, look more attractive or muscular, and to change their appearance. Correlations were found between Instagram and concerns with body image and body surveillance, between Pinterest and body shame and appearance control beliefs and between Facebook and Pinterest and perceived pressure. Findings contribute to the growing body of knowledge about the influence of social media on body image and new information for the development of social media literacy programs addressing negative body image.", "title": "" }, { "docid": "c9275012c275a0288849e6eb8e7156c4", "text": "Evaluation of patients with shoulder disorders often presents challenges. Among the most troublesome are revision surgery in patients with massive rotator cuff tear, atraumatic shoulder instability, revision arthroscopic stabilization surgery, adhesive capsulitis, and bicipital and subscapularis injuries. Determining functional status is critical before considering surgical options in the patient with massive rotator cuff tear. When nonsurgical treatment of atraumatic shoulder stability is not effective, inferior capsular shift is the treatment of choice. Arthroscopic revision of failed arthroscopic shoulder stabilization procedures may be undertaken when bone and tissue quality are good. Arthroscopic release is indicated when idiopathic adhesive capsulitis does not respond to nonsurgical treatment; however, results of both nonsurgical and surgical treatment of posttraumatic and postoperative adhesive capsulitis are often disappointing. Patients not motivated to perform the necessary postoperative therapy following subscapularis repair are best treated with arthroscopic débridement and biceps tenotomy.", "title": "" }, { "docid": "0e6e0c304c78205585f86a070e8843c1", "text": "We explore the problem of reconstructing an image from a bag of square, non-overlapping image patches, the jigsaw puzzle problem. Completing jigsaw puzzles is challenging and requires expertise even for humans, and is known to be NP-complete. We depart from previous methods that treat the problem as a constraint satisfaction problem and develop a graphical model to solve it. Each patch location is a node and each patch is a label at nodes in the graph. A graphical model requires a pairwise compatibility term, which measures an affinity between two neighboring patches, and a local evidence term, which we lack. This paper discusses ways to obtain these terms for the jigsaw puzzle problem. We evaluate several patch compatibility metrics, including the natural image statistics measure, and experimentally show that the dissimilarity-based compatibility – measuring the sum-of-squared color difference along the abutting boundary – gives the best results. We compare two forms of local evidence for the graphical model: a sparse-and-accurate evidence and a dense-and-noisy evidence. We show that the sparse-and-accurate evidence, fixing as few as 4 – 6 patches at their correct locations, is enough to reconstruct images consisting of over 400 patches. To the best of our knowledge, this is the largest puzzle solved in the literature. 
We also show that one can coarsely estimate the low resolution image from a bag of patches, suggesting that a bag of image patches encodes some geometric information about the original image.", "title": "" }, { "docid": "fd16f89880144a8f23c74527980de4d0", "text": "In this paper, a multilevel inverter system for an open-end winding induction motor drive is described. Multilevel inversion is achieved by feeding an open-end winding induction motor with two two-level inverters in cascade (equivalent to a three-level inverter) from one end and a single two-level inverter from the other end of the motor. The combined inverter system with open-end winding induction motor produces voltage space-vector locations identical to a six-level inverter. A total of 512 space-vector combinations are available in the proposed scheme, distributed over 91 space-vector locations. The proposed inverter drive scheme is capable of producing a multilevel pulsewidth-modulation (PWM) waveform for the phase voltage ranging from a two-level waveform to a six-level waveform depending on the modulation range. A space-vector PWM scheme for the proposed drive is implemented using a 1.5-kW induction motor with open-end winding structure.", "title": "" }, { "docid": "4cce019f5f4c4cfa934e599ddf9137cb", "text": "Many distributed graph processing frameworks have emerged for helping doing large scale data analysis for many applications including social network and data mining. The existing frameworks usually focus on the system scalability without consideration of local computing performance. We have observed two locality issues which greatly influence the local computing performance in existing systems. One is the locality of the data associated with each vertex/edge. The data are often considered as a logical undividable unit and put into continuous memory. However, it is quite common that for some computing steps, only some portions of data (called as some properties) are needed. The current data layout incurs large amount of interleaved memory access. The other issue is their execution engine applies computation at a granularity of vertex. Making optimization for the locality of source vertex of each edge will often hurt the locality of target vertex or vice versa. We have built a distributed graph processing framework called Photon to address the above issues. Photon employs Property View to store the same type of property for all vertices and edges together. This will improve the locality while doing computation with a portion of properties. Photon also employs an edge-centric execution engine with Hilbert-Order that improve the locality during computation. We have evaluated Photon with five graph applications using five real-world graphs and compared it with four existing systems. The results show that Property View and edge-centric execution design improve graph processing by 2.4X.", "title": "" }, { "docid": "eebf03df49eb4a99f61d371e059ef43e", "text": "In theoretical cognitive science, there is a tension between highly structured models whose parameters have a direct psychological interpretation and highly complex, general-purpose models whose parameters and representations are difficult to interpret. The former typically provide more insight into cognition but the latter often perform better. 
This tension has recently surfaced in the realm of educational data mining, where a deep learning approach to estimating student proficiency, termed deep knowledge tracing or DKT [17], has demonstrated a stunning performance advantage over the mainstay of the field, Bayesian knowledge tracing or BKT [3].", "title": "" }, { "docid": "91e2dadb338fbe97b009efe9e8f60446", "text": "An efficient smoke detection algorithm on color video sequences obtained from a stationary camera is proposed. Our algorithm considers dynamic and static features of smoke and is composed of basic steps: preprocessing; slowly moving areas and pixels segmentation in a current input frame based on adaptive background subtraction; merge slowly moving areas with pixels into blobs; classification of the blobs obtained before. We use adaptive background subtraction at a stage of moving detection. Moving blobs classification is based on optical flow calculation, Weber contrast analysis and takes into account primary direction of smoke propagation. Real video surveillance sequences were used for smoke detection with utilization our algorithm. A set of experimental results is presented in the paper.", "title": "" }, { "docid": "78189afece831eefc22f506def3a0d0a", "text": "The increasing number and range of automation functions along with the decrease of qualified personal makes an upgraded engineering process necessary. This article gives a general overview of one approach, called the Automation of Automation, i.e. the automated execution of human tasks related to the engineering process of automation systems. Starting with a definition and a model describing the typical engineering process, some solutions for the needed framework are presented. Occurring problems within parts of this process model are discussed and possible solutions are presented.", "title": "" }, { "docid": "616280024e85264e542df70d1e7766cf", "text": "Cable-driven parallel manipulators (CDPMs) are a special class of parallel manipulators that are driven by cables instead of rigid links. So CDPMs have a light-weight structure with large reachable workspace. The aim of this paper is to provide the kinematic analysis and the design optimization of a cable-driven 2-DOF module, comprised of a passive universal joint, for a reconfigurable system. This universal joint module can be part of a modular reconfigurable system where various cable-driven modules can be attached serially into many different configurations. Based on a symmetric design approach, six topological configurations are enumerated with three or four cables arrangements. With a variable constrained axis, the structure matrix of the universal joint has to be formulated with respect to the intermediate frame. The orientation workspace of the universal joint is a submanifold of SO(3). Therefore, the workspace representation is a plane in R2. With the integral measure for the submanifold expressed as a cosine function of one of the angles of rotation, an equivolumetric method is employed to numerically calculate the workspace volume. The orientation workspace volume of the universal joint module is found to be 2π. Optimization results show that the 4-1 cable arrangement produces the largest workspace with better Global Conditioning Index.", "title": "" }, { "docid": "c1981c3b0ccd26d4c8f02c2aa5e71c7a", "text": "Functional genomics studies have led to the discovery of a large amount of non-coding RNAs from the human genome; among them are long non-coding RNAs (lncRNAs). 
Emerging evidence indicates that lncRNAs could have a critical role in the regulation of cellular processes such as cell growth and apoptosis as well as cancer progression and metastasis. As master gene regulators, lncRNAs are capable of forming lncRNA–protein (ribonucleoprotein) complexes to regulate a large number of genes. For example, lincRNA-RoR suppresses p53 in response to DNA damage through interaction with heterogeneous nuclear ribonucleoprotein I (hnRNP I). The present study demonstrates that hnRNP I can also form a functional ribonucleoprotein complex with lncRNA urothelial carcinoma-associated 1 (UCA1) and increase the UCA1 stability. Of interest, the phosphorylated form of hnRNP I, predominantly in the cytoplasm, is responsible for the interaction with UCA1. Moreover, although hnRNP I enhances the translation of p27 (Kip1) through interaction with the 5′-untranslated region (5′-UTR) of p27 mRNAs, the interaction of UCA1 with hnRNP I suppresses the p27 protein level by competitive inhibition. In support of this finding, UCA1 has an oncogenic role in breast cancer both in vitro and in vivo. Finally, we show a negative correlation between p27 and UCA in the breast tumor cancer tissue microarray. Together, our results suggest an important role of UCA1 in breast cancer.", "title": "" }, { "docid": "792694fbea0e2e49a454ffd77620da47", "text": "Technology is increasingly shaping our social structures and is becoming a driving force in altering human biology. Besides, human activities already proved to have a significant impact on the Earth system which in turn generates complex feedback loops between social and ecological systems. Furthermore, since our species evolved relatively fast from small groups of hunter-gatherers to large and technology-intensive urban agglomerations, it is not a surprise that the major institutions of human society are no longer fit to cope with the present complexity. In this note we draw foundational parallelisms between neurophysiological systems and ICT-enabled social systems, discussing how frameworks rooted in biology and physics could provide heuristic value in the design of evolutionary systems relevant to politics and economics. In this regard we highlight how the governance of emerging technology (i.e. nanotechnology, biotechnology, information technology, and cognitive science), and the one of climate change both presently confront us with a number of connected challenges. In particular: historically high level of inequality; the co-existence of growing multipolar cultural systems in an unprecedentedly connected world; the unlikely reaching of the institutional agreements required to deviate abnormal trajectories of development. We argue that wise general solutions to such interrelated issues should embed the deep understanding of how to elicit mutual incentives in the socio-economic subsystems of Earth system in order to jointly concur to a global utility function (e.g. avoiding the reach of planetary boundaries and widespread social unrest). We leave some open questions on how techno-social systems can effectively learn and adapt with respect to our understanding of geopolitical", "title": "" }, { "docid": "7f56cb986ec4a6022883595ff0d8faa5", "text": "Fully convolutional deep neural networks have been asserted to be fast and precise frameworks with great potential in image segmentation. 
One of the major challenges in training such networks raises when the data are unbalanced, which is common in many medical imaging applications, such as lesion segmentation, where lesion class voxels are often much lower in numbers than non-lesion voxels. A trained network with unbalanced data may make predictions with high precision and low recall, being severely biased toward the non-lesion class which is particularly undesired in most medical applications where false negatives are actually more important than false positives. Various methods have been proposed to address this problem, including two-step training, sample re-weighting, balanced sampling, and more recently, similarity loss functions and focal loss. In this paper, we fully trained convolutional deep neural networks using an asymmetric similarity loss function to mitigate the issue of data imbalance and achieve much better tradeoff between precision and recall. To this end, we developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on Tversky index (using $F_\\beta $ scores). We used large overlapping image patches as inputs for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy using B-spline weighted soft voting to account for the uncertainty of prediction in patch borders. We applied this method to multiple sclerosis (MS) lesion segmentation based on two different datasets of MSSEG 2016 and ISBI longitudinal MS lesion segmentation challenge, where we achieved average Dice similarity coefficients of 69.9% and 65.74%, respectively, achieving top performance in both the challenges. We compared the performance of our network trained with $F_\\beta $ loss, focal loss, and generalized Dice loss functions. Through September 2018, our network trained with focal loss ranked first according to the ISBI challenge overall score and resulted in the lowest reported lesion false positive rate among all submitted methods. Our network trained with the asymmetric similarity loss led to the lowest surface distance and the best lesion true positive rate that is arguably the most important performance metric in a clinical decision support system for lesion detection. The asymmetric similarity loss function based on $F_\\beta $ scores allows training networks that make a better balance between precision and recall in highly unbalanced image segmentation. We achieved superior performance in MS lesion segmentation using a patch-wise 3D FC-DenseNet with a patch prediction fusion strategy, trained with asymmetric similarity loss functions.", "title": "" } ]
scidocsrr
d5a7fc54969981109e428edd33917bae
Vehicular cloud computing: A survey
[ { "docid": "2171c57b911161d805ffc08fbe02f92a", "text": "The past decade has witnessed a growing interest in vehicular networking and its vast array of potential applications. Increased wireless accessibility of the Internet from vehicles has triggered the emergence of vehicular safety applications, locationspecific applications, and multimedia applications. Recently, Professor Olariu and his coworkers have promoted the vision of Vehicular Clouds (VCs), a non-trivial extension, along several dimensions, of conventional Cloud Computing. In a VC, the under-utilized vehicular resources including computing power, storage and Internet connectivity can be shared between drivers or rented out over the Internet to various customers, very much as conventional cloud resources are. The goal of this chapter is to introduce and review the challenges and opportunities offered by what promises to be the Next Paradigm Shift:From Vehicular Networks to Vehicular Clouds. Specifically, the chapter introduces VCs and discusses some of their distinguishing characteristics and a number of fundamental research challenges. To illustrate the huge array of possible applications of the powerful VC concept, a number of possible application scenarios are presented and discussed. As the adoption and success of the vehicular cloud concept is inextricably related to security and privacy issues, a number of security and privacy issues specific to vehicular clouds are discussed as well. Additionally, data aggregation and empirical results are presented. Mobile Ad Hoc Networking: Cutting Edge Directions, Second Edition. Edited by Stefano Basagni, Marco Conti, Silvia Giordano, and Ivan Stojmenovic. © 2013 by The Institute of Electrical and Electronics Engineers, Inc. Published 2013 by John Wiley & Sons, Inc.", "title": "" } ]
[ { "docid": "d8079ff945eb0bd85da940f168409d00", "text": "Cuckoo search is a modern bio-inspired metaheuristic that has successfully been used to solve different real world optimization problems. In particular, it has exhibited rapid convergence reaching considerable good results. In this paper, we employ this technique to solve the set covering problem, which is a well-known optimization benchmark. We illustrate interesting experimental results where the proposed algorithm is able to obtain several global optimums for different set covering instances from the OR-Library.", "title": "" }, { "docid": "2065faf3e72a8853dd6cbba1daf9c64a", "text": "One of a good overview all the output neurons. The fixed point attractors have resulted in order to the attractor furthermore. As well as memory classification and all the basic ideas. Introducing the form of strange attractors or licence agreement may be fixed point! The above with input produces and the techniques brought from one of cognitive processes. The study of cpgs is the, global dynamics as nearest neighbor classifiers. Attractor networks encode knowledge of the, network will be ergodic so. These synapses will be applicable exploring one interesting and neural networks other technology professionals.", "title": "" }, { "docid": "05bc787d000ecf26c8185b084f8d2498", "text": "Recommendation system is a type of information filtering systems that recommend various objects from a vast variety and quantity of items which are of the user interest. This results in guiding an individual in personalized way to interesting or useful objects in a large space of possible options. Such systems also help many businesses to achieve more profits to sustain in their filed against their rivals. But looking at the amount of information which a business holds it becomes difficult to identify the items of user interest. Therefore personalization or user profiling is one of the challenging tasks that give access to user relevant information which can be used in solving the difficult task of classification and ranking items according to an individual’s interest. Profiling can be done in various ways such as supervised or unsupervised, individual or group profiling, distributive or and non-distributive profiling. Our focus in this paper will be on the dataset which we will use, we identify some interesting facts by using Weka Tool that can be used for recommending the items from dataset .Our aim is to present a novel technique to achieve user profiling in recommendation system. KeywordsMachine Learning; Information Retrieval; User Profiling", "title": "" }, { "docid": "a5d16384d928da7bcce7eeac45f59e2e", "text": "Innovative rechargeable batteries that can effectively store renewable energy, such as solar and wind power, urgently need to be developed to reduce greenhouse gas emissions. All-solid-state batteries with inorganic solid electrolytes and electrodes are promising power sources for a wide range of applications because of their safety, long-cycle lives and versatile geometries. Rechargeable sodium batteries are more suitable than lithium-ion batteries, because they use abundant and ubiquitous sodium sources. Solid electrolytes are critical for realizing all-solid-state sodium batteries. Here we show that stabilization of a high-temperature phase by crystallization from the glassy state dramatically enhances the Na(+) ion conductivity. 
An ambient temperature conductivity of over 10(-4) S cm(-1) was obtained in a glass-ceramic electrolyte, in which a cubic Na(3)PS(4) crystal with superionic conductivity was first realized. All-solid-state sodium batteries, with a powder-compressed Na(3)PS(4) electrolyte, functioned as a rechargeable battery at room temperature.", "title": "" }, { "docid": "1d72e3bbc8106a8f360c05bd0a638f0d", "text": "Advancements in computer vision, natural language processing and deep learning techniques have resulted in the creation of intelligent systems that have achieved impressive results in the visually grounded tasks such as image captioning and visual question answering (VQA). VQA is a task that can be used to evaluate a system's capacity to understand an image. It requires an intelligent agent to answer a natural language question about an image. The agent must ground the question into the image and return a natural language answer. One of the latest techniques proposed to tackle this task is the attention mechanism. It allows the agent to focus on specific parts of the input in order to answer the question. In this paper we propose a novel long short-term memory (LSTM) architecture that uses dual attention to focus on specific question words and parts of the input image in order to generate the answer. We evaluate our solution on the recently proposed Visual 7W dataset and show that it performs better than state of the art. Additionally, we propose two new question types for this dataset in order to improve model evaluation. We also make a qualitative analysis of the results and show the strength and weakness of our agent.", "title": "" }, { "docid": "7f0a721287ed05c67c5ecf1206bab4e6", "text": "This study underlines the value of the brand personality and its influence on consumer’s decision making, through relational variables. An empirical study, in which 380 participants have received an SMS ad, confirms that brand personality does actually influence brand trust, brand attachment and brand commitment. The levels of brand sensitivity and involvement have also an impact on the brand personality and on its related variables.", "title": "" }, { "docid": "192e1bd5baa067b563edb739c05decfa", "text": "This paper presents a simple and accurate design methodology for LLC resonant converters, based on a semi- empirical approach to model steady-state operation in the \"be- low-resonance\" region. This model is framed in a design strategy that aims to design a converter capable of operating with soft-switching in the specified input voltage range with a load ranging from zero up to the maximum specified level.", "title": "" }, { "docid": "3b5ef354f7ad216ca0bfcf893352bfce", "text": "We offer the concept of multicommunicating to describe overlapping conversations, an increasingly common occurrence in the technology-enriched workplace. We define multicommunicating, distinguish it from other behaviors, and develop propositions for future research. Our work extends the literature on technology-stimulated restructuring and reveals one of the opportunities provided by lean media—specifically, an opportunity to multicommunicate. 
We conclude that the concept of multicommunicating has value both to the scholar and to the practicing manager.", "title": "" }, { "docid": "3d335bfc7236ea3596083d8cae4f29e3", "text": "OBJECTIVE\nTo summarise the applications and appropriate use of Dietary Reference Intakes (DRIs) as guidance for nutrition and health research professionals in the dietary assessment of groups and individuals.\n\n\nDESIGN\nKey points from the Institute of Medicine report, Dietary Reference Intakes: Applications in Dietary Assessment, are summarised in this paper. The different approaches for using DRIs to evaluate the intakes of groups vs. the intakes of individuals are highlighted.\n\n\nRESULTS\nEach of the new DRIs is defined and its role in the dietary assessment of groups and individuals is described. Two methods of group assessment and a new method for quantitative assessment of individuals are described. Illustrations are provided on appropriate use of the Estimated Average Requirement (EAR), the Adequate Intake (AI) and the Tolerable Upper Intake Level (UL) in dietary assessment.\n\n\nCONCLUSIONS\nDietary assessment of groups or individuals must be based on estimates of usual (long-term) intake. The EAR is the appropriate DRI to use in assessing groups and individuals. The AI is of limited value in assessing nutrient adequacy, and cannot be used to assess the prevalence of inadequacy. The UL is the appropriate DRI to use in assessing the proportion of a group at risk of adverse health effects. It is inappropriate to use the Recommended Dietary Allowance (RDA) or a group mean intake to assess the nutrient adequacy of groups.", "title": "" }, { "docid": "433e7a8c4d4a16f562f9ae112102526e", "text": "Although both extrinsic and intrinsic factors have been identified that orchestrate the differentiation and maturation of oligodendrocytes, less is known about the intracellular signaling pathways that control the overall commitment to differentiate. Here, we provide evidence that activation of the mammalian target of rapamycin (mTOR) is essential for oligodendrocyte differentiation. Specifically, mTOR regulates oligodendrocyte differentiation at the late progenitor to immature oligodendrocyte transition as assessed by the expression of stage specific antigens and myelin proteins including MBP and PLP. Furthermore, phosphorylation of mTOR on Ser 2448 correlates with myelination in the subcortical white matter of the developing brain. We demonstrate that mTOR exerts its effects on oligodendrocyte differentiation through two distinct signaling complexes, mTORC1 and mTORC2, defined by the presence of the adaptor proteins raptor and rictor, respectively. Disrupting mTOR complex formation via siRNA mediated knockdown of raptor or rictor significantly reduced myelin protein expression in vitro. However, mTORC2 alone controlled myelin gene expression at the mRNA level, whereas mTORC1 influenced MBP expression via an alternative mechanism. In addition, investigation of mTORC1 and mTORC2 targets revealed differential phosphorylation during oligodendrocyte differentiation. In OPC-DRG cocultures, inhibiting mTOR potently abrogated oligodendrocyte differentiation and reduced numbers of myelin segments. These data support the hypothesis that mTOR regulates commitment to oligodendrocyte differentiation before myelination.", "title": "" }, { "docid": "c51e1b845d631e6d1b9328510ef41ea0", "text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks. 
In this work, we perform extensive modeling and experimentation on two 20-node TelosB motes testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error about 20-25% (and 80 percentile error 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. Somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in literature) and 'graded' (more realistic). We show via solving the one shot scheduling problem, that the graded version can improve `expected throughput' over the thresholded version by scheduling imperfect links.", "title": "" }, { "docid": "6831c633bf7359b8d22296b52a9a60b8", "text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.", "title": "" }, { "docid": "d8583f5409aa230236ba1748bd9ef7b3", "text": "Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exasperated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. 
The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and highdimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.", "title": "" }, { "docid": "771834bc4bfe8231fe0158ec43948bae", "text": "Semantic image segmentation has recently witnessed considerable progress by training deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aides and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method in solving the linear objective energy function in a continuous domain to delineate visual objects and solve these problems. We present a novel embedded single stream optimization method based on the MR model to avoid approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as dual multi-scale manifold ranking (DMSMR) network, that combines the dilated, multi-scale strategies with the single stream MR optimization method in the deep learning architecture to further improve the performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aides in an end-to-end manner.", "title": "" }, { "docid": "70fafdedd05a40db5af1eabdf07d431c", "text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.", "title": "" }, { "docid": "59308c5361d309568a94217c79cf0908", "text": "Want to get experience? Want to get any ideas to create new things in your life? Read cryptography an introduction to computer security now! By reading this book as soon as possible, you can renew the situation to get the inspirations. Yeah, this way will lead you to always think more and more. 
In this case, this book will be always right for you. When you can observe more about the book, you will know why you need this.", "title": "" }, { "docid": "8b3557219674c8441e63e9b0ab459c29", "text": "his paper is focused on comparison of various decision tree classification algorithms using WEKA tool. Data mining tools such as classification, clustering, association and neural network solve large amount of problem. These are all open source tools, we directly communicate with each tool or by java code. In this paper we discuss on classification technique of data mining. In classification, various techniques are present such as bayes, functions, lazy, rules and tree etc. . Decision tree is one of the most frequently used classification algorithm. Decision tree classification with Waikato Environment for Knowledge Analysis (WEKA) is the simplest way to mining information from huge database. This work shows the process of WEKA analysis of file converts, step by step process of weka execution, selection of attributes to be mined and comparison with Knowledge Extraction of Evolutionary Learning . I took database [1] and execute in weka software. The conclusion of the paper shows the comparison among all type of decision tree algorithms by weka tool.", "title": "" }, { "docid": "686585ee0ab55dfeaa98efef5b496035", "text": "This paper presents an embedded adaptive robust controller for trajectory tracking and stabilization of an omnidirectional mobile platform with parameter variations and uncertainties caused by friction and slip. Based on a dynamic model of the platform, the adaptive controller to achieve point stabilization, trajectory tracking, and path following is synthesized via the adaptive backstepping approach. This robust adaptive controller is then implemented into a high-performance field-programmable gate array chip using hardware/software codesign technique and system-on-a-programmable-chip design concept with a reusable user intellectual property core library. Furthermore, a soft-core processor and a real-time operating system are embedded into the same chip for realizing the control law to steer the mobile platform. Simulation results are conducted to show the effectiveness and merit of the proposed control method in comparison with a conventional proportional-integral feedback controller. The performance and applicability of the proposed embedded adaptive controller are exemplified by conducting several experiments on an autonomous omnidirectional mobile robot.", "title": "" }, { "docid": "66a72238b7e9470eef9584c7018bb20e", "text": "Enamel thickness of the maxillary permanent central incisors and canines in seven Finnish 47,XXX females, their first-degree male and female relatives, and control males and females from the general population were determined from radiographs. The results showed that enamel in the teeth of 47,XXX females was clearly thicker than that of normal controls. On the other hand, the thickness of “dentin” (distance between mesial and distal dentinoenamel junctions) in 47,XXX females' teeth was about the same as that in normal control females, but clearly reduced as compared with that in control males. It is therefore obvious that in the triple-X chromosome complement the extra X chromosome is active in amelogenesis, whereas it has practically no influence on the growth of dentin. 
The calculations based on present and previous results in 45,X females and 47,XYY males indicate that the X chromosome increases metric enamel growth somewhat more effectively than the Y chromosome. Possibly, halfway states exist between active and repressed enamel genes on the X chromosome. The Y chromosome seems to promote dental growth in a holistic fashion.", "title": "" } ]
scidocsrr
5fc2ffd04afe6ed7ec6d7e687a518403
New multi-stage similarity measure for calculation of pairwise patent similarity in a patent citation network
[ { "docid": "09c9a0990946fd884df70d4eeab46ecc", "text": "Studies of technological change constitute a field of growing importance and sophistication. In this paper we contribute to the discussion with a methodological reflection and application of multi-stage patent citation analysis for the mea surement of inventive progress. Investigating specific patterns of patent citation data, we conclude that single-stage citation analysis cannot reveal technological paths or linea ges. Therefore, one should also make use of indirect citations and bibliographical coupling. To measure aspects of cumulative inventive progress, we develop a “shared specialization measu r ” of patent families. We relate this measure to an expert rating of the technological va lue dded in the field of variable valve actuation for internal combustion engines. In sum, the study presents promising evidence for multi-stage patent citation analysis in order to ex plain aspects of technological change. JEL classification: O31", "title": "" } ]
[ { "docid": "8da8ecae2ae9f49135dd3480992069f0", "text": "In this paper, we investigate the use of decentralized blockchain mechanisms for delivering transparent, secure, reliable, and timely energy flexibility, under the form of adaptation of energy demand profiles of Distributed Energy Prosumers, to all the stakeholders involved in the flexibility markets (Distribution System Operators primarily, retailers, aggregators, etc.). In our approach, a blockchain based distributed ledger stores in a tamper proof manner the energy prosumption information collected from Internet of Things smart metering devices, while self-enforcing smart contracts programmatically define the expected energy flexibility at the level of each prosumer, the associated rewards or penalties, and the rules for balancing the energy demand with the energy production at grid level. Consensus based validation will be used for demand response programs validation and to activate the appropriate financial settlement for the flexibility providers. The approach was validated using a prototype implemented in an Ethereum platform using energy consumption and production traces of several buildings from literature data sets. The results show that our blockchain based distributed demand side management can be used for matching energy demand and production at smart grid level, the demand response signal being followed with high accuracy, while the amount of energy flexibility needed for convergence is reduced.", "title": "" }, { "docid": "d8c6ad404d8d8c69f9f6bd28911a0937", "text": "A hybrid hydrologic estimation model is presented with the aim of performing accurate river flow forecasts without the need of using prior knowledge from the experts in the field. The problem of predicting stream flows is a non-trivial task because the various physical mechanisms governing the river flow dynamics act on a wide range of temporal and spatial scales and almost all the mechanisms involved in the river flow process present some degree of nonlinearity. The proposed system incorporates both statistical and artificial intelligence techniques used at different stages of the reasoning cycle in order to calculate the mean daily water volume forecast of the Salvajina reservoir inflow located at the Department of Cauca, Colombia. The accuracy of the proposed model is compared against other well-known artificial intelligence techniques and several statistical tools previously applied in time series forecasting. The results obtained from the experiments carried out using real data from years 1950 to 2006 demonstrate the superiority of the hybrid system. © 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "71a9394d995cefb8027bed3c56ec176c", "text": "A broadband microstrip-fed printed antenna is proposed for phased antenna array systems. The antenna consists of two parallel-modified dipoles of different lengths. The regular dipole shape is modified to a quasi-rhombus shape by adding two triangular patches. Using two dipoles helps maintain stable radiation patterns close to their resonance frequencies. A modified array configuration is proposed to further enhance the antenna radiation characteristics and usable bandwidth. Scanning capabilities are studied for a four-element array. 
The proposed antenna provides endfire radiation patterns with high gain, high front-to-back (F-to-B) ratio, low cross-polarization level, wide beamwidth, and wide scanning angles in a wide bandwidth of 103%", "title": "" }, { "docid": "3a322129019eed67686018404366fe0b", "text": "Scientists and casual users need better ways to query RDF databases or Linked Open Data. Using the SPARQL query language requires not only mastering its syntax and semantics but also understanding the RDF data model, the ontology used, and URIs for entities of interest. Natural language query systems are a powerful approach, but current techniques are brittle in addressing the ambiguity and complexity of natural language and require expensive labor to supply the extensive domain knowledge they need. We introduce a compromise in which users give a graphical \"skeleton\" for a query and annotates it with freely chosen words, phrases and entity names. We describe a framework for interpreting these \"schema-agnostic queries\" over open domain RDF data that automatically translates them to SPARQL queries. The framework uses semantic textual similarity to find mapping candidates and uses statistical approaches to learn domain knowledge for disambiguation, thus avoiding expensive human efforts required by natural language interface systems. We demonstrate the feasibility of the approach with an implementation that performs well in an evaluation on DBpedia data.", "title": "" }, { "docid": "404a6a58adbd5277e10486d924af8795", "text": "Data centers (DCs), owing to the exponential growth of Internet services, have emerged as an irreplaceable and crucial infrastructure to power this ever-growing trend. A DC typically houses a large number of computing and storage nodes, interconnected by a specially designed network, namely, DC network (DCN). The DCN serves as a communication backbone and plays a pivotal role in optimizing DC operations. However, compared to the traditional network, the unique requirements in the DCN, for example, large scale, vast application diversity, high power density, and high reliability, pose significant challenges to its infrastructure and operations. We have observed from the premium publication venues (e.g., journals and system conferences) that increasing research efforts are being devoted to optimize the design and operations of the DCN. In this paper, we aim to present a systematic taxonomy and survey of recent research efforts on the DCN. Specifically, we propose to classify these research efforts into two areas: 1) DCN infrastructure and 2) DCN operations. For the former aspect, we review and compare the list of transmission technologies and network topologies used or proposed in the DCN infrastructure. For the latter aspect, we summarize the existing traffic control techniques in the DCN operations, and survey optimization methods to achieve diverse operational objectives, including high network utilization, fair bandwidth sharing, low service latency, low energy consumption, high resiliency, and etc., for efficient DC operations. We finally conclude this survey by envisioning a few open research opportunities in DCN infrastructure and operations.", "title": "" }, { "docid": "0e144e826ab88464c9e8166b84b483b8", "text": "Video-on-demand streaming services have gained popularity over the past few years. An increase in the speed of the access networks has also led to a larger number of users watching videos online. 
Online video streaming traffic is estimated to further increase from the current value of 57% to 69% by 2017, Cisco, 2014. In order to retain the existing users and attract new users, service providers attempt to satisfy the user's expectations and provide a satisfactory viewing experience. The first step toward providing a satisfactory service is to be able to quantify the users' perception of the current service level. Quality of experience (QoE) is a quality metric that provides a holistic measure of the users' perception of the quality. In this survey, we first present a tutorial overview of the popular video streaming techniques deployed for stored videos, followed by identifying various metrics that could be used to quantify the QoE for video streaming services; finally, we present a comprehensive survey of the literature on various tools and measurement methodologies that have been proposed to measure or predict the QoE of online video streaming services.", "title": "" }, { "docid": "cf61f1ecc010e5c021ebbfcf5cbfecf6", "text": "Arachidonic acid plays a central role in a biological control system where such oxygenated derivatives as prostaglandins, thromboxanes, and leukotrienes are mediators. The leukotrienes are formed by transformation of arachidonic acid into an unstable epoxide intermediate, leukotriene A4, which can be converted enzymatically by hydration to leukotriene B4, and by addition of glutathione to leukotriene C4. This last compound is metabolized to leukotrienes D4 and E4 by successive elimination of a gamma-glutamyl residue and glycine. Slow-reacting substance of anaphylaxis consists of leukotrienes C4, D4, and E4. The cysteinyl-containing leukotrienes are potent bronchoconstrictors, increase vascular permeability in postcapillary venules, and stimulate mucus secretion. Leukotriene B4 causes adhesion and chemotactic movement of leukocytes and stimulates aggregation, enzyme release, and generation of superoxide in neutrophils. Leukotrienes C4, D4, and E4, which are released from the lung tissue of asthmatic subjects exposed to specific allergens, seem to play a pathophysiological role in immediate hypersensitivity reactions. These leukotrienes, as well as leukotriene B4, have pro-inflammatory effects.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "2f1862591d5f9ee80d7cdcb930f86d8d", "text": "In this research convolutional neural networks are used to recognize whether a car on a given image is damaged or not. Using transfer learning to take advantage of available models that are trained on a more general object recognition task, very satisfactory performances have been achieved, which indicate the great opportunities of this approach. In the end, also a promising attempt in classifying car damages into a few different classes is presented. Along the way, the main focus was on the influence of certain hyper-parameters and on seeking theoretically founded ways to adapt them, all with the objective of progressing to satisfactory results as fast as possible. 
This research opens doors for future collaborations on image recognition projects in general and for the car insurance field in particular.", "title": "" }, { "docid": "4cb475f264a8773dc502c9bfdd7b260c", "text": "Thinking about intelligent robots involves consideration of how such systems can be enabled to perceive, interpret and act in arbitrary and dynamic environments. While sensor perception and model interpretation focus on the robot's internal representation of the world rather passively, robot grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. These capabilities should also include the generation of stable grasps to safely handle even objects unknown to the robot. We believe that the key to this ability is not to select a good grasp depending on the identification of an object (e.g. as a cup), but on its shape (e.g. as a composition of shape primitives). In this paper, we envelop given 3D data points into primitive box shapes by a fit-and-split algorithm that is based on an efficient Minimum Volume Bounding Box implementation. Though box shapes are not able to approximate arbitrary data in a precise manner, they give efficient clues for planning grasps on arbitrary objects. We present the algorithm and experiments using the 3D grasping simulator GraspIt!.", "title": "" }, { "docid": "f91ba9074d4c4883e4ef6672cd696247", "text": "Contemporary benchmark methods for image inpainting are based on deep generative models and specifically leverage adversarial loss for yielding realistic reconstructions. However, these models cannot be directly applied on image/video sequences because of an intrinsic drawback: the reconstructions might be independently realistic, but, when visualized as a sequence, often lack fidelity to the original uncorrupted sequence. The fundamental reason is that these methods try to find the best matching latent space representation near to natural image manifold without any explicit distance based loss. In this paper, we present a semantically conditioned Generative Adversarial Network (GAN) for sequence inpainting. The conditional information constrains the GAN to map a latent representation to a point in image manifold respecting the underlying pose and semantics of the scene. To the best of our knowledge, this is the first work which simultaneously addresses consistency and correctness of generative model based inpainting. We show that our generative model learns to disentangle pose and appearance information; this independence is exploited by our model to generate highly consistent reconstructions. The conditional information also aids the generator network in GAN to produce sharper images compared to the original GAN formulation. This helps in achieving more appealing inpainting performance. Though generic, our algorithm was targeted for inpainting on faces. When applied on CelebA and Youtube Faces datasets, the proposed method results in significant improvement over the current benchmark, both in terms of quantitative evaluation (Peak Signal to Noise Ratio) and human visual scoring over diversified combinations of resolutions and deformations. Figure 1. Exemplary success of our model in simultaneously preserving facial semantics (appearance and expressions) and improving inpainting quality. 
Benchmark generative models such as DIP [49] are agnostic to holistic facial semantics and thus generate independently realistic, yet structurally inconsistent solutions.", "title": "" }, { "docid": "ae57246e37060c8338ad9894a19f1b6b", "text": "This paper seeks to establish the conceptual and empirical basis for an innovative instrument of corporate knowledge management: the knowledge map. It begins by briefly outlining the rationale for knowledge mapping, i.e., providing a common context to access expertise and experience in large companies. It then conceptualizes five types of knowledge maps that can be used in managing organizational knowledge. They are knowledge-sources, assets, -structures, -applications, and -development maps. In order to illustrate these five types of maps, a series of examples will be presented (from a multimedia agency, a consulting group, a market research firm, and a mediumsized services company) and the advantages and disadvantages of the knowledge mapping technique for knowledge management will be discussed. The paper concludes with a series of quality criteria for knowledge maps and proposes a five step procedure to implement knowledge maps in a corporate intranet.", "title": "" }, { "docid": "1d195fb4df8375772674d0852a046548", "text": "All existing image enhancement methods, such as HDR tone mapping, cannot recover A/D quantization losses due to insufficient or excessive lighting, (underflow and overflow problems). The loss of image details due to A/D quantization is complete and it cannot be recovered by traditional image processing methods, but the modern data-driven machine learning approach offers a much needed cure to the problem. In this work we propose a novel approach to restore and enhance images acquired in low and uneven lighting. First, the ill illumination is algorithmically compensated by emulating the effects of artificial supplementary lighting. Then a DCNN trained using only synthetic data recovers the missing detail caused by quantization.", "title": "" }, { "docid": "1cf029e7284359e3cdbc12118a6d4bf5", "text": "Simultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association, and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Auomation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle-filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms. Part II of this tutorial (this article), surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. 
Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsification in information form, partitioned updates, and submapping methods. A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance-based methods, and multihypothesis techniques. The third development discussed in this tutorial is the trend towards richer appearance-based models of landmarks and maps. While initially motivated by problems in data association and loop closure, these methods have resulted in qualitatively different methods of describing the SLAM problem, focusing on trajectory estimation rather than landmark estimation. The environment representation section surveys current developments in this area along a number of lines, including delayed mapping, the use of nongeometric landmarks, and trajectory estimation methods. SLAM methods have now reached a state of considerable maturity. Future challenges will center on methods enabling large-scale implementations in increasingly unstructured environments and especially in situations where GPS-like solutions are unavailable or unreliable: in urban canyons, under foliage, under water, or on remote planets.", "title": "" }, { "docid": "e493bbcf5f2b561757ca795ab6bb1099", "text": "As a spatio-temporal data-management problem, taxi ridesharing has received a lot of attention recently in the database literature. The broader scientific community, and the commercial world have also addressed the issue through services such as UberPool and Lyftline. The issues addressed have been efficient matching of passengers and taxis, fares, and savings from ridesharing. However, ridesharing fairness has not been addressed so far. Ridesharing fairness is a new problem that we formally define in this paper. We also propose a method of combining the benefits of fair and optimal ridesharing, and of efficiently executing fair and optimal ridesharing queries.", "title": "" }, { "docid": "d9b3f5613a93fcaf1fee35c1c5effee2", "text": "The socio-economic condition & various health hazards are the main suffering at old age. To combat this situation, we have tried to develop and fabricate one wearable electronic rescue system for elderly especially when he is at home alone. The system can detect abnormal condition of heart as well as sudden accidental fall at home. The system has been developed using Arduino Microcontroller and GSM modem. The entire program and evaluation has been developed under LabView platform. 
The prototype was built and trialed successfully.", "title": "" }, { "docid": "6d323f8dbfd7d2883a4926b80097727c", "text": "This work presents a novel geospatial mapping service, based on OpenStreetMap, which has been designed and developed in order to provide personalized path to users with special needs. This system gathers data related to barriers and facilities of the urban environment via crowd sourcing and sensing done by users. It also considers open data provided by bus operating companies to identify the actual accessibility feature and the real time of arrival at the stops of the buses. The resulting service supports citizens with reduced mobility (users with disabilities and/or elderly people) suggesting urban paths accessible to them and providing information related to travelling time, which are tailored to their abilities to move and to the bus arrival time. The manuscript demonstrates the effectiveness of the approach by means of a case study focusing on the differences between the solutions provided by our system and the ones computed by main stream geospatial mapping services.", "title": "" }, { "docid": "ddb51863430250a28f37c5f12c13c910", "text": "Much of our understanding of human thinking is based on probabilistic models. This innovative book by Jerome R. Busemeyer and Peter D. Bruza argues that, actually, the underlying mathematical structures from quantum theory provide a much better account of human thinking than traditional models. They introduce the foundations for modelling probabilistic-dynamic systems using two aspects of quantum theory. The first, “contextuality,” is a way to understand interference effects found with inferences and decisions under conditions of uncertainty. The second, “quantum entanglement,” allows cognitive phenomena to be modelled in non-reductionist ways. Employing these principles drawn from quantum theory allows us to view human cognition and decision in a totally new light. Introducing the basic principles in an easy-to-follow way, this book does not assume a physics background or a quantum brain and comes complete with a tutorial and fully worked-out applications in important areas of cognition and decision.", "title": "" }, { "docid": "a7226ab0968d252bad65931bcc0bc089", "text": "The coupling of renewable energy and hydrogen technologies represents in the mid-term a very interesting way to match the tasks of increasing the reliable exploitation of wind and sea wave energy and introducing clean technologies in the transportation sector. This paper presents two different feasibility studies: the first proposes two plants based on wind and sea wave resource for the production, storage and distribution of hydrogen for public transportation facilities in the West Sicily; the second applies the same approach to Pantelleria (a smaller island), including also some indications about solar resource. In both cases, all buses will be equipped with fuel-cells. A first economic analysis is presented together with the assessment of the avoidable greenhouse gas emissions during the operation phase. The scenarios addressed permit to correlate the demand of urban transport to renewable resources present in the territories and to the modern technologies available for the production of hydrogen from renewable energies. The study focuses on the possibility of tapping the renewable energy potential (wind and sea wave) for the hydrogen production by electrolysis. 
The use of hydrogen would significantly reduce emissions of particulate matter and greenhouse gases in urban districts under analysis. The procedures applied in the present article, as well as the main equations used, are the result of previous applications made in different technical fields that show a good replicability.", "title": "" }, { "docid": "c5639c65908882291c29e147605c79ca", "text": "Dirofilariasis is a rare disease in humans. We report here a case of a 48-year-old male who was diagnosed with pulmonary dirofilariasis in Korea. On chest radiographs, a coin lesion of 1 cm in diameter was shown. Although it looked like a benign inflammatory nodule, malignancy could not be excluded. So, the nodule was resected by video-assisted thoracic surgery. Pathologically, chronic granulomatous inflammation composed of coagulation necrosis with rim of fibrous tissues and granulations was seen. In the center of the necrotic nodules, a degenerating parasitic organism was found. The parasite had prominent internal cuticular ridges and thick cuticle, a well-developed muscle layer, an intestinal tube, and uterine tubules. The parasite was diagnosed as an immature female worm of Dirofilaria immitis. This is the second reported case of human pulmonary dirofilariasis in Korea.", "title": "" } ]
scidocsrr
9682be37139cd83d4b18eb6222e43533
Capacitive Biopotential Measurement for Electrophysiological Signal Acquisition: A Review
[ { "docid": "991ab90963355f16aa2a83655577ba54", "text": "Highly durable, flexible, and even washable multilayer electronic circuitry can be constructed on textile substrates, using conductive yarns and suitably packaged components. In this paper we describe the development of e-broidery (electronic embroidery, i.e., the patterning of conductive textiles by numerically controlled sewing or weaving processes) as a means of creating computationally active textiles. We compare textiles to existing flexible circuit substrates with regard to durability, conformability, and wearability. We also report on: some unique applications enabled by our work; the construction of sensors and user interface elements in textiles; and a complete process for creating flexible multilayer circuits on fabric substrates. This process maintains close compatibility with existing electronic components and design tools, while optimizing design techniques and component packages for use in textiles. E veryone wears clothing. It conveys a sense of the wearer's identity, provides protection from the environment, and supplies a convenient way to carry all the paraphernalia of daily life. Of course, clothing is made from textiles, which are themselves among the first composite materials engineered by humans. Textiles have mechanical, aesthetic, and material advantages that make them ubiquitous in both society and industry. The woven structure of textiles and spun fibers makes them durable, washable, and conformal, while their composite nature affords tremendous variety in their texture, for both visual and tactile senses. Sadly, not everyone wears a computer, although there is presently a great deal of interest in \" wear-able computing. \" 1 Wearable computing may be seen as the result of a design philosophy that integrates embedded computation and sensing into everyday life to give users continuous access to the capabilities of personal computing. Ideally, computers would be as convenient, durable, and comfortable as clothing, but most wearable computers still take an awkward form that is dictated by the materials and processes traditionally used in electronic fabrication. The design principle of packaging electronics in hard plastic boxes (no matter how small) is pervasive, and alternatives are difficult to imagine. As a result, most wearable computing equipment is not truly wearable except in the sense that it fits into a pocket or straps onto the body. What is needed is a way to integrate technology directly into textiles and clothing. Furthermore, textile-based computing is not limited to applications in wearable computing; in fact, it is broadly applicable to ubiquitous computing, allowing the integration of interactive elements into furniture and decor in general. In …", "title": "" }, { "docid": "c2482e67cb4db7ee888b56d952ce76c2", "text": "To obtain maximum unobtrusiveness with sensors for monitoring health parameters on the human body, two technical solutions are combined. First we propose contactless sensors for capacitive electromyography measurements. Secondly, the sensors are integrated into textile, so complete fusion with a wearable garment is enabled. We are presenting the first successful measurements with such sensors. 
Keywords— surface electromyography, capacitive transducer, embroidery, textile electronics, interconnect", "title": "" }, { "docid": "4a5d4db892145324597bd8d6b98c009f", "text": "Advances in wireless communication technologies, such as wearable and implantable biosensors, along with recent developments in the embedded computing area are enabling the design, development, and implementation of body area networks. This class of networks is paving the way for the deployment of innovative healthcare monitoring applications. In the past few years, much of the research in the area of body area networks has focused on issues related to wireless sensor designs, sensor miniaturization, low-power sensor circuitry, signal processing, and communications protocols. In this paper, we present an overview of body area networks, and a discussion of BAN communications types and their related issues. We provide a detailed investigation of sensor devices, physical layer, data link layer, and radio technology aspects of BAN research. We also present a taxonomy of BAN projects that have been introduced/proposed to date. Finally, we highlight some of the design challenges and open issues that still need to be addressed to make BANs truly ubiquitous for a wide range of applications.", "title": "" } ]
[ { "docid": "eb7c34c4959c39acb18fc5920ff73dba", "text": "Acoustic evidence suggests that contemporary Seoul Korean may be developing a tonal system, which is arising in the context of a nearly completed change in how speakers use voice onset time (VOT) to mark the language’s distinction among tense, lax and aspirated stops.Data from 36 native speakers of varying ages indicate that while VOT for tense stops has not changed since the 1960s, VOT differences between lax and aspirated stops have decreased, in some cases to the point of complete overlap. Concurrently, the mean F0 for words beginning with lax stops is significantly lower than the mean F0 for comparable words beginning with tense or aspirated stops. Hence the underlying contrast between lax and aspirated stops is maintained by younger speakers, but is phonetically manifested in terms of differentiated tonal melodies: laryngeally unmarked (lax) stops trigger the introduction of a default L tone, while laryngeally marked stops (aspirated and tense) introduce H, triggered by a feature specification for [stiff].", "title": "" }, { "docid": "c7539441ff7076fa32074ed0ed314e38", "text": "Fairness-aware learning is increasingly important in data mining. Discrimination prevention aims to prevent discrimination in the training data before it is used to conduct predictive analysis. In this paper, we focus on fair data generation that ensures the generated data is discrimination free. Inspired by generative adversarial networks (GAN), we present fairness-aware generative adversarial networks, called FairGAN, which are able to learn a generator producing fair data and also preserving good data utility. Compared with the naive fair data generation models, FairGAN further ensures the classifiers which are trained on generated data can achieve fair classification on real data. Experiments on a real dataset show the effectiveness of FairGAN.", "title": "" }, { "docid": "66ab561342d6f0c80a0eb8d4c2b19a97", "text": "Impedance spectroscopy of biological cells has been used to monitor cell status, e.g. cell proliferation, viability, etc. It is also a fundamental method for the study of the electrical properties of cells which has been utilised for cell identification in investigations of cell behaviour in the presence of an applied electric field, e.g. electroporation. There are two standard methods for impedance measurement on cells. The use of microelectrodes for single cell impedance measurement is one method to realise the measurement, but the variations between individual cells introduce significant measurement errors. Another method to measure electrical properties is by the measurement of cell suspensions, i.e. a group of cells within a culture medium or buffer. This paper presents an investigation of the impedance of normal and cancerous breast cells in suspension using the Maxwell-Wagner mixture theory to analyse the results and extract the electrical parameters of a single cell. The results show that normal and different stages of cancer breast cells can be distinguished by the conductivity presented by each cell.", "title": "" }, { "docid": "350137bf3c493b23aa6d355df946440f", "text": "Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. 
In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. In particular, tracking steering wheel usage and turning angle provide fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.", "title": "" }, { "docid": "2bd9f317404d556b5967e6dcb6832b1b", "text": "Ischemic Heart Disease (IHD) and stroke are statistically the leading causes of death world-wide. Both diseases deal with various types of cardiac arrhythmias, e.g. premature ventricular contractions (PVCs), ventricular and supra-ventricular tachycardia, atrial fibrillation. For monitoring and detecting such an irregular heart rhythm accurately, we are now developing a very cost-effective ECG monitor, which is implemented in 8-bit MCU with an efficient QRS detector using steep-slope algorithm and arrhythmia detection algorithm using a simple heart rate variability (HRV) parameter. This work shows the results of evaluating the real-time steep-slope algorithm using MIT-BIH Arrhythmia Database. The performance of this algorithm has 99.72% of sensitivity and 99.19% of positive predictivity. We then show the preliminary results of arrhythmia detection using various types of normal and abnormal ECGs from an ECG simulator. The result is, 18 of 20 ECG test signals were correctly detected.", "title": "" }, { "docid": "fc25e19d03a6686a0829a823d97cedbe", "text": "OBJECTIVE\nThe problem of identifying, in advance, the most effective treatment agent for various psychiatric conditions remains an elusive goal. To address this challenge, we investigate the performance of the proposed machine learning (ML) methodology (based on the pre-treatment electroencephalogram (EEG)) for prediction of response to treatment with a selective serotonin reuptake inhibitor (SSRI) medication in subjects suffering from major depressive disorder (MDD).\n\n\nMETHODS\nA relatively small number of most discriminating features are selected from a large group of candidate features extracted from the subject's pre-treatment EEG, using a machine learning procedure for feature selection. The selected features are fed into a classifier, which was realized as a mixture of factor analysis (MFA) model, whose output is the predicted response in the form of a likelihood value. This likelihood indicates the extent to which the subject belongs to the responder vs. non-responder classes. The overall method was evaluated using a \"leave-n-out\" randomized permutation cross-validation procedure.\n\n\nRESULTS\nA list of discriminating EEG biomarkers (features) was found. The specificity of the proposed method is 80.9% while sensitivity is 94.9%, for an overall prediction accuracy of 87.9%. 
There is a 98.76% confidence that the estimated prediction rate is within the interval [75%, 100%].\n\n\nCONCLUSIONS\nThese results indicate that the proposed ML method holds considerable promise in predicting the efficacy of SSRI antidepressant therapy for MDD, based on a simple and cost-effective pre-treatment EEG.\n\n\nSIGNIFICANCE\nThe proposed approach offers the potential to improve the treatment of major depression and to reduce health care costs.", "title": "" }, { "docid": "5229fb13c66ca8a2b079f8fe46bb9848", "text": "We put forth a lookup-table-based modular reduction method which partitions the binary string of an integer to be reduced into blocks according to its runs. Its complexity depends on the amount of runs in the binary string. We show that the new reduction is almost twice as fast as the popular Barrett’s reduction, and provide a thorough complexity analysis of the method.", "title": "" }, { "docid": "0116f7792bfcd4d675056628544801fb", "text": "Over the last few years, Cloud storage systems and so-called NoSQL datastores have found widespread adoption. In contrast to traditional databases, these storage systems typically sacrifice consistency in favor of latency and availability as mandated by the CAP theorem, so that they only guarantee eventual consistency. Existing approaches to benchmark these storage systems typically omit the consistency dimension or did not investigate eventuality of consistency guarantees. In this work we present a novel approach to benchmark staleness in distributed datastores and use the approach to evaluate Amazon's Simple Storage Service (S3). We report on our unexpected findings.", "title": "" }, { "docid": "38bdfe23b1e62cd162ed18d741f9ba05", "text": "The authors present results of 4 studies that seek to determine the discriminant and incremental validity of the 3 most widely studied traits in psychology-self-esteem, neuroticism, and locus of control-along with a 4th, closely related trait-generalized self-efficacy. Meta-analytic results indicated that measures of the 4 traits were strongly related. Results also demonstrated that a single factor explained the relationships among measures of the 4 traits. The 4 trait measures display relatively poor discriminant validity, and each accounted for little incremental variance in predicting external criteria relative to the higher order construct. In light of these results, the authors suggest that measures purporting to assess self-esteem, locus of control, neuroticism, and generalized self-efficacy may be markers of the same higher order concept.", "title": "" }, { "docid": "bcda77a0de7423a2a4331ff87ce9e969", "text": "Because of the increasingly competitive nature of the computer manufacturing industry, Compaq Computer Corporation has made some trend-setting changes in the way it does business. One of these changes is the extension of Compaq's call-logging sy ste problem-resolution component that assists customer support personnel in determining the resolution to a customer's questions and problems. Recently, Compaq extended its customer service to provide not only dealer support but also direct end user support; it is also accepting ownership of any Compaq customer's problems in a Banyan, Mi-crosoft, Novell, or SCO UNIX operating environment. One of the tools that makes this feat possible is SMART (support management automated reasoning technology). 
SMART is part of a Compaq strategy to increase the effectiveness of the customer support staff and reduce overall cost to the organization by retaining problem-solving knowledge and making it available to the entire support staff at the point it is needed.", "title": "" }, { "docid": "cec3d18ea5bd7eba435e178e2fcb38b0", "text": "The synthesis of three-degree-of-freedom planar parallel manipulators is performed using a genetic algorithm. The architecture of a manipulator and its position and orientation with respect to a prescribed workspace are determined. The architectural parameters are optimized so that the manipulator’s constantorientation workspace is as close as possible to a prescribed workspace. The manipulator’s workspace is discretized and its dexterity is computed as a global property of the manipulator. An analytical expression of the singularity loci (local null dexterity) can be obtained from the Jacobian matrix determinant, and its intersection with the manipulator’s workspace may be verified and avoided. Results are shown for different conditions. First, the manipulators’ workspaces are optimized for a prescribed workspace, without considering whether the singularity loci intersect it or not. Then the same type of optimization is performed, taking intersections with the singularity loci into account. In the following results, the optimization of the manipulator’s dexterity is also included in an objective function, along with the workspace optimization and the avoidance of singularity loci. Results show that the end-effector’s location has a significant effect on the manipulator’s dexterity. ©2002 John Wiley & Sons, Inc.", "title": "" }, { "docid": "aeadbf476331a67bec51d5d6fb6cc80b", "text": "Gamification, an emerging idea for using game-design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, little research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users, that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and at the same time, advance", "title": "" }, { "docid": "c11e1e156835d98707c383711f4e3953", "text": "We present an approach for automatically generating provably correct abstractions from C source code that are useful for practical implementation verification. The abstractions are easier for a human verification engineer to reason about than the implementation and increase the productivity of interactive code proof. 
We guarantee soundness by automatically generating proofs that the abstractions are correct.\n In particular, we show two key abstractions that are critical for verifying systems-level C code: automatically turning potentially overflowing machine-word arithmetic into ideal integers, and transforming low-level C pointer reasoning into separate abstract heaps. Previous work carrying out such transformations has either done so using unverified translations, or required significant proof engineering effort.\n We implement these abstractions in an existing proof-producing specification transformation framework named AutoCorres, developed in Isabelle/HOL, and demonstrate its effectiveness in a number of case studies. We show scalability on multiple OS microkernels, and we show how our changes to AutoCorres improve productivity for total correctness by porting an existing high-level verification of the Schorr-Waite algorithm to a low-level C implementation with minimal effort.", "title": "" }, { "docid": "dae40fa32526bf965bad70f98eb51bb7", "text": "Weight pruning methods for deep neural networks (DNNs) have been investigated recently, but prior work in this area is mainly heuristic, iterative pruning, thereby lacking guarantees on the weight reduction ratio and convergence time. To mitigate these limitations, we present a systematic weight pruning framework of DNNs using the alternating direction method of multipliers (ADMM). We first formulate the weight pruning problem of DNNs as a nonconvex optimization problem with combinatorial constraints specifying the sparsity requirements, and then adopt the ADMM framework for systematic weight pruning. By using ADMM, the original nonconvex optimization problem is decomposed into two subproblems that are solved iteratively. One of these subproblems can be solved using stochastic gradient descent, the other can be solved analytically. Besides, our method achieves a fast convergence rate. The weight pruning results are very promising and consistently outperform the prior work. On the LeNet-5 model for the MNIST data set, we achieve 71.2× weight reduction without accuracy loss. On the AlexNet model for the ImageNet data set, we achieve 21× weight reduction without accuracy loss. When we focus on the convolutional layer pruning for computation reductions, we can reduce the total computation by five times compared with the prior work (achieving a total of 13.4× weight reduction in convolutional layers). Our models and codes are released at https://github.com/KaiqiZhang/admm-pruning.", "title": "" }, { "docid": "630901f1a1b25a5a2af65b566505de65", "text": "In many complex robot applications, such as grasping and manipulation, it is difficult to program desired task solutions beforehand, as robots are within an uncertain and dynamic environment. In such cases, learning tasks from experience can be a useful alternative. To obtain a sound learning and generalization performance, machine learning, especially, reinforcement learning, usually requires sufficient data. However, in cases where only little data is available for learning, due to system constraints and practical issues, reinforcement learning can act suboptimally. In this paper, we investigate how model-based reinforcement learning, in particular the probabilistic inference for learning control method (Pilco), can be tailored to cope with the case of sparse data to speed up learning. The basic idea is to include further prior knowledge into the learning process. 
As Pilco is built on the probabilistic Gaussian processes framework, additional system knowledge can be incorporated by defining appropriate prior distributions, e.g. a linear mean Gaussian prior. The resulting Pilco formulation remains in closed form and analytically tractable. The proposed approach is evaluated in simulation as well as on a physical robot, the Festo Robotino XT. For the robot evaluation, we employ the approach for learning an object pick-up task. The results show that by including prior knowledge, policy learning can be sped up in presence of sparse data.", "title": "" }, { "docid": "0991b582ad9fcc495eb534ebffe3b5f8", "text": "A computationally cheap extension from single-microphone acoustic echo cancellation (AEC) to multi-microphone AEC is presented for the case of a single loudspeaker. It employs the idea of common-acoustical-pole and zero modeling of room transfer functions (RTFs). The RTF models used for multi-microphone AEC share a fixed common denominator polynomial, which is calculated off-line by means of a multi-channel warped linear prediction. By using the common denominator polynomial as a prefilter, only the numerator polynomial has to be estimated recursively for each microphone, hence adapting to changes in the RTFs. This approach allows to decrease the number of numerator coefficients by one order of magnitude for each microphone compared with all-zero modeling. In a first configuration, the prefiltering is done on the adaptive filter signal, hence achieving a pole-zero model of the RTF in the AEC. In a second configuration, the (inverse) prefiltering is done on the loudspeaker signal, hence achieving a dereverberation effect, in addition to AEC, on the microphone signals.", "title": "" }, { "docid": "8ce3fc72fa132b8baeff35035354d194", "text": "Raman spectroscopy is a molecular vibrational spectroscopic technique that is capable of optically probing the biomolecular changes associated with diseased transformation. The purpose of this study was to explore near-infrared (NIR) Raman spectroscopy for identifying dysplasia from normal gastric mucosa tissue. A rapid-acquisition dispersive-type NIR Raman system was utilised for tissue Raman spectroscopic measurements at 785 nm laser excitation. A total of 76 gastric tissue samples obtained from 44 patients who underwent endoscopy investigation or gastrectomy operation were used in this study. The histopathological examinations showed that 55 tissue specimens were normal and 21 were dysplasia. Both the empirical approach and multivariate statistical techniques, including principal components analysis (PCA), and linear discriminant analysis (LDA), together with the leave-one-sample-out cross-validation method, were employed to develop effective diagnostic algorithms for classification of Raman spectra between normal and dysplastic gastric tissues. High-quality Raman spectra in the range of 800–1800 cm−1 can be acquired from gastric tissue within 5 s. There are specific spectral differences in Raman spectra between normal and dysplasia tissue, particularly in the spectral ranges of 1200–1500 cm−1 and 1600–1800 cm−1, which contained signals related to amide III and amide I of proteins, CH3CH2 twisting of proteins/nucleic acids, and the C=C stretching mode of phospholipids, respectively. 
The empirical diagnostic algorithm based on the ratio of the Raman peak intensity at 875 cm−1 to the peak intensity at 1450 cm−1 gave the diagnostic sensitivity of 85.7% and specificity of 80.0%, whereas the diagnostic algorithms based on PCA-LDA yielded the diagnostic sensitivity of 95.2% and specificity 90.9% for separating dysplasia from normal gastric tissue. Receiver operating characteristic (ROC) curves further confirmed that the most effective diagnostic algorithm can be derived from the PCA-LDA technique. Therefore, NIR Raman spectroscopy in conjunction with multivariate statistical technique has potential for rapid diagnosis of dysplasia in the stomach based on the optical evaluation of spectral features of biomolecules.", "title": "" }, { "docid": "0df3d30837edd0e7809ed77743a848db", "text": "Many language processing tasks can be reduced to breaking the text into segments with prescribed properties. Such tasks include sentence splitting, tokenization, named-entity extraction, and chunking. We present a new model of text segmentation based on ideas from multilabel classification. Using this model, we can naturally represent segmentation problems involving overlapping and non-contiguous segments. We evaluate the model on entity extraction and noun-phrase chunking and show that it is more accurate for overlapping and non-contiguous segments, but it still performs well on simpler data sets for which sequential tagging has been the best method.", "title": "" }, { "docid": "fb4d926254409df9d212b834d492271f", "text": "Restrictive dermopathy (RD) is a rare, fatal, and genetically heterogeneous laminopathy with a predominant autosomal recessive heredity pattern. The phenotype can be caused by mutations in either LMNA (primary laminopathy) or ZMPSTE24 (secondary laminopathy) genes but mostly by homozygous or compound heterozygous ZMPSTE24 mutations. Clinicopathologic findings are unique, allowing a specific diagnosis in most cases. We describe a premature newborn girl of non-consanguineous parents who presented a rigid, translucent and tightly adherent skin, dysmorphic facies, multiple joint contractures and radiological abnormalities. The overall clinical, radiological, histological, and ultrastructural features were typical of restrictive dermopathy. Molecular genetic analysis revealed a homozygous ZMPSTE24 mutation (c.1085_1086insT). Parents and sister were heterozygous asymptomatic carriers. We conclude that RD is a relatively easy and consistent clinical and pathological diagnosis. Despite recent advances in our understanding of RD, the pathogenetic mechanisms of the disease are not entirely clarified. Recognition of RD and molecular genetic diagnosis are important to define the prognosis of an affected child and for recommending genetic counseling to affected families. However, the outcome for a live born patient in the neonatal period is always fatal.", "title": "" }, { "docid": "0dc0815505f065472b3929792de638b4", "text": "Our aim was to comprehensively validate the 1-min sit-to-stand (STS) test in chronic obstructive pulmonary disease (COPD) patients and explore the physiological response to the test.We used data from two longitudinal studies of COPD patients who completed inpatient pulmonary rehabilitation programmes. We collected 1-min STS test, 6-min walk test (6MWT), health-related quality of life, dyspnoea and exercise cardiorespiratory data at admission and discharge. 
We assessed the learning effect, test-retest reliability, construct validity, responsiveness and minimal important difference of the 1-min STS test. In both studies (n=52 and n=203) the 1-min STS test was strongly correlated with the 6MWT at admission (r=0.59 and 0.64, respectively) and discharge (r=0.67 and 0.68, respectively). Intraclass correlation coefficients (95% CI) between 1-min STS tests were 0.93 (0.83-0.97) for learning effect and 0.99 (0.97-1.00) for reliability. Standardised response means (95% CI) were 0.87 (0.58-1.16) and 0.91 (0.78-1.07). The estimated minimal important difference was three repetitions. End-exercise oxygen consumption, carbon dioxide output, ventilation, breathing frequency and heart rate were similar in the 1-min STS test and 6MWT. The 1-min STS test is a reliable, valid and responsive test for measuring functional exercise capacity in COPD patients and elicited a physiological response comparable to that of the 6MWT.", "title": "" } ]
scidocsrr
72511e7c85c6a895cf9bc70020b609b4
Investigating the success of ERP systems: Case studies in three Taiwanese high-tech industries
[ { "docid": "c332a71f7be8412a9aa37159d7ad6f07", "text": "This paper critically reviews measures of user information satisfaction and selects one for replication and extension. A survey of production managers is used to provide additional support for the instrument, eliminate scales that are psychometrically unsound, and develop a standard short form for use when only an overall assessment of information satisfaction is required and survey time is limited.", "title": "" } ]
[ { "docid": "776de4218230e161570d599440183354", "text": "For the first time, we present a state-of-the-art energy-efficient 16nm technology integrated with FinFET transistors, 0.07um2 high density (HD) SRAM, Cu/low-k interconnect and high density MiM for mobile SoC and computing applications. This technology provides 2X logic density and >35% speed gain or >55% power reduction over our 28nm HK/MG planar technology. To our knowledge, this is the smallest fully functional 128Mb HD FinFET SRAM (with single fin) test-chip demonstrated with low Vccmin for 16nm node. Low leakage (SVt) FinFET transistors achieve excellent short channel control with DIBL of <;30 mV/V and superior Idsat of 520/525 uA/um at 0.75V and Ioff of 30 pA/um for NMOS and PMOS, respectively.", "title": "" }, { "docid": "015dbd7c7d1011802046f9b24df24280", "text": "The Resource Description Framework (RDF) provides a common data model for the integration of “real-time” social and sensor data streams with the Web and with each other. While there exist numerous protocols and data formats for exchanging dynamic RDF data, or RDF updates, these options should be examined carefully in order to enable a Semantic Web equivalent of the high-throughput, low-latency streams of typical Web 2.0, multimedia, and gaming applications. This paper contains a brief survey of RDF update formats and a high-level discussion of both TCP and UDPbased transport protocols for updates. Its main contribution is the experimental evaluation of a UDP-based architecture which serves as a real-world example of a high-performance RDF streaming application in an Internet-scale distributed environment.", "title": "" }, { "docid": "f6b974c04dceaea3176a0092304bab72", "text": "Information-Centric Networking (ICN) has recently emerged as a promising Future Internet architecture that aims to cope with the increasing demand for highly scalable and efficient distribution of content. Moving away from the Internet communication model based in addressable hosts, ICN leverages in-network storage for caching, multi-party communication through replication, and interaction models that decouple senders and receivers. This novel networking approach has the potential to outperform IP in several dimensions, besides just content dissemination. Concretely, the rise of the Internet of Things (IoT), with its rich set of challenges and requirements placed over the current Internet, provide an interesting ground for showcasing the contribution and performance of ICN mechanisms. This work analyses how the in-network caching mechanisms associated to ICN, particularly those implemented in the Content-Centric Networking (CCN) architecture, contribute in IoT environments, particularly in terms of energy consumption and bandwidth usage. A simulation comparing IP and the CCN architecture (an instantiation of ICN) in IoT environments demonstrated that CCN leads to a considerable reduction of the energy consumed by the information producers and to a reduction of bandwidth requirements, as well as highlighted the flexibility for adapting current ICN caching mechanisms to target specific requirements of IoT.", "title": "" }, { "docid": "8988596b2b38cf61b8d0f7bb3ad8f5d7", "text": "National cyber security centers (NCSCs) are gaining more and more importance to ensure the security and proper operations of critical infrastructures (CIs). As a prerequisite, NCSCs need to collect, analyze, process, assess and share security-relevant information from infrastructure operators. 
A vital capability of mentioned NCSCs is to establish Cyber Situational Awareness (CSA) as a precondition for understanding the security situation of critical infrastructures. This is important for proper risk assessment and subsequent reduction of potential attack surfaces at national level. In this paper, we therefore survey theoretical models relevant for Situational Awareness (SA) and present a collaborative CSA model for NCSCs in order to enhance the protection of CIs at national level. Additionally, we provide an application scenario to illustrate a handson case of utilizing a CSA model in a NCSC, especially focusing on information sharing. We foresee this illustrative scenario to aid decision makers and practitioners who are involved in establishing NCSCs and cyber security processes on national level to better understand the specific implications regarding the application of the CSA model for NCSCs.", "title": "" }, { "docid": "9254b7c1f6a0393524d68aaa683dab58", "text": "Millions of users share their opinions on Twitter, making it a valuable platform for tracking and analyzing public sentiment. Such tracking and analysis can provide critical information for decision making in various domains. Therefore it has attracted attention in both academia and industry. Previous research mainly focused on modeling and tracking public sentiment. In this work, we move one step further to interpret sentiment variations. We observed that emerging topics (named foreground topics) within the sentiment variation periods are highly related to the genuine reasons behind the variations. Based on this observation, we propose a Latent Dirichlet Allocation (LDA) based model, Foreground and Background LDA (FB-LDA), to distill foreground topics and filter out longstanding background topics. These foreground topics can give potential interpretations of the sentiment variations. To further enhance the readability of the mined reasons, we select the most representative tweets for foreground topics and develop another generative model called Reason Candidate and Background LDA (RCB-LDA) to rank them with respect to their “popularity” within the variation period. Experimental results show that our methods can effectively find foreground topics and rank reason candidates. The proposed models can also be applied to other tasks such as finding topic differences between two sets of documents.", "title": "" }, { "docid": "21ac4dac4ddbdfd271e6f546405fb3f7", "text": "This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn.", "title": "" }, { "docid": "b8a7eb324085eef83f88185b9544d5b5", "text": "The research in the area of game accessibility has grown significantly since the last time it was examined in 2005. This paper examines the body of work between 2005 and 2010. 
We selected a set of papers on topics we felt represented the scope of the field, but were not able to include all papers on the subject. A summary of the research we examined is provided, along with suggestions for future work in game accessibility. It is hoped that this summary will prompt others to perform further research in this area.", "title": "" }, { "docid": "8856fa1c0650970da31fae67cd8dcd86", "text": "In this paper, a new topology for rectangular waveguide bandpass and low-pass filters is presented. A simple, accurate, and robust design technique for these novel meandered waveguide filters is provided. The proposed filters employ a concatenation of ±90° $E$ -plane mitered bends (±90° EMBs) with different heights and lengths, whose dimensions are consecutively and independently calculated. Each ±90° EMB satisfies a local target reflection coefficient along the device so that they can be calculated separately. The novel structures allow drastically reduce the total length of the filters and embed bends if desired, or even to provide routing capabilities. Furthermore, the new meandered topology allows the introduction of transmission zeros above the passband of the low-pass filter, which can be controlled by the free parameters of the ±90° EMBs. A bandpass and a low-pass filter with meandered topology have been designed following the proposed novel technique. Measurements of the manufactured prototypes are also included to validate the novel topology and design technique, achieving excellent agreement with the simulation results.", "title": "" }, { "docid": "843816964f6862bee7981229ccaf6432", "text": "We present a practical approach to global motion planning and terrain assessment for ground robots in generic three-dimensional (3D) environments, including rough outdoor terrain, multilevel facilities, and more complex geometries. Our method computes optimized six-dimensional trajectories compliant with curvature and continuity constraints directly on unordered point cloud maps, omitting any kind of explicit surface reconstruction, discretization, or topology extraction. We assess terrain geometry and traversability on demand during motion planning, by fitting robot-sized planar patches to the map and analyzing the local distribution of map points. Our motion planning approach consists of sampling-based initial trajectory generation, followed by precise local optimization according to a custom cost measure, using a novel, constraint-aware trajectory optimization paradigm. We embed these methods in a complete autonomous navigation system based on localization and mapping by means of a 3D laser scanner and iterative closest point matching, suitable for both static and dynamic environments. The performance of the planning and terrain assessment algorithms is evaluated in offline experiments using recorded and simulated sensor data. Finally, we present the results of navigation experiments in three different environments—rough outdoor terrain, a two-level parking garage, and a dynamic environment, demonstrating how the proposed methods enable autonomous navigation in complex 3D terrain.", "title": "" }, { "docid": "75952b1d2c9c2f358c4c2e3401a00245", "text": "This book is an outstanding contribution to the philosophical study of language and mind, by one of the most influential thinkers of our time. 
In a series of penetrating essays, Noam Chomsky cuts through the confusion and prejudice which has infected the study of language and mind, bringing new solutions to traditional philosophical puzzles and fresh perspectives on issues of general interest, ranging from the mind–body problem to the unification of science. Using a range of imaginative and deceptively simple linguistic analyses, Chomsky argues that there is no coherent notion of “language” external to the human mind, and that the study of language should take as its focus the mental construct which constitutes our knowledge of language. Human language is therefore a psychological, ultimately a “biological object,” and should be analysed using the methodology of the natural sciences. His examples and analyses come together in this book to give a unique and compelling perspective on language and the mind.", "title": "" }, { "docid": "35e662f6c1d75e6878a78c4c443b9448", "text": "ÐThis paper introduces a refined general definition of a skeleton that is based on a penalized-distance function and cannot create any of the degenerate cases of the earlier CEASAR and TEASAR algorithms. Additionally, we provide an algorithm that finds the skeleton accurately and rapidly. Our solution is fully automatic, which frees the user from having to engage in manual data preprocessing. We present the accurate skeletons computed on a number of test datasets. The algorithm is very efficient as demonstrated by the running times which were all below seven minutes. Index TermsÐSkeleton, centerline, medial axis, automatic preprocessing, modeling.", "title": "" }, { "docid": "7875910ad044232b4631ecacfec65656", "text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "e471e41553bf7c229a38f3d226ff8a28", "text": "Large AC machines are sometimes fed by multiple inverters. This paper presents the complete steady-state analysis of the PM synchronous machine with multiplex windings, suitable for driving by multiple independent inverters. Machines with 4, 6 and 9 phases are covered in detail. Particular attention is given to the magnetic interactions not only between individual phases, but between channels or groups of phases. This is of interest not only for determining performance and designing control systems, but also for analysing fault tolerance. 
It is shown how to calculate the necessary self- and mutual inductances and how to reduce them to a compact dq-axis model without loss of detail.", "title": "" }, { "docid": "eeda67ba0bc36bd1984789be93d8ce9c", "text": "Using modified constructivist grounded theory, the purpose of the present study was to explore positive body image experiences in people with spinal cord injury. Nine participants (five women, four men) varying in age (21-63 years), type of injury (C3-T7; complete and incomplete), and years post-injury (4-36 years) were recruited. The following main categories were found: body acceptance, body appreciation and gratitude, social support, functional gains, independence, media literacy, broadly conceptualizing beauty, inner positivity influencing outer demeanour, finding others who have a positive body image, unconditional acceptance from others, religion/spirituality, listening to and taking care of the body, managing secondary complications, minimizing pain, and respect. Interestingly, there was consistency in positive body image characteristics reported in this study with those found in previous research, demonstrating universality of positive body image. However, unique characteristics (e.g., resilience, functional gains, independence) were also reported demonstrating the importance of exploring positive body image in diverse groups.", "title": "" }, { "docid": "c8478b6104aa725b6c3eb16270e9bd99", "text": "Industry is the part of an economy that produces material goods which are highly mechanized and automatized. Ever since the beginning of industrialization, technological leaps have led to paradigm shifts which today are ex-post named “industrial revolutions”: in the field of mechanization (the so-called 1st industrial revolution), of the intensive use of electrical energy (the so-called 2nd industrial revolution), and of the widespread digitalization (the so-called 3rd industrial revolution). On the basis of an advanced digitalization within factories, the combination of Internet technologies and future-oriented technologies in the field of “smart” objects (machines and products) seems to result in a new fundamental paradigm shift in industrial production. The vision of future production contains modular and efficient manufacturing systems and characterizes scenarios in which products control their own manufacturing process. This is supposed to realize the manufacturing of individual products in a batch size of one while maintaining the economic conditions of mass production. Tempted by this future expectation, the term “Industry 4.0” was established exante for a planned “4th industrial revolution”, the term being a reminiscence of software versioning. Decisive for the fast spread was the recommendation for implementation to the German Government, which carried the term in its title and was picked up willingly by the Federal Ministry of Education and Research and has become an eponym for a future project in the context of the high-tech strategy 2020. Currently an industrial platform consisting of three well-known industry associations named “Industry 4.0” is contributing to the dispersion of the term. Outside of the German-speaking area, the term is not common. In this paper the term “Industry 4.0” describes a future project that can be defined by two development directions. On the one hand there is a huge applicationpull, which induces a remarkable need for changes due to changing operative framework conditions. 
Triggers for this are general social, economic, and political changes. Those are in particular: Short development periods: Development periods and innovation periods need to be shortened. High innovation capability is becoming an essential success factor for many enterprises (“time to market”). Individualization on demand: A change from a seller’s into a buyer’s market has been becoming apparent for decades now, which means buyers can define the conditions of the trade. This trend leads to an increasing individualization of products and in extreme cases to individual products. This is also called “batch size one”. Flexibility: Due to the new framework requirements, higher flexibility in product development, especially in production, is necessary. Decentralization: To cope with the specified conditions, faster decisionmaking procedures are necessary. For this, organizational hierarchies need to be reduced. Resource efficiency: Increasing shortage and the related increase of prices for resources as well as social change in the context of ecological aspects require a more intensive focus on sustainability in industrial contexts. The aim is an economic and ecological increase in efficiency. On the other hand, there is an exceptional technology-push in industrial practice. This technology-push has already influenced daily routine in private areas. Buzzwords are Web 2.0, Apps,", "title": "" }, { "docid": "d2f71960706eabfa2a4800f7ccb5d7b2", "text": "Mixed land use refers to the effort of putting residential, commercial and recreational uses in close proximity to one another. This can contribute economic benefits, support viable public transit, and enhance the perceived security of an area. It is naturally promising to investigate how to rank real estate from the viewpoint of diverse mixed land use, which can be reflected by the portfolio of community functions in the observed area. To that end, in this paper, we develop a geographical function ranking method, named FuncDivRank, by incorporating the functional diversity of communities into real estate appraisal. Specifically, we first design a geographic function learning model to jointly capture the correlations among estate neighborhoods, urban functions, temporal effects, and user mobility patterns. In this way we can learn latent community functions and the corresponding portfolios of estates from human mobility data and Point of Interest (POI) data. Then, we learn the estate ranking indicator by simultaneously maximizing ranking consistency and functional diversity, in a unified probabilistic optimization framework. Finally, we conduct a comprehensive evaluation with real-world data. The experimental results demonstrate the enhanced performance of the proposed method for real estate appraisal.", "title": "" }, { "docid": "8bb0e19b03468313a52a1800a56f21db", "text": "DeSR is a statistical transition-based dependency parser which learns from annotated corpora which actions to perform for building parse trees while scanning a sentence. We describe recent improvements to the parser, in particular stacked parsing, exploiting a beam search strategy and using a Multilayer Perceptron classifier. For the Evalita 2009 Dependency Parsing task DesR was configured to use a combination of stacked parsers. The stacked combination achieved the best accuracy scores in both the main and pilot subtasks. 
The contribution to the result of various choices is analyzed, in particular for taking advantage of the peculiar features of the TUT Treebank.", "title": "" }, { "docid": "65d1578555a3d29ddc7daf1dd48c8e06", "text": "Overhangs have been a major stumbling block in the context of terrain synthesis models. These models resort invariably to a heightfield paradigm, which immediately precludes the existence of any type of overhang. This article presents a new technique for the generation of surfaces, with the ability to model overhangs in a procedural way. This technique can be used generally to model landscape elements, not only terrains but also the surface of the sea. The technique applies non-linear deformations to an initial heightfield surface. The deformations occur after the surface has been displaced along some specified vector field. The method is conceptually simple and enhances greatly the class of landscapes synthesized with procedural models.", "title": "" }, { "docid": "3b6e3884a9d3b09d221d06f3dea20683", "text": "Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data – as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. This is by placing a probability distribution over the CNN’s kernels. We approximate our model’s intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-theart results for CIFAR-10.", "title": "" }, { "docid": "58bc5fb67cfb5e4b623b724cb4283a17", "text": "In recent years, power systems have been very difficult to manage as the load demands increase and environment constraints restrict the distribution network. One another mode used for distribution of Electrical power is making use of underground cables (generally in urban areas only) instead of overhead distribution network. The use of underground cables arise a problem of identifying the fault location as it is not open to view as in case of overhead network. To improve the reliability of a distribution system, accurate identification of a faulted segment is required in order to reduce the interruption time during fault. Speedy and precise fault location plays an important role in accelerating system restoration, reducing outage time, reducing great financial loss and significantly improving system reliability. The objective of this paper is to study the methods of determining the distance of underground cable fault from the base station in kilometers. Underground cable system is a common practice followed in major urban areas. While a fault occurs for some reason, at that time the repairing process related to that particular cable is difficult due to exact unknown location of the fault in the cable. In this paper, a technique for detecting faults in underground distribution system is presented. 
The proposed system is used to find out the exact location of the fault and to send an SMS with the details to a remote mobile phone using a GSM module.", "title": "" } ]
scidocsrr
86ecb9e9a707d7aec99232f2d9d3aba7
Investigating factors influencing local government decision makers while adopting integration technologies (IntTech)
[ { "docid": "82fa51c143159f2b85f9d2e5b610e30d", "text": "Strategies are systematic and long-term approaches to problems. Federal, state, and local governments are investing in the development of strategies to further their e-government goals. These strategies are based on their knowledge of the field and the relevant resources available to them. Governments are communicating these strategies to practitioners through the use of practical guides. The guides provide direction to practitioners as they consider, make a case for, and implement IT initiatives. This article presents an analysis of a selected set of resources government practitioners use to guide their e-government efforts. A selected review of current literature on the challenges to information technology initiatives is used to create a framework for the analysis. A gap analysis examines the extent to which IT-related research is reflected in the practical guides. The resulting analysis is used to identify a set of commonalities across the practical guides and a set of recommendations for future development of practitioner guides and future research into e-government initiatives. D 2005 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "d62c4280bbef1039a393e6949a164946", "text": "Purpose – Achieving goals of better integrated and responsive government services requires moving away from stand alone applications toward more comprehensive, integrated architectures. As a result there is a mounting pressure to integrate disparate systems to support information exchange and cross-agency business processes. There are substantial barriers that governments must overcome to achieve these goals and to profit from enterprise application integration (EAI). Design/methodology/approach – In the research presented here we develop and test a methodology aimed at overcoming the barriers blocking adoption of EAI. This methodology is based on a discrete-event simulation of public sector structure, business processes and applications in combination with an EAI perspective. Findings – The testing suggests that our methodology helps to provide insight into the myriad of existing applications, and the implications of EAI. Moreover, it helps to identify novel options, gain stakeholder commitment, let them agree on the sharing of EAI costs, and finally it supports collaborative decision-making between public agencies. Practical implications – The approach is found to be useful for making the business case for EAI projects, and gaining stakeholder commitment prior to implementation. Originality/value – The joint addressing of the barriers of public sector reform including the transformation of the public sector structure, gaining of stakeholders’ commitment, understanding EAI technology and dependencies between cross-agency business processes, and a fair division of costs and benefits over stakeholders.", "title": "" } ]
[ { "docid": "77045e77d653bfa37dfbd1a80bb152da", "text": "We propose a new technique for training deep neural networks (DNNs) as data-driven feature front-ends for large vocabulary continuous speech recognition (LVCSR) in low resource settings. To circumvent the lack of sufficient training data for acoustic modeling in these scenarios, we use transcribed multilingual data and semi-supervised training to build the proposed feature front-ends. In our experiments, the proposed features provide an absolute improvement of 16% in a low-resource LVCSR setting with only one hour of in-domain training data. While close to three-fourths of these gains come from DNN-based features, the remaining are from semi-supervised training.", "title": "" }, { "docid": "b13a03598044db36ecf4634317071b34", "text": "Space Religion Encryption Sport Science space god encryption player science satellite atheism device hall theory april exist technology defensive scientific sequence atheist protect team universe launch moral americans average experiment president existence chip career observation station marriage use league evidence radar system privacy play exist training parent industry bob god committee murder enforcement year mistake", "title": "" }, { "docid": "8433df9d46df33f1389c270a8f48195d", "text": "BACKGROUND\nFingertip injuries involve varying degree of fractures of the distal phalanx and nail bed or nail plate disruptions. The treatment modalities recommended for these injuries include fracture fixation with K-wire and meticulous repair of nail bed after nail removal and later repositioning of nail or stent substitute into the nail fold by various methods. This study was undertaken to evaluate the functional outcome of vertical figure-of-eight tension band suture for finger nail disruptions with fractures of distal phalanx.\n\n\nMATERIALS AND METHODS\nA series of 40 patients aged between 4 and 58 years, with 43 fingernail disruptions and fracture of distal phalanges, were treated with vertical figure-of-eight tension band sutures without formal fixation of fracture fragments and the results were reviewed. In this method, the injuries were treated by thoroughly cleaning the wound, reducing the fracture fragments, anatomical replacement of nail plate, and securing it by vertical figure-of-eight tension band suture.\n\n\nRESULTS\nAll patients were followed up for a minimum of 3 months. The clinical evaluation of the patients was based on radiological fracture union and painless pinch to determine fingertip stability. Every single fracture united and every fingertip was clinically stable at the time of final followup. We also evaluated our results based on visual analogue scale for pain and range of motion of distal interphalangeal joint. Two sutures had to be revised due to over tensioning and subsequent vascular compromise within minutes of repair; however, this did not affect the final outcome.\n\n\nCONCLUSION\nThis technique is simple, secure, and easily reproducible. It neither requires formal repair of injured nail bed structures nor fixation of distal phalangeal fracture and results in uncomplicated reformation of nail plate and uneventful healing of distal phalangeal fractures.", "title": "" }, { "docid": "03c78195651c965219394117cfafcabc", "text": "Cognitive radio technology, a revolutionary communication paradigm that can utilize the existing wireless spectrum resources more efficiently, has been receiving a growing attention in recent years. 
As network users need to adapt their operating parameters to the dynamic environment, who may pursue different goals, traditional spectrum sharing approaches based on a fully cooperative, static, and centralized network environment are no longer applicable. Instead, game theory has been recognized as an important tool in studying, modeling, and analyzing the cognitive interaction process. In this tutorial survey, we introduce the most fundamental concepts of game theory, and explain in detail how these concepts can be leveraged in designing spectrum sharing protocols, with an emphasis on state-of-the-art research contributions in cognitive radio networking. Research challenges and future directions in game theoretic modeling approaches are also outlined. This tutorial survey provides a comprehensive treatment of game theory with important applications in cognitive radio networks, and will aid the design of efficient, self-enforcing, and distributed spectrum sharing schemes in future wireless networks. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "94b061285a0ca52aa0e82adcca392416", "text": "Stochastic convex optimization is a basic and well studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm, is an adaptation of Gradient Descent, which updates according to the direction of the gradients, rather than the gradients themselves. In this paper we analyze a stochastic version of NGD and prove its convergence to a global minimum for a wider class of functions: we require the functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens the concept of unimodality to multidimensions and allows for certain types of saddle points, which are a known hurdle for first-order optimization methods such as gradient descent. Locally-Lipschitz functions are only required to be Lipschitz in a small region around the optimum. This assumption circumvents gradient explosion, which is another known hurdle for gradient descent variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic normalized gradient descent algorithm provably requires a minimal minibatch size.", "title": "" }, { "docid": "3b9adb452f628a3cf5153b80f1977bc4", "text": "Small signal stability analysis is conducted considering grid connected doubly-fed induction generator (DFIG) type. The modeling of a grid connected DFIG system is first set up and the whole model is formulated by a set of differential algebraic equations (DAE). Then, the mathematical model of rotor-side converter is built with decoupled P-Q control techniques to implement stator active and reactive powers control. Based on the abovementioned researches, the small signal stability analysis is carried out to explore and compared the differences between the whole system with the decoupled P-Q controller or not by eigenvalues and participation factors. Finally, numerical results demonstrate the system are stable, especially some conclusions and comments of interest are made. DFIG model; decoupled P-Q control; DAE; small signal analysis;", "title": "" }, { "docid": "ed4050c6934a5a26fc377fea3eefa3bc", "text": "This paper presents the design of the permanent magnetic system for the wall climbing robot with permanent magnetic tracks. 
A proposed wall climbing robot with permanent magnetic adhesion mechanism for inspecting the oil tanks is briefly put forward, including the mechanical system architecture. The permanent magnetic adhesion mechanism and the tracked locomotion mechanism are employed in the robot system. By static and dynamic force analysis of the robot, design parameters about adhesion mechanism are derived. Two types of the structures of the permanent magnetic units are given in the paper. The analysis of those two types of structure is also detailed. Finally, two wall climbing robots equipped with those two different magnetic systems are discussed and the experiments are included in the paper.", "title": "" }, { "docid": "97a458ead2bd94775c7d27a6a47ce8e6", "text": "This article presents an approach to using cognitive models of narrative discourse comprehension to define an explicit computational model of a reader’s comprehension process during reading, predicting aspects of narrative focus and inferencing with precision. This computational model is employed in a narrative discourse generation system to select and sequence content from a partial plan representing story world facts, objects, and events, creating discourses that satisfy comprehension criteria. Cognitive theories of narrative discourse comprehension define explicit models of a reader’s mental state during reading. These cognitive models are created to test hypotheses and explain empirical results about reader comprehension, but do not often contain sufficient precision for implementation on a computer. Therefore, they have not previously been suitable for computational narrative generation. The results of three experiments are presented and discussed, exhibiting empirical support for the approach presented. This work makes a number of contributions that advance the state-of-the-art in narrative discourse generation: a formal model of narrative focus, a formal model of online inferencing in narrative, a method of selecting narrative discourse content to satisfy comprehension criteria, and both implementation and evaluation of these models. .................................................................................................................................................................................", "title": "" }, { "docid": "45bd038dd94d388f945c041e7c04b725", "text": "Entomophagy is widespread among nonhuman primates and is common among many human communities. However, the extent and patterns of entomophagy vary substantially both in humans and nonhuman primates. Here we synthesize the literature to examine why humans and other primates eat insects and what accounts for the variation in the extent to which they do so. Variation in the availability of insects is clearly important, but less understood is the role of nutrients in entomophagy. We apply a multidimensional analytical approach, the right-angled mixture triangle, to published data on the macronutrient compositions of insects to address this. Results showed that insects eaten by humans spanned a wide range of protein-to-fat ratios but were generally nutrient dense, whereas insects with high protein-to-fat ratios were eaten by nonhuman primates. 
Although suggestive, our survey exposes a need for additional, standardized, data.", "title": "" }, { "docid": "ac7156831175817cc9c0e81d2f0bb980", "text": "Social networking sites (SNS) have become a significant component of people’s daily lives and have revolutionized the ways that business is conducted, from product development and marketing to operation and human resource management. However, there have been few systematic studies that ask why people use such systems. To try to determine why, we proposed a model based on uses and gratifications theory. Hypotheses were tested using PLS on data collected from 148 SNS users. We found that user utilitarian (rational and goal-oriented) gratifications of immediate access and coordination, hedonic (pleasure-oriented) gratifications of affection and leisure, and website social presence were positive predictors of SNS usage. While prior research focused on the hedonic use of SNS, we explored the predictive value of utilitarian factors in SNS. Based on these findings, we suggest a need to focus on the SNS functionalities to provide users with both utilitarian and hedonic gratifications, and suggest incorporating appropriate website features to help users evoke a sense of human contact in the SNS context.", "title": "" }, { "docid": "beca7993e709b58788a4513893b14413", "text": "We present a micro-traffic simulation (named “DeepTraffic”) where the perception, control, and planning systems for one of the cars are all handled by a single neural network as part of a model-free, off-policy reinforcement learning process. The primary goal of DeepTraffic is to make the hands-on study of deep reinforcement learning accessible to thousands of students, educators, and researchers in order to inspire and fuel the exploration and evaluation of DQN variants and hyperparameter configurations through large-scale, open competition. This paper investigates the crowd-sourced hyperparameter tuning of the policy network that resulted from the first iteration of the DeepTraffic competition where thousands of participants actively searched through the hyperparameter space with the objective of their neural network submission to make it onto the top-10 leaderboard.", "title": "" }, { "docid": "0bbc77e1269a49be659a777c390d408a", "text": "With the evolution of Telco systems towards 5G, new requirements emerge for delivering services. Network services are expected to be designed to allow greater flexibility. In order to cope with the new users' requirements, Telcos should rethink their complex and monolithic network architectures into more agile architectures. Adoption of NFV as well as micro-services patterns are opportunities promising such an evolution. However, to gain in flexibility, it is crucial to satisfy structural requirements for the design of VNFs as services. We present in this paper an approach for designing VNF-asa-Service. With this approach, we define design requirements for the service architecture and the service logic of VNFs. As Telcos have adopted IMS as the de facto platform for service delivery in 3G and even 4G systems, it is interesting to study its evolution for 5G towards a microservices-based architecture with an optimal design. Therefore, we consider IMS as a case of study to illustrate the proposed approach. We present new functional entities for IMS-as-a-Service through a functional decomposition of legacy network functions. We have developed and implemented IMS-as-a-Service with respect to the proposed requirements. 
We consider a service scenario where we focus on authentication and authorization procedures. We evaluate the involved microservices comparing to the state-of-the-art. Finally, we discuss our results and highlight the advantages of our approach.", "title": "" }, { "docid": "f0ae0c563ce34478dae8a2315624d6d2", "text": "Nanocrystalline cellulose (NCC) is an emerging renewable nanomaterial that holds promise in many different applications, such as in personal care, chemicals, foods, pharmaceuticals, etc. By appropriate modification of NCC, various functional nanomaterials with outstanding properties, or significantly improved physical, chemical, biological, as well as electronic properties can be developed. The nanoparticles are stabilised in aqueous suspension by negative charges on the surface, which are produced during the acid hydrolysis process. NCC suspensions can form a chiral nematic ordered phase beyond a critical concentration, i.e. NCC suspensions transform from an isotropic to an anisotropic chiral nematic liquid crystalline phase. Due to its nanoscale dimension and intrinsic physicochemical properties, NCC is a promising renewable biomaterial that can be used as a reinforcing component in high performance nanocomposites. Many new nanocomposite materials with attractive properties were obtained by the physical incorporation of NCC into a natural or synthetic polymeric matrix. Simple chemical modification on NCC surface can improve its dispersability in different solvents and expand its utilisation in nano-related applications, such as drug delivery, protein immobilisation, and inorganic reaction template. This review paper provides an overview on this emerging nanomaterial, focusing on the surface modification, properties and applications of NCC.", "title": "" }, { "docid": "6f1da2d00f63cae036db04fd272b8ef2", "text": "Female genital cosmetic surgery is surgery performed on a woman within a normal range of variation of human anatomy. The issues are heightened by a lack of long-term and substantive evidence-based literature, conflict of interest from personal financial gain through performing these procedures, and confusion around macroethical and microethical domains. It is a source of conflict and controversy globally because the benefit and harm of offering these procedures raise concerns about harmful cultural views, education, and social vulnerability of women with regard to both ethics and human rights. The rights issues of who is defining normal female anatomy and function, as well as the economic vulnerability of women globally, bequeath the profession a greater responsibility to ensure that there is adequate health and general education-not just among patients but broadly in society-that there is neither limitation nor interference in the decision being made, and that there are no psychological disorders that could be influencing such choices.", "title": "" }, { "docid": "566412870c83e5e44fabc50487b9d994", "text": "The influence of technology in the field of gambling innovation continues to grow at a rapid pace. After a brief overview of gambling technologies and deregulation issues, this review examines the impact of technology on gambling by highlighting salient factors in the rise of Internet gambling (i.e., accessibility, affordability, anonymity, convenience, escape immersion/dissociation, disinhibition, event frequency, asociability, interactivity, and simulation). 
The paper also examines other factors in relation to Internet gambling including the relationship between Internet addiction and Internet gambling addiction. The paper ends by overviewing some of the social issues surrounding Internet gambling (i.e., protection of the vulnerable, Internet gambling in the workplace, electronic cash, and unscrupulous operators). Recommendations for Internet gambling operators are also provided.", "title": "" }, { "docid": "7457c09c1068ba1397f468879bc3b0d1", "text": "Genome editing has potential for the targeted correction of germline mutations. Here we describe the correction of the heterozygous MYBPC3 mutation in human preimplantation embryos with precise CRISPR–Cas9-based targeting accuracy and high homology-directed repair efficiency by activating an endogenous, germline-specific DNA repair response. Induced double-strand breaks (DSBs) at the mutant paternal allele were predominantly repaired using the homologous wild-type maternal gene instead of a synthetic DNA template. By modulating the cell cycle stage at which the DSB was induced, we were able to avoid mosaicism in cleaving embryos and achieve a high yield of homozygous embryos carrying the wild-type MYBPC3 gene without evidence of off-target mutations. The efficiency, accuracy and safety of the approach presented suggest that it has potential to be used for the correction of heritable mutations in human embryos by complementing preimplantation genetic diagnosis. However, much remains to be considered before clinical applications, including the reproducibility of the technique with other heterozygous mutations.", "title": "" }, { "docid": "bc90b1e4d456ca75b38105cc90d7d51d", "text": "Choosing a cloud storage system and specific operations for reading and writing data requires developers to make decisions that trade off consistency for availability and performance. Applications may be locked into a choice that is not ideal for all clients and changing conditions. Pileus is a replicated key-value store that allows applications to declare their consistency and latency priorities via consistency-based service level agreements (SLAs). It dynamically selects which servers to access in order to deliver the best service given the current configuration and system conditions. In application-specific SLAs, developers can request both strong and eventual consistency as well as intermediate guarantees such as read-my-writes. Evaluations running on a worldwide test bed with geo-replicated data show that the system adapts to varying client-server latencies to provide service that matches or exceeds the best static consistency choice and server selection scheme.", "title": "" }, { "docid": "8be957572c846ddda107d8343094401b", "text": "Corporate accounting statements provide financial markets, and tax services with valuable data on the economic health of companies, although financial indices are only focused on a very limited part of the activity within the company. Useful tools in the field of processing extended financial and accounting data are the methods of Artificial Intelligence, aiming the efficient delivery of financial information to tax services, investors, and financial markets where lucrative portfolios can be created. Key-words: Financial Indices, Artificial Intelligence, Data Mining, Neural Networks, Genetic Algorithms", "title": "" }, { "docid": "282a6b06fb018fb7e2ec223f74345944", "text": "The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. 
Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA.", "title": "" }, { "docid": "e18b4c013b36e198349185be70396ea0", "text": "In 2004 and 2005, Coca-Cola Enterprises (CCE)—the world’s largest bottler and distributor of Coca-Cola products—implemented ORTEC’s vehicle-routing software. Today, over 300 CCE dispatchers use this software daily to plan the routes of approximately 10,000 trucks. In addition to handling nonstandard constraints, the implementation is notable for its progressive transition from the prior business practice. CCE has realized an annual cost saving of $45 million and major improvements in customer service. This approach has been so successful that Coca-Cola has extended it beyond CCE to other Coca-Cola bottling companies and beer distributors.", "title": "" } ]
scidocsrr
d6f6ef29d39924604fb09596eb6aeb37
An extension of the technology acceptance model in an ERP implementation environment
[ { "docid": "a4197ab8a70142ac331599c506996bc9", "text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.", "title": "" }, { "docid": "bd13f54cd08fe2626fe8de4edce49197", "text": "Ease of use and usefulness are believed to be fundamental in determining the acceptance and use of various, corporate ITs. These beliefs, however, may not explain the user's behavior toward newly emerging ITs, such as the World-Wide-Web (WWW). In this study, we introduce playfulness as a new factor that reflects the user's intrinsic belief in WWW acceptance. Using it as an intrinsic motivation factor, we extend and empirically validate the Technology Acceptance Model (TAM) for the WWW context. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" } ]
[ { "docid": "37b97f66230fb292f585d0413af48986", "text": "In this paper, we notice that sparse and low-rank structures arise in the context of many collaborative filtering applications where the underlying graphs have block-diagonal adjacency matrices. Therefore, we propose a novel Sparse and Low-Rank Linear Method (Lor SLIM) to capture such structures and apply this model to improve the accuracy of the Top-N recommendation. Precisely, a sparse and low-rank aggregation coefficient matrix W is learned from Lor SLIM by solving an l1-norm and nuclear norm regularized optimization problem. We also develop an efficient alternating augmented Lagrangian method (ADMM) to solve the optimization problem. A comprehensive set of experiments is conducted to evaluate the performance of Lor SLIM. The experimental results demonstrate the superior recommendation quality of the proposed algorithm in comparison with current state-of-the-art methods.", "title": "" }, { "docid": "25f0871346c370db4b26aecd08a9d75e", "text": "This review presents a comprehensive discussion of the key technical issues in woody biomass pretreatment: barriers to efficient cellulose saccharification, pretreatment energy consumption, in particular energy consumed for wood-size reduction, and criteria to evaluate the performance of a pretreatment. A post-chemical pretreatment size-reduction approach is proposed to significantly reduce mechanical energy consumption. Because the ultimate goal of biofuel production is net energy output, a concept of pretreatment energy efficiency (kg/MJ) based on the total sugar recovery (kg/kg wood) divided by the energy consumption in pretreatment (MJ/kg wood) is defined. It is then used to evaluate the performances of three of the most promising pretreatment technologies: steam explosion, organosolv, and sulfite pretreatment to overcome lignocelluloses recalcitrance (SPORL) for softwood pretreatment. The present study found that SPORL is the most efficient process and produced highest sugar yield. Other important issues, such as the effects of lignin on substrate saccharification and the effects of pretreatment on high-value lignin utilization in woody biomass pretreatment, are also discussed.", "title": "" }, { "docid": "aeed0f9595c9b40bb03c95d4624dd21c", "text": "Most research in primary and secondary computing education has focused on understanding learners within formal classroom communities, leaving aside the growing number of promising informal online programming communities where young learners contribute, comment, and collaborate on programs. In this paper, we examined trends in computational participation in Scratch, an online community with over 1 million registered youth designers primarily 11-18 years of age. Drawing on a random sample of 5,000 youth programmers and their activities over three months in early 2012, we examined the quantity of programming concepts used in projects in relation to level of participation, gender, and account age of Scratch programmers. Latent class analyses revealed four unique groups of programmers. While there was no significant link between level of online participation, ranging from low to high, and level of programming sophistication, the exception was a small group of highly engaged users who were most likely to use more complex programming concepts. Groups who only used few of the more sophisticated programming concepts, such as Booleans, variables and operators, were identified as Scratch users new to the site and girls. 
In the discussion we address the challenges of analyzing young learners' programming in informal online communities and opportunities for designing more equitable computational participation.", "title": "" }, { "docid": "9f21af3bc0955dcd9a05898f943f54ad", "text": "Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intraand inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.", "title": "" }, { "docid": "981634bc9b96eba12fd07e8960d02c2d", "text": "This paper presents the existing legal frameworks, professional guidelines and other documents related to the conditions and extent of the disclosure of genetic information by physicians to at-risk family members. Although the duty of a physician regarding disclosure of genetic information to a patient’s relatives has only been addressed by few legal cases, courts have found such a duty under some circumstances. Generally, disclosure should not be permitted without the patient’s consent. Yet, due to the nature of genetic information, exceptions are foreseen, where treatment and prevention are available. This duty to warn a patient’s relative is also supported by some professional and policy organizations that have addressed the issue. Practice guidelines with a communication and intervention plan are emerging, providing physicians with tools that allow them to assist patients in their communication with relatives without jeopardizing their professional liability. Since guidelines aim to improve the appropriateness of medical practice and consequently to better serve the interests of patients, it is important to determine to what degree they document the ‘best practice’ standards. Such an analysis is an essential step to evaluate the different approaches permitting the disclosure of genetic information to family members.", "title": "" }, { "docid": "6897b2842b041e75278aec7bc03ec870", "text": "PURPOSE\nThe optimal treatment of systemic sclerosis (SSc) is a challenge because the pathogenesis of SSc is unclear and it is an uncommon and clinically heterogeneous disease affecting multiple organ systems. 
The aim of the European League Against Rheumatism (EULAR) Scleroderma Trials and Research group (EUSTAR) was to develop evidence-based, consensus-derived recommendations for the treatment of SSc.\n\n\nMETHODS\nTo obtain and maintain a high level of intrinsic quality and comparability of this approach, EULAR standard operating procedures were followed. The task force comprised 18 SSc experts from Europe, the USA and Japan, two SSc patients and three fellows for literature research. The preliminary set of research questions concerning SSc treatment was provided by 74 EUSTAR centres.\n\n\nRESULTS\nBased on discussion of the clinical research evidence from published literature, and combining this with current expert opinion and clinical experience, 14 recommendations for the treatment of SSc were formulated. The final set includes the following recommendations: three on SSc-related digital vasculopathy (Raynaud's phenomenon and ulcers); four on SSc-related pulmonary arterial hypertension; three on SSc-related gastrointestinal involvement; two on scleroderma renal crisis; one on SSc-related interstitial lung disease and one on skin involvement. Experts also formulated several questions for a future research agenda.\n\n\nCONCLUSIONS\nEvidence-based, consensus-derived recommendations are useful for rheumatologists to help guide treatment for patients with SSc. These recommendations may also help to define directions for future clinical research in SSc.", "title": "" }, { "docid": "2c2574e1eb29ad45bedf346417c85e2d", "text": "Technology has shown great promise in providing access to textual information for visually impaired people. Optical Braille Recognition (OBR) allows people with visual impairments to read volumes of typewritten documents with the help of flatbed scanners and OBR software. This project looks at developing a system to recognize an image of embossed Arabic Braille and then convert it to text. It particularly aims to build fully functional Optical Arabic Braille Recognition system. It has two main tasks, first is to recognize printed Braille cells, and second is to convert them to regular text. Converting Braille to text is not simply a one to one mapping, because one cell may represent one symbol (alphabet letter, digit, or special character), two or more symbols, or part of a symbol. Moreover, multiple cells may represent a single symbol.", "title": "" }, { "docid": "557694b6db3f20adc700876d75ad7720", "text": "Unseen Action Recognition (UAR) aims to recognise novel action categories without training examples. While previous methods focus on inner-dataset seen/unseen splits, this paper proposes a pipeline using a large-scale training source to achieve a Universal Representation (UR) that can generalise to a more realistic Cross-Dataset UAR (CDUAR) scenario. We first address UAR as a Generalised Multiple-Instance Learning (GMIL) problem and discover 'building-blocks' from the large-scale ActivityNet dataset using distribution kernels. Essential visual and semantic components are preserved in a shared space to achieve the UR that can efficiently generalise to new datasets. Predicted UR exemplars can be improved by a simple semantic adaptation, and then an unseen action can be directly recognised using UR during the test. 
Without further training, extensive experiments manifest significant improvements over the UCF101 and HMDB51 benchmarks.", "title": "" }, { "docid": "3d401d8d3e6968d847795ccff4646b43", "text": "In spite of growing frequency and sophistication of attacks two factor authentication schemes have seen very limited adoption in the US, and passwords remain the single factor of authentication for most bank and brokerage accounts. Clearly the cost benefit analysis is not as strongly in favor of two factor as we might imagine. Upgrading from passwords to a two factor authentication system usually involves a large engineering effort, a discontinuity of user experience and a hard key management problem. In this paper we describe a system to convert a legacy password authentication server into a two factor system. The existing password system is untouched, but is cascaded with a new server that verifies possession of a smartphone device. No alteration, patching or updates to the legacy system is necessary. There are now two alternative authentication paths: one using passwords alone, and a second using passwords and possession of the trusted device. The bank can leave the password authentication path available while users migrate to the two factor scheme. Once migration is complete the password-only path can be severed. We have implemented the system and carried out two factor authentication against real accounts at several major banks.", "title": "" }, { "docid": "ca509048385b8cf28bd7b89c685f21b2", "text": "Recent studies on knowledge base completion, the task of recovering missing relationships based on recorded relations, demonstrate the importance of learning embeddings from multi-step relations. However, due to the size of knowledge bases, learning multi-step relations directly on top of observed instances could be costly. In this paper, we propose Implicit ReasoNets (IRNs), which is designed to perform large-scale inference implicitly through a search controller and shared memory. Unlike previous work, IRNs use training data to learn to perform multi-step inference through the shared memory, which is also jointly updated during training. While the inference procedure is not operating on top of observed instances for IRNs, our proposed model outperforms all previous approaches on the popular FB15k benchmark by more than 5.7%.", "title": "" }, { "docid": "16a0329d2b7a6995a48bdef0e845658a", "text": "Digital market has never been so unstable due to more and more demanding users and new disruptive competitors. CEOs from most of industries investigate digitalization opportunities. Through a Systematic Literature Review, we found that digital transformation is more than just a technological shift. According to this study, these transformations have had an impact on the business models, the operational processes and the end-users experience. Considering the richness of this topic, we had proposed a research agenda of digital transformation in a managerial perspective.", "title": "" }, { "docid": "05a5e3849c9fca4d788aa0210d8f7294", "text": "The growth of mobile phone users has lead to a dramatic increasing of SMS spam messages. Recent reports clearly indicate that the volume of mobile phone spam is dramatically increasing year by year. In practice, fighting such plague is difficult by several factors, including the lower rate of SMS that has allowed many users and service providers to ignore the issue, and the limited availability of mobile phone spam-filtering software. 
Probably, one of the major concerns in academic settings is the scarcity of public SMS spam datasets, that are sorely needed for validation and comparison of different classifiers. Moreover, traditional content-based filters may have their performance seriously degraded since SMS messages are fairly short and their text is generally rife with idioms and abbreviations. In this paper, we present details about a new real, public and non-encoded SMS spam collection that is the largest one as far as we know. Moreover, we offer a comprehensive analysis of such dataset in order to ensure that there are no duplicated messages coming from previously existing datasets, since it may ease the task of learning SMS spam classifiers and could compromise the evaluation of methods. Additionally, we compare the performance achieved by several established machine learning techniques. In summary, the results indicate that the procedure followed to build the collection does not lead to near-duplicates and, regarding the classifiers, the Support Vector Machines outperforms other evaluated techniques and, hence, it can be used as a good baseline for further comparison. Keywords—Mobile phone spam; SMS spam; spam filtering; text categorization; classification.", "title": "" }, { "docid": "bc2bc8b2d9db3eb14e126c627248a66a", "text": "With the growing complexity of today's software applications injunction with the increasing competitive pressure has pushed the quality assurance of developed software towards new heights. Software testing is an inevitable part of the Software Development Lifecycle, and keeping in line with its criticality in the pre and post development process makes it something that should be catered with enhanced and efficient methodologies and techniques. This paper aims to discuss the existing as well as improved testing techniques for the better quality assurance purposes.", "title": "" }, { "docid": "11806624e22ec2b72cd692755e8b2764", "text": "The improvement of file access performance is a great challenge in real-time cloud services. In this paper, we analyze preconditions of dealing with this problem considering the aspects of requirements, hardware, software, and network environments in the cloud. Then we describe the design and implementation of a novel distributed layered cache system built on the top of the Hadoop Distributed File System which is named HDFS-based Distributed Cache System (HDCache). The cache system consists of a client library and multiple cache services. The cache services are designed with three access layers an in-memory cache, a snapshot of the local disk, and the actual disk view as provided by HDFS. The files loading from HDFS are cached in the shared memory which can be directly accessed by a client library. Multiple applications integrated with a client library can access a cache service simultaneously. Cache services are organized in the P2P style using a distributed hash table. Every file cached has three replicas in different cache service nodes in order to improve robustness and alleviates the workload. Experimental results show that the novel cache system can store files with a wide range in their sizes and has the access performance in a millisecond level in highly concurrent environments.", "title": "" }, { "docid": "09baf9c55e7ae35bdcf88742ecdc01d5", "text": "This paper presents the experimental evaluation of a Bluetooth-based positioning system. The method has been implemented in a Bluetooth-capable handheld device. 
Empirical tests of the developed considered positioning system have been realized in different indoor scenarios. The range estimation of the positioning system is based on an approximation of the relation between the RSSI (Radio Signal Strength Indicator) and the associated distance between sender and receiver. The actual location estimation is carried out by using the triangulation method. The implementation of the positioning system in a PDA (Personal Digital Assistant) has been realized by using the Software “Microsoft eMbedded Visual C++ Version 3.0”.", "title": "" }, { "docid": "6c829f1d93b0b943065bafab433e61b9", "text": "recognition by using the Mel-Scale Frequency Cepstral Coefficients (MFCC) extracted from speech signal of spoken words. Principal Component Analysis is employed as the supplement in feature dimensional reduction state, prior to training and testing speech samples via Maximum Likelihood Classifier (ML) and Support Vector Machine (SVM). Based on experimental database of total 40 times of speaking words collected under acoustically controlled room, the sixteen-ordered MFCC extracts have shown the improvement in recognition rates significantly when training the SVM with more MFCC samples by randomly selected from database, compared with the ML.", "title": "" }, { "docid": "bfe62c8e438ff5ec697203295e658450", "text": "Using the qualitative participatory action methodology, collective memory work, this study explored how transgender, queer, and questioning (TQQ) youth make meaning of their sexual orientation and gender identity through high school experiences. Researchers identified three major conceptual but overlapping themes from the data generated in the transgender, queer, and questioning youth focus group: a need for resilience, you should be able to be safe, and this is what action looks like! The researchers discuss how as a research product, a documentary can effectively \"capture voices\" of participants, making research accessible and attractive to parents, practitioners, policy makers, and participants.", "title": "" }, { "docid": "cbaff0ba24a648e8228a7663e3d32e97", "text": "Microservice architecture has started a new trend for application development/deployment in cloud due to its flexibility, scalability, manageability and performance. Various microservice platforms have emerged to facilitate the whole software engineering cycle for cloud applications from design, development, test, deployment to maintenance. In this paper, we propose a performance analytical model and validate it by experiments to study the provisioning performance of microservice platforms. We design and develop a microservice platform on Amazon EC2 cloud using Docker technology family to identify important elements contributing to the performance of microservice platforms. We leverage the results and insights from experiments to build a tractable analytical performance model that can be used to perform what-if analysis and capacity planning in a systematic manner for large scale microservices with minimum amount of time and cost.", "title": "" }, { "docid": "241a1589619c2db686675327cab1e8da", "text": "This paper describes a simple computational model of joint torque and impedance in human arm movements that can be used to simulate three-dimensional movements of the (redundant) arm or leg and to design the control of robots and human-machine interfaces. 
This model, based on recent physiological findings, assumes that (1) the central nervous system learns the force and impedance to perform a task successfully in a given stable or unstable dynamic environment and (2) stiffness is linearly related to the magnitude of the joint torque and increased to compensate for environment instability. Comparison with existing data shows that this simple model is able to predict impedance geometry well.", "title": "" }, { "docid": "8390fd7e559832eea895fabeb48c3549", "text": "An algorithm is presented to perform connected component labeling of images of arbitrary dimension that are represented by a linear bintree. The bintree is a generalization of the quadtree data structure that enables dealing with images of arbitrary dimension. The linear bintree is a pointerless representation. The algorithm uses an active border which is represented by linked lists instead of arrays. This results in a significant reduction in the space requirements, thereby making it feasible to process three- and higher-dimensional images. Analysis of the execution time of the algorithm shows almost linear behavior with respect to the number of leaf nodes in the image, and empirical tests are in agreement. The algorithm can be modified easily to compute a (d − 1)-dimensional boundary measure (e.g., perimeter in two dimensions and surface area in three dimensions) with linear", "title": "" } ]
scidocsrr
fd01c6a98a6b9a5cbdc61ae7fc963fa3
Heterogeneous Vehicular Networking: A Survey on Architecture, Challenges, and Solutions
[ { "docid": "4d66a85651a78bfd4f7aba290c21f9a7", "text": "Mobile carrier networks follow an architecture where network elements and their interfaces are defined in detail through standardization, but provide limited ways to develop new network features once deployed. In recent years we have witnessed rapid growth in over-the-top mobile applications and a 10-fold increase in subscriber traffic while ground-breaking network innovation took a back seat. We argue that carrier networks can benefit from advances in computer science and pertinent technology trends by incorporating a new way of thinking in their current toolbox. This article introduces a blueprint for implementing current as well as future network architectures based on a software-defined networking approach. Our architecture enables operators to capitalize on a flow-based forwarding model and fosters a rich environment for innovation inside the mobile network. In this article, we validate this concept in our wireless network research laboratory, demonstrate the programmability and flexibility of the architecture, and provide implementation and experimentation details.", "title": "" } ]
[ { "docid": "e31749775e64d5407a090f5fd0a275cf", "text": "This paper focuses on presenting a human-in-the-loop reinforcement learning theory framework and foreseeing its application to driving decision making. Currently, the technologies in human-vehicle collaborative driving face great challenges, and do not consider the Human-in-the-loop learning framework and Driving Decision-Maker optimization under the complex road conditions. The main content of this paper aimed at presenting a study framework as follows: (1) the basic theory and model of the hybrid reinforcement learning; (2) hybrid reinforcement learning algorithm for human drivers; (3)hybrid reinforcement learning algorithm for autopilot; (4) Driving decision-maker verification platform. This paper aims at setting up the human-machine hybrid reinforcement learning theory framework and foreseeing its solutions to two kinds of typical difficulties about human-machine collaborative Driving Decision-Maker, which provides the basic theory and key technologies for the future of intelligent driving. The paper serves as a potential guideline for the study of human-in-the-loop reinforcement learning.", "title": "" }, { "docid": "88c5bcaa173584042939f9b879aa5b3d", "text": "We present the old-but–new problem of data quality from a statistical perspective, in part with the goal of attracting more statisticians, especially academics, to become engaged in research on a rich set of exciting challenges. The data quality landscape is described, and its research foundations in computer science, total quality management and statistics are reviewed. Two case studies based on an EDA approach to data quality are used to motivate a set of research challenges for statistics that span theory, methodology and software tools.", "title": "" }, { "docid": "0cfda368edafe21e538f2c1d7ed75056", "text": "This paper presents high performance speaker identification and verification systems based on Gaussian mixture speaker models: robust, statistically based representations of speaker identity. The identification system is a maximum likelihood classifier and the verification system is a likelihood ratio hypothesis tester using background speaker normalization. The systems are evaluated on four publically available speech databases: TIMIT, NTIMIT, Switchboard and YOHO. The different levels of degradations and variabilities found in these databases allow the examination of system performance for different task domains. Constraints on the speech range from vocabulary-dependent to extemporaneous and speech quality varies from near-ideal, clean speech to noisy, telephone speech. Closed set identification accuracies on the 630 speaker TIMIT and NTIMIT databases were 99.5% and 60.7%, respectively. On a 113 speaker population from the Switchboard database the identification accuracy was 82.8%. Global threshold equal error rates of 0.24%, 7.19%, 5.15% and 0.51% were obtained in verification experiments on the TIMIT, NTIMIT, Switchboard and YOHO databases, respectively.", "title": "" }, { "docid": "09b94dbd60ec10aa992d67404f9687e9", "text": "It is increasingly acknowledged that many threats to an organisation’s computer systems can be attributed to the behaviour of computer users. To quantify these human-based information security vulnerabilities, we are developing the Human Aspects of Information Security Questionnaire (HAIS-Q). The aim of this paper was twofold. 
The first aim was to outline the conceptual development of the HAIS-Q, including validity and reliability testing. The second aim was to examine the relationship between knowledge of policy and procedures, attitude towards policy and procedures and behaviour when using a work computer. Results from 500 Australian employees indicate that knowledge of policy and procedures had a stronger influence on attitude towards policy and procedure than selfreported behaviour. This finding suggests that training and education will be more effective if it outlines not only what is expected (knowledge) but also provides an understanding of why this is important (attitude). Plans for future research to further develop and test the HAIS-Q are outlined. Crown Copyright a 2014 Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cf3354d0a85ea1fa2431057bdf6b6d0f", "text": "Increasingly, scientific computing applications must accumulate and manage massive datasets, as well as perform sophisticated computations over these data. Such applications call for data-intensive scalable computer (DISC) systems, which differ in fundamental ways from existing high-performance computing systems.", "title": "" }, { "docid": "741dbabfa94b787f31bccf12471724a4", "text": "In this paper is proposed a Takagi-Sugeno Fuzzy controller (TSF) applied to the direct torque control scheme with space vector modulation. In conventional DTC-SVM scheme, two PI controllers are used to generate the reference stator voltage vector. To improve the drawback of this conventional DTC-SVM scheme is proposed the TSF controller to substitute both PI controllers. The proposed controller calculates the reference quadrature components of the stator voltage vector. The rule base for the proposed controller is defined in function of the stator flux error and the electromagnetic torque error using trapezoidal and triangular membership functions. Constant switching frequency and low torque ripple are obtained using space vector modulation technique. Performance of the proposed DTC-SVM with TSF controller is analyzed in terms of several performance measures such as rise time, settling time and torque ripple considering different operating conditions. The simulation results shown that the proposed scheme ensure fast torque response and low torque ripple validating the proposed scheme.", "title": "" }, { "docid": "d1d1b85b0675c59f01c61c6f144ee8a7", "text": "We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein’s method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. 
In high dimensions and where model structure may be exploited, our goodness of fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.", "title": "" }, { "docid": "73080f337ae7ec5ef0639aec374624de", "text": "We propose a framework for the robust and fully-automatic segmentation of magnetic resonance (MR) brain images called \"Multi-Atlas Label Propagation with Expectation-Maximisation based refinement\" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), highly performant label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to be applicable for the segmentation of brain images with gross changes in anatomy. We propose to account for consistent registration errors by relaxing anatomical priors obtained by multi-atlas propagation and a weighting scheme to locally combine anatomical atlas priors and intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive for the segmentation of MR brain scans of healthy adults when compared to state-of-the-art automatic labelling techniques. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality if no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to stratify TBI patients with favourable outcomes from non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to differentiate subjects with the presence of a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected. Their involvement predicts TBI disease progression.", "title": "" }, { "docid": "727add0c0e44d0044d7f58b3633160d2", "text": "Case II: Deterministic transitions, continuous state Case III: “Mildly” stochastic trans., finite state: P(s,a,s’) ≥ 1 δ Case IV: Bounded-noise stochastic transitions, continuous state: st+1 = T(st, at) + wt , ||wt|| ≤ ∆ Planning and Learning in Environments with Delayed Feedback Thomas J. Walsh, Ali Nouri, Lihong Li, Michael L. Littman Rutgers Laboratory for Real Life Reinforcement Learning Computer Science Department, Rutgers University, Piscataway NJ", "title": "" }, { "docid": "5f3dc141b69eb50e17bdab68a2195e13", "text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. 
The rapid growth in the market and the level of competition in the global economy transformed procurement into a strategic issue, which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria is assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is confirmed with the results obtained.", "title": "" }, { "docid": "90a7849b9e71df0cb9c4b77c369592db", "text": "Social networking and microblogging services such as Twitter provide a continuous source of data from which useful information can be extracted. The detection and characterization of bursty words play an important role in processing such data, as bursty words might hint to events or trending topics of social importance upon which actions can be triggered. While there are several approaches to extract bursty words from the content of messages, there is only little work that deals with the dynamics of continuous streams of messages, in particular messages that are geo-tagged.\n In this paper, we present a framework to identify bursty words from Twitter text streams and to describe such words in terms of their spatio-temporal characteristics. Using a time-aware word usage baseline, a sliding window approach over incoming tweets is proposed to identify words that satisfy some burstiness threshold. For these words then a time-varying, spatial signature is determined, which primarily relies on geo-tagged tweets. In order to deal with the noise and the sparsity of geo-tagged tweets, we propose a novel graph-based regularization procedure that uses spatial cooccurrences of bursty words and allows for computing sound spatial signatures. We evaluate the functionality of our online processing framework using two real-world Twitter datasets. The results show that our framework can efficiently and reliably extract bursty words and describe their spatio-temporal evolution over time.", "title": "" }, { "docid": "e2a9bb49fd88071631986874ea197bc1", "text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.", "title": "" }, { "docid": "866b81f6d74164b9ef625a529b20a7b3", "text": "Millions of people around the world are tackling one of the hardest problems in computer science—without even knowing it. 
The logic game Sudoku is a miniature version of a longstanding mathematical challenge, and it entices both puzzlers, who see it as an enjoyable plaything, and researchers, who see it as a laboratory for algorithm design. Sudoku has become a worldwide puzzle craze within the past year. Previously known primarily in Japan, it now graces newspapers, Web sites, and best-selling books in dozens of countries [see photo, “Number Fad”]. A puzzle consists of a 9-by-9 grid made up of nine 3-by-3 subgrids. Digits appear in some squares, and based on these starting clues, a player completes the grid so that each row, column, and subgrid contains the digits 1 through 9 exactly once. An easy puzzle requires only simple logical techniques—if a subgrid needs an 8, say, and two of the columns running through it already hold an 8, then the subgrid’s 8 must go in the remaining column. A hard puzzle requires more complex pattern recognition skills; for instance, if a player computes all possible digits for each cell in a subgrid and notices that two cells have exactly the same two choices, those two digits can be eliminated from all other cells in the subgrid. No matter the difficulty level, however, a dedicated puzzler can eventually crack a 9-by-9 Sudoku game. A computer solves a 9-by-9 Sudoku within a second by using logical tricks that are similar to the ones humans use, but finishes much faster [see puzzle, “Challenge”]. On a large scale, however, such shortcuts are not powerful enough, and checking the explosive number of combinations becomes impossible, even for the world’s fastest computers. And no one knows of an algorithm that’s guaranteed to find a solution without trying out a huge number of combinations. This places Sudoku in an infamously difficult class, called NP-complete, that includes problems of great practical importance, such as scheduling, network routing, and gene sequencing. “The question of whether there exists an efficient algorithm for solving these problems is now on just about anyone’s list of the Top 10 unsolved problems in science and mathematics in the world,” says Richard Korf, a computer scientist at the University of California at Los Angeles. The challenge is known as P = NP, where, roughly speaking, P stands for tasks that can be solved efficiently, and NP stands for tasks whose solution can be verified efficiently. (For example, it is easy to verify whether a complete Sudoku is correctly filled in, even though the puzzle may take quite a lot of time to solve.) As a member of the NP-complete subset, NUMBER FAD: A reader examines a Sudoku puzzle in The Independent, London, last May.", "title": "" }, { "docid": "d7cd6978cfb8ef53567c3aab3c71d274", "text": "As computing technology increasingly becomes part of our daily activities, we are required to consider what is the future of computing and how will it change our lives? To address this question, we are interested in developing technologies that would allow for ubiquitous sensing and recognition of daily activities in an environment. Such environments will be aware of the activities performed within it and will be capable of supporting these activities without increasing the cognitive load on the users in the space. Toward this end, we are prototyping different types of smart and aware spaces, each supporting a different part of our daily life and each varying in function and detail. Our most significant effort in this direction is the building of the \"Aware Home\" at Georgia Tech. 
In this article, we outline the research issues we are pursuing toward the building of such smart and aware environments , and especially the Aware Home. We are interested in developing an infrastructure for ubiquitous sensing and recognition of activities in environments. Such sensing will be transparent to everyday activities, while providing the embedded computing infrastructure with an awareness of what is happening in a space. We expect such a ubiquitous sensing infrastructure to support different environments, with varying needs and complexities. These sensors can be mobile or static, configuring their sensing to suit the task at hand while sharing relevant information with other available sensors. This config-urable sensor-net will provide high-end sensory data about the status of the environment, its inhabitants, and the ongoing activities in the environment. To achieve this contextual knowledge of the space that is being sensed and to model the environment and the people within it requires methods for both low-level and high-level signal processing and interpretation. We are also building such signal-understanding methods to process the sensory data captured from these sensors and to model and recognize the space and activities in them. A significant aspect of building an aware environment is to explore easily accessible and more pervasive computing services than are available via traditional desktop computing. Computing and sensing in such environments must be reliable , persistent (always remains on), easy to interact with, and transparent (the user does not know it is there and does not need to search for it). The environment must be aware of the users it is interacting with and be capable of unencumbered and intelligent interaction. …", "title": "" }, { "docid": "d07b385e9732a273824897671b119196", "text": "Motivation: Progress in machine learning techniques has led to the development of various techniques well suited to online estimation and rapid aggregation of information. Theoretical models of marketmaking have led to price-setting equations for which solutions cannot be achieved in practice, whereas empirical work on algorithms for market-making has so far focused on sets of heuristics and rules that lack theoretical justification. We are developing algorithms that are theoretically justified by results in finance, and at the same time flexible enough to be easily extended by incorporating modules for dealing with considerations like portfolio risk and competition from other market-makers.", "title": "" }, { "docid": "c63c94a2c6cedb8f816edd3221b23261", "text": "THERAPEUTIC CHALLENGE Nodular scabies (NS) can involve persistent intensely pruriginous nodules for months after specific treatment of scabies. This condition results from a hypersensitivity reaction to retained mite parts or antigens, which is commonly treated with topical or intralesional steroids. However, the response to this treatment is unsatisfactory in certain patients. The scrotum and the shaft of the penis are frequently affected anatomic locations. High-potency topical steroids, tacrolimus, short-course oral prednisolone (Fig 1, A and B), and even intralesional triamcinolone injections might show unsatisfactory responses in certain patients, and nodules often relapse or persist.", "title": "" }, { "docid": "02138b6fea0d80a6c365cafcc071e511", "text": "Quantum scrambling is the dispersal of local information into many-body quantum entanglements and correlations distributed throughout an entire system. 
This concept accompanies the dynamics of thermalization in closed quantum systems, and has recently emerged as a powerful tool for characterizing chaos in black holes1–4. However, the direct experimental measurement of quantum scrambling is difficult, owing to the exponential complexity of ergodic many-body entangled states. One way to characterize quantum scrambling is to measure an out-of-time-ordered correlation function (OTOC); however, because scrambling leads to their decay, OTOCs do not generally discriminate between quantum scrambling and ordinary decoherence. Here we implement a quantum circuit that provides a positive test for the scrambling features of a given unitary process5,6. This approach conditionally teleports a quantum state through the circuit, providing an unambiguous test for whether scrambling has occurred, while simultaneously measuring an OTOC. We engineer quantum scrambling processes through a tunable three-qubit unitary operation as part of a seven-qubit circuit on an ion trap quantum computer. Measured teleportation fidelities are typically about 80 per cent, and enable us to experimentally bound the scrambling-induced decay of the corresponding OTOC measurement. A quantum circuit in an ion-trap quantum computer provides a positive test for the scrambling features of a given unitary process.", "title": "" }, { "docid": "a5c054899abf8aa553da4a576577678e", "text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.", "title": "" }, { "docid": "63f3147a04a23867d40d6ff4f65868cd", "text": "The chemistry of graphene oxide is discussed in this critical review. Particular emphasis is directed toward the synthesis of graphene oxide, as well as its structure. Graphene oxide as a substrate for a variety of chemical transformations, including its reduction to graphene-like materials, is also discussed. This review will be of value to synthetic chemists interested in this emerging field of materials science, as well as those investigating applications of graphene who would find a more thorough treatment of the chemistry of graphene oxide useful in understanding the scope and limitations of current approaches which utilize this material (91 references).", "title": "" }, { "docid": "a85b110c84174cb1d1744ecc558f12da", "text": "A link between mental disorder and decreased ability is commonly assumed, but evidence to the contrary also exists. In reviewing any association between creativity and mental disorder, our aim is not only to update the literature but also to include an epidemiological and theoretical discussion of the topic. For literature retrieval, we used Medline, PsycINFO, and manual literature searches. Studies are numerous: most are empirical, many having methodological difficulties and variations in definitions and concepts. There is little consensus. However, some trends are apparent. 
We found 13 major case series (over 100 cases), case-control studies, or population-based studies, with valid, reliable measures of mental disorders. The results of all but one of these studies supported the association, at least when concerning particular groups of mental disorders; the findings were somewhat unclear in two studies. Most of the remainder that are not included in our more detailed examination also show a fragile association between creativity and mental disorder, but the link is not apparent for all groups of mental disorders or for all forms of creativity. In conclusion, evidence exists to support some form of association between creativity and mental disorder, but the direction of any causal link remains obscure.", "title": "" } ]
scidocsrr
c9be471f6c38a4643ea8929312a9778a
Analytical study of security aspects in 6LoWPAN networks
[ { "docid": "a231d6254a136a40625728d7e14d7844", "text": "This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract This document describes the frame format for transmission of IPv6 packets and the method of forming IPv6 link-local addresses and statelessly autoconfigured addresses on IEEE 802.15.4 networks. Additional specifications include a simple header compression scheme using shared context and provisions for packet delivery in IEEE 802.15.4 meshes.", "title": "" }, { "docid": "6c06e6656c8a6aefea1ce0d24e80aa44", "text": "If a wireless sensor network (WSN) is to be completely integrated into the Internet as part of the Internet of Things (IoT), it is necessary to consider various security challenges, such as the creation of a secure channel between an Internet host and a sensor node. In order to create such a channel, it is necessary to provide key management mechanisms that allow two remote devices to negotiate certain security credentials (e.g. secret keys) that will be used to protect the information flow. In this paper we will analyse not only the applicability of existing mechanisms such as public key cryptography and pre-shared keys for sensor nodes in the IoT context, but also the applicability of those link-layer oriented key management systems (KMS) whose original purpose is to provide shared keys for sensor nodes belonging to the same WSN. 2011 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "3776b7fdcd1460b60a18c87cd60b639e", "text": "A sketch is a probabilistic data structure that is used to record frequencies of items in a multi-set. Various types of sketches have been proposed in literature and applied in a variety of fields, such as data stream processing, natural language processing, distributed data sets etc. While several variants of sketches have been proposed in the past, existing sketches still have a significant room for improvement in terms of accuracy. In this paper, we propose a new sketch, called Slim-Fat (SF) sketch, which has a significantly higher accuracy compared to prior art, a much smaller memory footprint, and at the same time achieves the same speed as the best prior sketch. The key idea behind our proposed SF-sketch is to maintain two separate sketches: a small sketch called Slim-subsketch and a large sketch called Fat-subsketch. The Slim-subsketch, stored in the fast memory (SRAM), enables fast and accurate querying. The Fat-subsketch, stored in the relatively slow memory (DRAM), is used to assist the insertion and deletion from Slim-subsketch. We implemented and extensively evaluated SF-sketch along with several prior sketches and compared them side by side. Our experimental results show that SF-sketch outperforms the most commonly used CM-sketch by up to 33.1 times in terms of accuracy.", "title": "" }, { "docid": "872d06c4d3702d79cb1c7bcbc140881a", "text": "Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.\nExisting noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n-ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.", "title": "" }, { "docid": "9089a8cc12ffe163691d81e319ec0f25", "text": "Complex problem solving (CPS) emerged in the last 30 years in Europe as a new part of the psychology of thinking and problem solving. This paper introduces into the field and provides a personal view. Also, related concepts like macrocognition or operative intelligence will be explained in this context. Two examples for the assessment of CPS, Tailorshop and MicroDYN, are presented to illustrate the concept by means of their measurement devices. Also, the relation of complex cognition and emotion in the CPS context is discussed. The question if CPS requires complex cognition is answered with a tentative “yes.”", "title": "" }, { "docid": "eabb50988aeb711995ff35833a47770d", "text": "Although chemistry is by far the largest scientific discipline according to any quantitative measure, it had, until recently, been virtually ignored by professional philosophers of science. 
They left both a vacuum and a one-sided picture of science tailored to physics. Since the early 1990s, the situation has changed drastically, such that philosophy of chemistry is now one of the most flourishing fields in the philosophy of science, like the philosophy of biology that emerged in the 1970s. This article narrates the development and provides a survey of the main topics and trends.", "title": "" }, { "docid": "ecaf322e67c43b7d54a05de495a443eb", "text": "Recently, considerable effort has been devoted to deep domain adaptation in computer vision and machine learning communities. However, most of existing work only concentrates on learning shared feature representation by minimizing the distribution discrepancy across different domains. Due to the fact that all the domain alignment approaches can only reduce, but not remove the domain shift, target domain samples distributed near the edge of the clusters, or far from their corresponding class centers are easily to be misclassified by the hyperplane learned from the source domain. To alleviate this issue, we propose to joint domain alignment and discriminative feature learning, which could benefit both domain alignment and final classification. Specifically, an instance-based discriminative feature learning method and a center-based discriminative feature learning method are proposed, both of which guarantee the domain invariant features with better intra-class compactness and inter-class separability. Extensive experiments show that learning the discriminative features in the shared feature space can significantly boost the performance of deep domain adaptation methods.", "title": "" }, { "docid": "cc61cf5de5445258a1dbb9a052821add", "text": "In healthcare systems, there is huge medical data collected from many medical tests which conducted in many domains. Much research has been done to generate knowledge from medical data by using data mining techniques. However, there still needs to extract hidden information in the medical data, which can help in detecting diseases in the early stage or even before happening. In this study, we apply three data mining classifiers; Decision Tree, Rule Induction, and Naïve Bayes, on a test blood dataset which has been collected from Europe Gaza Hospital, Gaza Strip. The classifiers utilize the CBC characteristics to predict information about possible blood diseases in early stage, which may enhance the curing ability. Three experiments are conducted on the test blood dataset, which contains three types of blood diseases; Hematology Adult, Hematology Children and Tumor. The results show that Naïve Bayes classifier has the ability to predict the Tumor of blood disease better than the other two classifiers with accuracy of 56%, Rule induction classifier gives better result in predicting Hematology (Adult, Children) with accuracy of (57%–67%) respectively, while Decision Tree has the Lowest accuracy rate for detecting the three types of diseases in our dataset.", "title": "" }, { "docid": "a559652585e2df510c1dd060cdf65ead", "text": "Experience replay is an important technique for addressing sample-inefficiency in deep reinforcement learning (RL), but faces difficulty in learning from binary and sparse rewards due to disproportionately few successful experiences in the replay buffer. Hindsight experience replay (HER) (Andrychowicz et al. 
2017) was recently proposed to tackle this difficulty by manipulating unsuccessful transitions, but in doing so, HER introduces a significant bias in the replay buffer experiences and therefore achieves a suboptimal improvement in sample-efficiency. In this paper, we present an analysis on the source of bias in HER, and propose a simple and effective method to counter the bias, to most effectively harness the sample-efficiency provided by HER. Our method, motivated by counter-factual reasoning and called ARCHER, extends HER with a trade-off to make rewards calculated for hindsight experiences numerically greater than real rewards. We validate our algorithm on two continuous control environments from DeepMind Control Suite (Tassa et al. 2018) Reacher and Finger, which simulate manipulation tasks with a robotic arm in combination with various reward functions, task complexities and goal sampling strategies. Our experiments consistently demonstrate that countering bias using more aggressive hindsight rewards increases sample efficiency, thus establishing the greater benefit of ARCHER in RL applications with limited computing budget.", "title": "" }, { "docid": "378f33b14b499c65d75a0f83bda17438", "text": "We present the design of a soft wearable robotic device composed of elastomeric artificial muscle actuators and soft fabric sleeves, for active assistance of knee motions. A key feature of the device is the two-dimensional design of the elastomer muscles that not only allows the compactness of the device, but also significantly simplifies the manufacturing process. In addition, the fabric sleeves make the device lightweight and easily wearable. The elastomer muscles were characterized and demonstrated an initial contraction force of 38N and maximum contraction of 18mm with 104kPa input pressure, approximately. Four elastomer muscles were employed for assisted knee extension and flexion. The robotic device was tested on a 3D printed leg model with an articulated knee joint. Experiments were conducted to examine the relation between systematic change in air pressure and knee extension-flexion. The results showed maximum extension and flexion angles of 95° and 37°, respectively. However, these angles are highly dependent on underlying leg mechanics and positions. The device was also able to generate maximum extension and flexion forces of 3.5N and 7N, respectively.", "title": "" }, { "docid": "0a3bb33d5cff66346a967092202737ab", "text": "An Li-ion battery charger based on a charge-control buck regulator operating at 2.2 MHz is implemented in 180 nm CMOS technology. The novelty of the proposed charge-control converter consists of regulating the average output current by only sensing a portion of the inductor current and using an adaptive reference voltage. By adopting this approach, the charger average output current is set to a constant value of 900 mA regardless of the battery voltage variation. In constant-voltage (CV) mode, a feedback loop is established in addition to the preexisting current control loop, preserving the smoothness of the output voltage at the transition from constant-current (CC) to CV mode. A small-signal model has been developed to analyze the system stability and subharmonic oscillations at low current levels. Transistor-level simulations of the proposed switching charger are presented. The output voltage ranges from 2.1 to 4.2 V, and the power efficiency at 900 mA has been measured to be 86% for an input voltage of 10 V. 
The accuracy of the output current using the proposed sensing technique is 9.4% at 10 V.", "title": "" }, { "docid": "44f829c853c1cdd1cf2a0bd2622015bb", "text": "Alert is an extension architecture designed for transforming a passive SQL DBMS into an active DBMS. The salient features of the design of Alert are reusing, to the extent possible, the passive DBMS technology, and making minimal changes to the language and implementation of the passive DBMS. Alert provides a layered architecture that allows the semantics of a variety of production rule languages to be supported on top. Rules may be specified on user-defined as well as built-in operations. Both synchronous and asynchronous event monitoring are possible. This paper presents the design of Alert and its implementation in the Starburst extensible DBMS.", "title": "" }, { "docid": "9b96e643f59b53b2a470eae61d9613b6", "text": "The modeling of style when synthesizing natural human speech from text has been the focus of significant attention. Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples ⟨x_txt, x_aud⟩ by encouraging its output to reconstruct x_aud. The synthesized audio waveform is expected to contain the verbal content of x_txt and the auditory style of x_aud. Unfortunately, modeling style in TTS is somewhat under-determined and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation. In this work, we introduce TTS-GAN, an end-to-end TTS model that offers enhanced content-style disentanglement ability and controllability. We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme. The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real samples and generated samples in both the original space and the latent space. As a result, TTS-GAN delivers a highly controllable generator, and a disentangled representation. Benefiting from the separate modeling of style and content, TTS-GAN can generate human fidelity speech that satisfies the desired style conditions. TTS-GAN achieves state-of-the-art results across multiple tasks, including style transfer (content and style swapping), emotion modeling, and identity transfer (fitting a new speaker’s voice).", "title": "" }, { "docid": "e3d0d40a685d5224084bf350dfb3b59b", "text": "This review analyzes the methods being used and developed in global environmental governance (GEG), an applied field that employs insights and tools from a variety of disciplines both to understand pressing environmental problems and to determine how to address them collectively. We find that methods are often underspecified in GEG research. We undertake a critical review of data collection and analysis in three categories: qualitative, quantitative, and modeling and scenario building. We include examples and references from recent studies to show when and how best to utilize these different methods to conduct problem-driven research. GEG problems are often characterized by institutional and issue complexity, linkages, and multiscalarity that pose challenges for many conventional methodological approaches. As a result, given the large methodological toolbox available to applied researchers, we recommend they adopt a reflective, pluralist, and often collaborative approach when choosing methods appropriate to these challenges.
", "title": "" }, { "docid": "cd4a121221437f789a36075be41ae457", "text": "Providing good education is one of the major challenges for humanity. In many developing regions in the world improving educational standards is seen as a central building block for improving socio-economic situation of society. Based on our research in Panama we report on how mobile phones can be used as educational tools. In contrast to personal computers mobile phones are widely available and in Panama over 80% of the children have access to phones. We report on four different studies building on one another. We conducted surveys, focus groups, and group interviews with several hundred teachers and pupils to assess opportunities, needs, and threats for using phones in teaching and learning. Based on the feedback received we created a set of use cases and finally evaluated these in a field study in a rural multigrade school in Panama. Our findings suggest that current phones with multimedia capabilities provide a valuable resource for teaching and learning across many subjects. In particular recording of audio and video, programs for drawing, and taking photos were used in very creative and constructive ways beyond the use cases envisioned by us and initial skepticism of parents turned into support.", "title": "" }, { "docid": "6ce3156307df03190737ee7c0ae24c75", "text": "Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit any kind of external information, such as visual and linguistic information corresponding to the KG entities. In this paper, we propose a multimodal translation-based approach that defines the energy of a KG triple as the sum of sub-energy functions that leverage both multimodal (visual and linguistic) and structural KG representations. Next, a ranking-based loss is minimized using a simple neural network architecture. Moreover, we introduce a new large-scale dataset for multimodal KG representation learning. We compared the performance of our approach to other baselines on two standard tasks, namely knowledge graph completion and triple classification, using our as well as the WN9-IMG dataset. The results demonstrate that our approach outperforms all baselines on both tasks and datasets.", "title": "" }, { "docid": "ec3542685d1b6e71e523cdcafc59d849", "text": "The goal of subspace segmentation is to partition a set of data drawn from a union of subspaces into their underlying subspaces. The performance of spectral clustering based approaches heavily depends on learned data affinity matrices, which are usually constructed either directly from the raw data or from their computed representations. In this paper, we propose a novel method to simultaneously learn the representations of data and the affinity matrix of representation in a unified optimization framework. A novel Augmented Lagrangian Multiplier based algorithm is designed to effectively and efficiently seek the optimal solution of the problem. 
The experimental results on both synthetic and real data demonstrate the efficacy of the proposed method and its superior performance over the state-of-the-art alternatives.", "title": "" }, { "docid": "265b352775956004436b438574ee2d91", "text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4year historical sales data of a €60+ million turnover mediumto large-sized Italian fashion company, which operates in the women’s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software.", "title": "" }, { "docid": "69dce8bea305f4a0d6fabe7846d6ff22", "text": "This study aims to examine the satisfied and unsatisfied of hotel customers by utilizing a word cloud approach to evaluate online reviews. As a pilot test, online commends of 1,752 hotel guests were collected from TripAdvisor.com for 5 selected hotels in Chiang Mai, Thailand. The research results revealed some common features that are identified in both satisfied and dissatisfied of customer reviews; including staff service skills, hotel environment and facilities and a quality of room and bathroom. On the other hand, the findings shown that dissatisfied customers pointed out more frequently on the booking systems of the hotel. Therefore, this article's results suggests some clearer managerial implications pertaining to understanding of customer satisfaction level through the utilization of world cloud technique via review online platforms.", "title": "" }, { "docid": "e2908953ca9ec9d6097418ec0c701bf9", "text": "Recent years have seen significant advances on the creation of large-scale knowledge bases (KBs). Extracting knowledge from Web pages, and integrating it into a coherent KB is a task that spans the areas of natural language processing, information extraction, information integration, databases, search and machine learning. Some of the latest developments in the field were presented at the AKBC-WEKEX workshop on knowledge extraction at the NAACL-HLC 2012 conference. This workshop included 23 accepted papers, and 11 keynotes by senior researchers. The workshop had speakers from all major search engine providers, government institutions, and the leading universities in the field. In this survey, we summarize the papers, the keynotes, and the discussions at this workshop.", "title": "" }, { "docid": "b856143940b19888422c0c8bf5a3b441", "text": "Most statistical machine translation systems use phrase-to-phrase translations to capture local context information, leading to better lexical choice and more reliable local reordering. 
The quality of the phrase alignment is crucial to the quality of the resulting translations. Here, we propose a new phrase alignment method, not based on the Viterbi path of word alignment models. Phrase alignment is viewed as a sentence splitting task. For a given splitting of the source sentence (source phrase, left segment, right segment), find a splitting for the target sentence, which optimizes the overall sentence alignment probability. Experiments on different translation tasks show that this phrase alignment method leads to highly competitive translation results.", "title": "" }, { "docid": "7895810c92a80b6d5fd8b902241d66c9", "text": "This paper discusses a high-voltage pulse generator for producing corona plasma. The generator consists of three resonant charging circuits, a transmission line transformer, and a triggered spark-gap switch. Voltage pulses in the order of 30–100 kV with a rise time of 10–20 ns, a pulse duration of 100–200 ns, a pulse repetition rate of 1–900 pps, an energy per pulse of 0.5–12 J, and the average power of up to 10 kW have been achieved with total energy conversion efficiency of 80%–90%. Moreover, the system has been used in four industrial demonstrations on volatile organic compounds removal, odor emission control, and biogas conditioning.", "title": "" } ]
scidocsrr
d75d1cdb473873b2d4e8e2f13715c738
How Teachers Use Data to Help Students Learn: Contextual Inquiry for the Design of a Dashboard
[ { "docid": "2c8c8511e1391d300bfd4b0abd5ecea4", "text": "In 2009, we reported on a new Intelligent Tutoring Systems (ITS) technology, example-tracing tutors, that can be built without programming using the Cognitive Tutor Authoring Tools (CTAT). Creating example-tracing tutors was shown to be 4–8 times as cost-effective as estimates for ITS development from the literature. Since 2009, CTAT and its associated learning management system, the Tutorshop, have been extended and have been used for both research and real-world instruction. As evidence that example-tracing tutors are an effective and mature ITS paradigm, CTAT-built tutors have been used by approximately 44,000 students and account for 40 % of the data sets in DataShop, a large open repository for educational technology data sets. We review 18 example-tracing tutors built since 2009, which have been shown to be effective in helping students learn in real educational settings, often with large pre/post effect sizes. These tutors support a variety of pedagogical approaches, beyond step-based problem solving, including collaborative learning, educational games, and guided invention activities. CTAT and other ITS authoring tools illustrate that non-programmer approaches to building ITS are viable and useful and will likely play a key role in making ITS widespread.", "title": "" }, { "docid": "04d75786e12cabf5c849971ea4eb34c8", "text": "In this paper we present a learning analytics conceptual framework that supports enquiry-based evaluation of learning designs. The dimensions of the proposed framework emerged from a review of existing analytics tools, the analysis of interviews with teachers, and user scenarios to understand what types of analytics would be useful in evaluating a learning activity in relation to pedagogical intent. The proposed framework incorporates various types of analytics, with the teacher playing a key role in bringing context to the analysis and making decisions on the feedback provided to students as well as the scaffolding and adaptation of the learning design. The framework consists of five dimensions: temporal analytics, tool-specific analytics, cohort dynamics, comparative analytics and contingency. Specific metrics and visualisations are defined for each dimension of the conceptual framework. Finally the development of a tool that partially implements the conceptual framework is discussed.", "title": "" }, { "docid": "273153d0cf32162acb48ed989fa6d713", "text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" } ]
[ { "docid": "375470d901a7d37698d34747621667ce", "text": "RNA interference (RNAi) has recently emerged as a specific and efficient method to silence gene expression in mammalian cells either by transfection of short interfering RNAs (siRNAs; ref. 1) or, more recently, by transcription of short hairpin RNAs (shRNAs) from expression vectors and retroviruses. But the resistance of important cell types to transduction by these approaches, both in vitro and in vivo, has limited the use of RNAi. Here we describe a lentiviral system for delivery of shRNAs into cycling and non-cycling mammalian cells, stem cells, zygotes and their differentiated progeny. We show that lentivirus-delivered shRNAs are capable of specific, highly stable and functional silencing of gene expression in a variety of cell types and also in transgenic mice. Our lentiviral vectors should permit rapid and efficient analysis of gene function in primary human and animal cells and tissues and generation of animals that show reduced expression of specific genes. They may also provide new approaches for gene therapy.", "title": "" }, { "docid": "5e1f51b3d9b6ff91fbba6b7d155ecfaf", "text": "If a teleoperation scenario foresees complex and fine manipulation tasks a multi-fingered telemanipulation system is required. In this paper a multi-fingered telemanipulation system is presented, whereby the human hand controls a three-finger robotic gripper and force feedback is provided by using an exoskeleton. Since the human hand and robotic grippers have different kinematic structures, appropriate mappings for forces and positions are applied. A point-to-point position mapping algorithm as well as a simple force mapping algorithm are presented and evaluated in a real experimental setup.", "title": "" }, { "docid": "b325f262a6f84637c8a175c29f07db34", "text": "The aim of this article is to present a synthetic overview of the state of knowledge regarding the Celtic cultures in the northwestern Iberian Peninsula. It reviews the difficulties linked to the fact that linguists and archaeologists do not agree on this subject, and that the hegemonic view rejects the possibility that these populations can be considered Celtic. On the other hand, the examination of a range of direct sources of evidence, including literary and epigraphic texts, and the application of the method of historical anthropology to the available data, demonstrate the validity of the consideration of Celtic culture in this region, which can be described as a protohistorical society of the Late Iron Age, exhibiting a hierarchical organization based on ritually chosen chiefs whose power was based in part on economic redistribution of resources, together with a priestly elite more or less of the druidic type. However, the method applied cannot on its own answer the questions of when and how this Celtic cultural dimension of the proto-history of the northwestern Iberian Peninsula developed.", "title": "" }, { "docid": "94076bd2a4587df2bee9d09e81af2109", "text": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. 
As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.", "title": "" }, { "docid": "4e924d619325ca939955657db1280db1", "text": "This paper presents the dynamic modeling of a nonholonomic mobile robot and the dynamic stabilization problem. The dynamic model is based on the kinematic one including nonholonomic constraints. The proposed control strategy allows to solve the control problem using linear controllers and only requires the robot localization coordinates. This strategy was tested by simulation using Matlab-Simulink. Key-words: Mobile robot, kinematic and dynamic modeling, simulation, point stabilization problem.", "title": "" }, { "docid": "985e8fae88a81a2eec2ca9cc73740a0f", "text": "Negative symptoms account for much of the functional disability associated with schizophrenia and often persist despite pharmacological treatment. Cognitive behavioral therapy (CBT) is a promising adjunctive psychotherapy for negative symptoms. The treatment is based on a cognitive formulation in which negative symptoms arise and are maintained by dysfunctional beliefs that are a reaction to the neurocognitive impairment and discouraging life events frequently experienced by individuals with schizophrenia. This article outlines recent innovations in tailoring CBT for negative symptoms and functioning, including the use of a strong goal-oriented recovery approach, in-session exercises designed to disconfirm dysfunctional beliefs, and adaptations to circumvent neurocognitive and engagement difficulties. A case illustration is provided.", "title": "" }, { "docid": "21326db81a613fc84184c19408bc67ac", "text": "In the scenario where an underwater vehicle tracks an underwater target, reliable estimation of the target position is required.While USBL measurements provide target position measurements at low but regular update rate, multibeam sonar imagery gives high precision measurements but in a limited field of view. This paper describes the development of the tracking filter that fuses USBL and processed sonar image measurements for tracking underwater targets for the purpose of obtaining reliable tracking estimates at steady rate, even in cases when either sonar or USBL measurements are not available or are faulty. The proposed algorithms significantly increase safety in scenarios where underwater vehicle has to maneuver in close vicinity to human diver who emits air bubbles that can deteriorate tracking performance. In addition to the tracking filter development, special attention is devoted to adaptation of the region of interest within the sonar image by using tracking filter covariance transformation for the purpose of improving detection and avoiding false sonar measurements. 
Developed algorithms are tested on real experimental data obtained in field conditions. Statistical analysis shows superior performance of the proposed filter compared to conventional tracking using pure USBL or sonar measurements.", "title": "" }, { "docid": "8f9309ebfc87de5eb7cf715c0370da54", "text": "Hyperbolic discounting of future outcomes is widely observed to underlie choice behavior in animals. Additionally, recent studies (Kobayashi & Schultz, 2008) have reported that hyperbolic discounting is observed even in neural systems underlying choice. However, the most prevalent models of temporal discounting, such as temporal difference learning, assume that future outcomes are discounted exponentially. Exponential discounting has been preferred largely because it can be expressed recursively, whereas hyperbolic discounting has heretofore been thought not to have a recursive definition. In this letter, we define a learning algorithm, hyperbolically discounted temporal difference (HDTD) learning, which constitutes a recursive formulation of the hyperbolic model.", "title": "" }, { "docid": "cb47cc2effac1404dd60a91a099699d1", "text": "We survey recent trends in practical algorithms for balanced graph partitioning, point to applications and discuss future research directions.", "title": "" }, { "docid": "fb1b80f1e7109b382994ca61b993ad71", "text": "We present a novel approach to real-time dense visual SLAM. Our system is capable of capturing comprehensive dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera in an incremental online fashion, without pose graph optimisation or any postprocessing steps. This is accomplished by using dense frame-tomodel camera tracking and windowed surfel-based fusion coupled with frequent model refinement through non-rigid surface deformations. Our approach applies local model-to-model surface loop closure optimisations as often as possible to stay close to the mode of the map distribution, while utilising global loop closure to recover from arbitrary drift and maintain global consistency.", "title": "" }, { "docid": "9dec1ac5acaef4ae9ddb5e65e4097773", "text": "We propose a novel fully convolutional network architecture for shapes, denoted by Shape Fully Convolutional Networks (SFCN). 3D shapes are represented as graph structures in the SFCN architecture, based on novel graph convolution and pooling operations, which are similar to convolution and pooling operations used on images. Meanwhile, to build our SFCN architecture in the original image segmentation fully convolutional network (FCN) architecture, we also design and implement a generating operation with bridging function. This ensures that the convolution and pooling operation we have designed can be successfully applied in the original FCN architecture. In this paper, we also present a new shape segmentation approach based on SFCN. Furthermore, we allow more general and challenging input, such as mixed datasets of different categories of shapes which can prove the ability of our generalisation. In our approach, SFCNs are trained triangles-to-triangles by using three low-level geometric features as input. Finally, the feature voting-based multi-label graph cuts is adopted to optimise the segmentation results obtained by SFCN prediction. 
The experiment results show that our method can effectively learn and predict mixed shape datasets of either similar or different characteristics, and achieve excellent segmentation results.", "title": "" }, { "docid": "9a6de540169834992134eb02927d889d", "text": "In this paper we argue why it is necessary to associate linguistic information with ontologies and why more expressive models, beyond RDFS, OWL and SKOS, are needed to capture the relation between natural language constructs on the one hand and ontological entities on the other. We argue that in the light of tasks such as ontology-based information extraction, ontology learning and population from text and natural language generation from ontologies, currently available datamodels are not sufficient as they only allow to associate atomic terms without linguistic grounding or structure to ontology elements. Towards realizing a more expressive model for associating linguistic information to ontology elements, we base our work presented here on previously developed models (LingInfo, LexOnto, LMF) and present a new joint model for linguistic grounding of ontologies called LexInfo. LexInfo combines essential design aspects of LingInfo and LexOnto and builds on a sound model for representing computational lexica called LMF which has been recently approved as a standard under ISO.", "title": "" }, { "docid": "8b550446a16158b7d3eefacd2d6396ff", "text": "We propose a theory of eigenvalues, eigenvectors, singular values, and singular vectors for tensors based on a constrained variational approach much like the Rayleigh quotient for symmetric matrix eigenvalues. These notions are particularly useful in generalizing certain areas where the spectral theory of matrices has traditionally played an important role. For illustration, we will discuss a multilinear generalization of the Perron-Frobenius theorem.", "title": "" }, { "docid": "513455013ecb2f4368566ba30cdb8d7f", "text": "Many modern multi-core processors sport a large shared cache with the primary goal of enhancing the statistic performance of computing workloads. However, due to resulting cache interference among tasks, the uncontrolled use of such a shared cache can significantly hamper the predictability and analyzability of multi-core real-time systems. Software cache partitioning has been considered as an attractive approach to address this issue because it does not require any hardware support beyond that available on many modern processors. However, the state-of-the-art software cache partitioning techniques face two challenges: (1) the memory co-partitioning problem, which results in page swapping or waste of memory, and (2) the availability of a limited number of cache partitions, which causes degraded performance. These are major impediments to the practical adoption of software cache partitioning. In this paper, we propose a practical OS-level cache management scheme for multi-core real-time systems. Our scheme provides predictable cache performance, addresses the aforementioned problems of existing software cache partitioning, and efficiently allocates cache partitions to schedule a given task set. We have implemented and evaluated our scheme in Linux/RK running on the Intel Core i7 quad-core processor. Experimental results indicate that, compared to the traditional approaches, our scheme is up to 39% more memory space efficient and consumes up to 25% less cache partitions while maintaining cache predictability. 
Our scheme also yields a significant utilization benefit that increases with the number of tasks.", "title": "" }, { "docid": "f8854602bbb2f5295a5fba82f22ca627", "text": "Models such as latent semantic analysis and those based on neural embeddings learn distributed representations of text, and match the query against the document in the latent semantic space. In traditional information retrieval models, on the other hand, terms have discrete or local representations, and the relevance of a document is determined by the exact matches of query terms in the body text. We hypothesize that matching with distributed representations complements matching with traditional local representations, and that a combination of the two is favourable. We propose a novel document ranking model composed of two separate deep neural networks, one that matches the query and the document using a local representation, and another that matches the query and the document using learned distributed representations. The two networks are jointly trained as part of a single neural network. We show that this combination or ‘duet’ performs significantly better than either neural network individually on a Web page ranking task, and significantly outperforms traditional baselines and other recently proposed models based on neural networks.", "title": "" }, { "docid": "10512cddabf509100205cb241f2f206a", "text": "Due to the increasing growth of Internet usage, cybercrime has been increasing at an alarming rate and has become a highly profitable criminal activity. The botnet is an emerging threat to cyber security, and the existence of a Command and Control Server (C&C Server) makes it far more dangerous than other malware attacks. A botnet is a network of compromised machines that are remotely controlled by a bot master to carry out various malicious activities with the help of a command and control server and a number of slave machines called bots. The main motives behind botnets are identity theft, denial-of-service attacks, click fraud, phishing and many other malware activities. Botnets rely on different protocols such as IRC, HTTP and P2P for transmission. Different botnet detection techniques have been proposed in recent years. This paper discusses botnets, their history, and their life cycle, apart from classifying the various botnet detection techniques. The paper also highlights recent research work on botnets in the cyber realm and proposes directions for future research in this area.", "title": "" }, { "docid": "0d1193978e4f8be0b78c6184d7ece3fe", "text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. 
In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach, network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classifiers that, when presented with a network as an input, classify its category or class as an output. To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …", "title": "" }, { "docid": "165195f20110158a26bc62b74821dc46", "text": "Prior studies on knowledge contribution started with the motivating role of social capital to predict knowledge contribution but did not specifically examine how it can be built in the first place. Our research addresses this gap by highlighting the role technology plays in supporting the development of social capital and eventual knowledge sharing intention. Herein, we propose four technology-based social capital builders – identity profiling, sub-community building, feedback mechanism, and regulatory practice – and theorize that individuals' use of these IT artifacts determines the formation of social capital, which, in turn, motivates knowledge contribution in online communities. Data collected from 253 online community users provide support for the proposed structural model. The results show that use of IT artifacts facilitates the formation of social capital (network ties, shared language, identification, trust in online community, and norms of cooperation) and their effects on knowledge contribution operate indirectly through social capital.", "title": "" }, { "docid": "4958f4a85b531a2d5a846d1f6eb1a5a3", "text": "The n-channel lateral double-diffused metal-oxide-semiconductor (nLDMOS) devices in high-voltage (HV) technologies are known to have poor electrostatic discharge (ESD) robustness. To improve the ESD robustness of nLDMOS, a co-design method combining a new waffle layout structure and a trigger circuit is proposed to fulfill the body current injection technique in this work. 
The proposed layout and circuit co-design method on HV nLDMOS has successfully been verified in a 0.5-μm 16-V bipolar-CMOS-DMOS (BCD) process and a 0.35-μm 24-V BCD process without using additional process modification. Experimental results through transmission line pulse measurement and failure analyses have shown that the proposed body current injection technique can significantly improve the ESD robustness of HV nLDMOS.", "title": "" }, { "docid": "6d9f5f9e61c9b94febdd8e04cf999636", "text": "The Internet offers the hope of a more democratic society. By promoting a decentralized form of social mobilization, it is said, the Internet can help us to renovate our institutions and liberate ourselves from our authoritarian legacies. The Internet does indeed hold these possibilities, but they are hardly inevitable. In order for the Internet to become a tool for social progress, not a tool of oppression or another centralized broadcast medium or simply a waste of money, concerned citizens must understand the different ways in which the Internet can become embedded in larger social processes. In thinking about culturally appropriate ways of using technologies like the Internet, the best starting-point is with people: coherent communities of people and the ways they think together. Let us consider an example. A photocopier company asked an anthropologist named Julian Orr to study its repair technicians and recommend the best ways to use technology in supporting their work. Orr (1996) took a broad view of the technicians' lives, learning some of their skills and following them around. Each morning the technicians would come to work, pick up their company vehicles, and drive to customers' premises where photocopiers needed fixing; each evening they would return to the company, go to a bar together, and drink beer. Although the company had provided the technicians with formal training, Orr discovered that they actually acquired much of their expertise informally while drinking beer together. Having spent the day contending with difficult repair problems, they would entertain one another with 'war stories', and these stories often helped them with future repairs. He suggested, therefore, that the technicians be given radio equipment so that they could remain in contact all day, telling stories and helping each other with their repair tasks. As Orr's (1996) story suggests, people think together best when they have something important in common. Networking technologies can often be used to create a", "title": "" } ]
scidocsrr
04a51b0a3185d7a7fccb38fe05df2787
Easy 4G/LTE IMSI Catchers for Non-Programmers
[ { "docid": "2a8f464e709dcae4e34f73654aefe31f", "text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.", "title": "" }, { "docid": "52dc8e0d8302bb40230202105307d2e1", "text": "LTE is currently being proposed for use in a nationwide wireless broadband public safety network in the United States as well as for other critical applications where reliable communication is essential for safety. Unfortunately, like any wireless technology, disruption of these networks is possible through radio jamming. This article investigates the extent to which LTE is vulnerable to RF jamming, spoofing, and sniffing, and assesses different physical layer threats that could affect next-generation critical communication networks. In addition, we examine how sniffing the LTE broadcast messages can aid an adversary in an attack. The weakest links of LTE are identified and used to establish an overall threat assessment. Lastly, we provide a survey of LTE jamming and spoofing mitigation techniques that have been proposed in the open literature.", "title": "" }, { "docid": "97d7281f14c9d9e745fe6f63044a7d91", "text": "The Long Term Evolution (LTE) is the latest mobile standard being implemented globally to provide connectivity and access to advanced services for personal mobile devices. Moreover, LTE networks are considered to be one of the main pillars for the deployment of Machine to Machine (M2M) communication systems and the spread of the Internet of Things (IoT). As an enabler for advanced communications services with a subscription count in the billions, security is of capital importance in LTE. Although legacy GSM (Global System for Mobile Communications) networks are known for being insecure and vulnerable to rogue base stations, LTE is assumed to guarantee confidentiality and strong authentication. However, LTE networks are vulnerable to security threats that tamper availability, privacy and authentication. 
This manuscript, which summarizes and expands the results presented by the author at ShmooCon 2016 [1], investigates the insecurity rationale behind LTE protocol exploits and LTE rogue base stations based on the analysis of real LTE radio link captures from the production network. Implementation results are discussed from the actual deployment of LTE rogue base stations, IMSI catchers and exploits that can potentially block a mobile device. A previously unknown technique to potentially track the location of mobile devices as they move from cell to cell is also discussed, with mitigations being proposed.", "title": "" } ]
[ { "docid": "f46136360aef128b54860caf50e8cc77", "text": "We propose an FPGA chip architecture based on a conventional FPGA logic array core, in which I/O pins are clocked at a much higher rate than that of the logic array that they serve. Wide data paths within the chip are time multiplexed at the edge of the chip into much faster and narrower data paths that run offchip. This kind of arrangement makes it possible to interface a relatively slow FPGA core with high speed memories and data streams, and is useful for many pin-limited FPGA applications. For efficient use of the highest bandwidth DRAM’s, our proposed chip includes a RAMBUS DRAM interface, a burst-transfer controller, and burst buffers. This proposal is motivated by our work with virtual processor cellular automata (CA) machines—a kind of SIMD computer. Our next generation of CA machines requires reconfigurable FPGA-like processors coupled to the highest speed DRAM’s and SRAM’s available. Unfortunately, no current FPGA chips have appropriate DRAM I/O support or the speed needed to easily interface with pipelined SRAM’s. The chips proposed here would make a wide range of large-scale CA simulations of 3D physical systems practical and economical—simulations that are currently well beyond the reach of any existing computer. These chips would also be well suited to a broad range of other simulation, graphics, and DSP-like applications.", "title": "" }, { "docid": "56642ffad112346186a5c3f12133e59b", "text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government’s aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.", "title": "" }, { "docid": "97075bfa0524ad6251cefb2337814f32", "text": "Reverberation distorts human speech and usually has negative effects on speech intelligibility, especially for hearing-impaired listeners. It also causes performance degradation in automatic speech recognition and speaker identification systems. Therefore, the dereverberation problem must be dealt with in daily listening environments. We propose to use deep neural networks (DNNs) to learn a spectral mapping from the reverberant speech to the anechoic speech. The trained DNN produces the estimated spectral representation of the corresponding anechoic speech. We demonstrate that distortion caused by reverberation is substantially attenuated by the DNN whose outputs can be resynthesized to the dereverebrated speech signal. The proposed approach is simple, and our systematic evaluation shows promising dereverberation results, which are significantly better than those of related systems.", "title": "" }, { "docid": "4dc05debbbe6c8103d772d634f91c86c", "text": "In this paper we shows the experimental results using a microcontroller and hardware integration with the EMC2 software, using the Fuzzy Gain Scheduling PI Controller in a mechatronic prototype. 
The structure of the fuzzy controller is composed of two inputs and two outputs; it is a TITO system. The error control feedback and its derivative are the inputs, while the proportional and integral gains are the fuzzy controller outputs. Five Gaussian membership functions were defined for the fuzzy sets of each input; the product fuzzy logic operator (AND connective) and the centroid defuzzifier were used to infer the gain outputs. The fuzzy rule base is of zero-order Sugeno type. The experimental results in closed-loop show the viability and effectiveness of the position fuzzy controller strategy. To verify the robustness of this controller structure, two different experiments were carried out in closed-loop: one undisturbed and one with disturbance. This work presents comparative experimental results, using the classical Ziegler-Nichols tuning rule and the Fuzzy Gain Scheduling PI Controller, for a mechatronic system widely used in various industrial applications.", "title": "" }, { "docid": "0a648f94b608b57827c8d6ce097037b1", "text": "The emergence of PV inverters and Electric Vehicles (EVs) has created an increased demand for high power densities and high efficiency in power converters. Silicon carbide (SiC) is the candidate of choice to meet this demand, and it has, therefore, been the object of a growing interest over the past decade. The Boost converter is an essential part in most PV inverters and EVs. This paper presents a new generation of 1200V 20A SiC true MOSFET used in a 10 kW hard-switching interleaved Boost converter with a high switching frequency of up to 100 kHz. It compares thermal performance and efficiency with a high-speed Silicon H3 IGBT. In both cases, results show a clear advantage for this new generation SiC MOSFET. Keywords—Silicon Carbide; MOSFET; Interleaved; Hard Switching; Boost converter; IGBT", "title": "" }, { "docid": "521699fc8fc841e8ac21be51370b439f", "text": "Scene understanding is an essential technique in semantic segmentation. Although there exist several datasets that can be used for semantic segmentation, they are mainly focused on semantic image segmentation with large deep neural networks. Therefore, these networks are not useful for real time applications, especially in autonomous driving systems. In order to solve this problem, we make two contributions to the semantic segmentation task. The first contribution is that we introduce the semantic video dataset, the Highway Driving dataset, which is a densely annotated benchmark for a semantic video segmentation task. The Highway Driving dataset consists of 20 video sequences having a 30Hz frame rate, and every frame is densely annotated. Secondly, we propose a baseline algorithm that utilizes a temporal correlation. Together with our attempt to analyze the temporal correlation, we expect the Highway Driving dataset to encourage research on semantic video segmentation.", "title": "" }, { "docid": "d6cf367f29ed1c58fb8fd0b7edf69458", "text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. 
Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.", "title": "" }, { "docid": "524ecdd2bfeb26f193f3121253cc5ca4", "text": "The use of massive multiple-input multiple-output (MIMO) techniques for communication at millimeter-Wave (mmW) frequency bands has become a key enabler to meet the data rate demands of the upcoming fifth generation (5G) cellular systems. In particular, analog and hybrid beamforming solutions are receiving increasing attention as less expensive and more power efficient alternatives to fully digital precoding schemes. Despite their proven good performance in simple setups, their suitability for realistic cellular systems with many interfering base stations and users is still unclear. Furthermore, the performance of massive MIMO beamforming and precoding methods are in practice also affected by practical limitations and hardware constraints. In this sense, this paper assesses the performance of digital precoding and analog beamforming in an urban cellular system with an accurate mmW channel model under both ideal and realistic assumptions. The results show that analog beamforming can reach the performance of fully digital maximum ratio transmission under line of sight conditions and with a sufficient number of parallel radio-frequency (RF) chains, especially when the practical limitations of outdated channel information and per antenna power constraints are considered. This work also shows the impact of the phase shifter errors and combiner losses introduced by real phase shifter and combiner implementations over analog beamforming, where the former ones have minor impact on the performance, while the latter ones determine the optimum number of RF chains to be used in practice.", "title": "" }, { "docid": "c6cfc50062e42f51c9ac0db3b4faed83", "text": "We put forward two new measures of security for threshold schemes secure in the adaptive adversary model: security under concurrent composition; and security without the assumption of reliable erasure. Using novel constructions and analytical tools, in both these settings, we exhibit efficient secure threshold protocols for a variety of cryptographic applications. In particular, based on the recent scheme by Cramer-Shoup, we construct adaptively secure threshold cryptosystems secure against adaptive chosen ciphertext attack under the DDH intractability assumption. Our techniques are also applicable to other cryptosystems and signature schemes, like RSA, DSS, and ElGamal. Our techniques include the first efficient implementation, for a wide but special class of protocols, of secure channels in erasure-free adaptive model. Of independent interest, we present the notion of a committed proof.", "title": "" }, { "docid": "8dae37ecc2e1bdb6bc8a625b565ea7e8", "text": "Friendships are essential for adolescent social development. 
However, they may be pursued for varying motives, which, in turn, may predict similarity in friendships via social selection or social influence processes, and likely help to explain friendship quality. We examined the effect of early adolescents' (N = 374, 12-14 years) intrinsic and extrinsic friendship motivation on friendship selection and social influence by utilizing social network modeling. In addition, longitudinal relations among motivation and friendship quality were estimated with structural equation modeling. Extrinsic motivation predicted activity in making friendship nominations during the sixth grade and lower friendship quality across time. Intrinsic motivation predicted inactivity in making friendship nominations during the sixth, popularity as a friend across the transition to middle school, and higher friendship quality across time. Social influence effects were observed for both motives, but were more pronounced for intrinsic motivation.", "title": "" }, { "docid": "cb59c880b3848b7518264f305cfea32a", "text": "Leakage current reduction is crucial for the transformerless photovoltaic inverters. The conventional three-phase current source H6 inverter suffers from the large leakage current, which restricts its application to transformerless PV systems. In order to overcome the limitations, a new three-phase current source H7 (CH7) inverter is proposed in this paper. Only one additional Insulated Gate Bipolar Transistor is needed, but the leakage current can be effectively suppressed with a new space vector modulation (SVM). Finally, the experimental tests are carried out on the proposed CH7 inverter, and the experimental results verify the effectiveness of the proposed topology and SVM method.", "title": "" }, { "docid": "be17532b93e28edb4f73462cfe17f96d", "text": "OBJECTIVES\nThe purpose of this study was to conduct a review of randomized controlled trials (RCTs) to determine the treatment effectiveness of the combination of manual therapy (MT) with other physical therapy techniques.\n\n\nMETHODS\nSystematic searches of scientific literature were undertaken on PubMed and the Cochrane Library (2004-2014). The following terms were used: \"patellofemoral pain syndrome,\" \"physical therapy,\" \"manual therapy,\" and \"manipulation.\" RCTs that studied adults diagnosed with patellofemoral pain syndrome (PFPS) treated by MT and physical therapy approaches were included. The quality of the studies was assessed by the Jadad Scale.\n\n\nRESULTS\nFive RCTs with an acceptable methodological quality (Jadad ≥ 3) were selected. The studies indicated that MT combined with physical therapy has some effect on reducing pain and improving function in PFPS, especially when applied on the full kinetic chain and when strengthening hip and knee muscles.\n\n\nCONCLUSIONS\nThe different combinations of MT and physical therapy programs analyzed in this review suggest that giving more emphasis to proximal stabilization and full kinetic chain treatments in PFPS will help better alleviation of symptoms.", "title": "" }, { "docid": "b5dd56652cfa2ff8cac6159ff8563213", "text": "For decades, optimization has played a central role in addressing wireless resource management problems such as power control and beamformer design. However, these algorithms often require a considerable number of iterations for convergence, which poses challenges for real-time processing. In this work, we propose a new learning-based approach for wireless resource management. 
The key idea is to treat the input and output of a resource allocation algorithm as an unknown non-linear mapping and to use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately and effectively by a DNN of moderate size, then such DNN can be used for resource allocation in almost real time, since passing the input through a DNN to get the output only requires a small number of simple operations. In this work, we first characterize a class of ‘learnable algorithms’ and then design DNNs to approximate some algorithms of interest in wireless communications. We use extensive numerical simulations to demonstrate the superior ability of DNNs for approximating two considerably complex algorithms that are designed for power allocation in wireless transmit signal design, while giving orders of magnitude speedup in computational time.", "title": "" }, { "docid": "7681e7fa005b0d101b122a757ad45452", "text": "Recent studies have demonstrated an increase in the necessity of adaptive planning over the course of lung cancer radiation therapy (RT) treatment. In this study, we evaluated intrathoracic changes detected by cone-beam CT (CBCT) in lung cancer patients during RT. A total of 71 lung cancer patients treated with fractionated CBCT-guided RT were evaluated. Intrathoracic changes and plan adaptation priority (AP) scores were compared between small cell lung cancer (SCLC, n = 13) and non-small cell lung cancer (NSCLC, n = 58) patients. The median cumulative radiation dose administered was 54 Gy (range 30–72 Gy) and the median fraction dose was 1.8 Gy (range 1.8–3.0 Gy). All patients were subjected to a CBCT scan at least weekly (range 1–5/week). We observed intrathoracic changes in 83 % of the patients over the course of RT [58 % (41/71) regression, 17 % (12/71) progression, 20 % (14/71) atelectasis, 25 % (18/71) pleural effusion, 13 % (9/71) infiltrative changes, and 10 % (7/71) anatomical shift]. Nearly half, 45 % (32/71), of the patients had one intrathoracic soft tissue change, 22.5 % (16/71) had two, and three or more changes were observed in 15.5 % (11/71) of the patients. Plan modifications were performed in 60 % (43/71) of the patients. Visual volume reduction did correlate with the number of CBCT scans acquired (r = 0.313, p = 0.046) and with the timing of chemotherapy administration (r = 0.385, p = 0.013). Weekly CBCT monitoring provides an adaptation advantage in patients with lung cancer. In this study, the monitoring allowed for plan adaptations due to tumor volume changes and to other anatomical changes. Neuere Studien haben eine zunehmende Notwendigkeit der adaptiven Bestrahlungsplanung im Verlauf der Bestrahlungsserie bei Patienten mit Lungenkrebs nachgewiesen. In der vorliegenden Studie haben wir intrathorakale Änderungen mittels Cone-beam-CT (CBCT) bei Lungenkrebspatienten während der Radiotherapie (RT) analysiert. Analysiert wurden 71 Patienten, die eine fraktionierte CBCT-basierte RT bei Lungenkrebs erhalten haben. Intrathorakale Veränderungen und Priorität-Scores für die adaptive Plananpassung (AP) wurden zwischen kleinzelligem (SCLC: 13 Patienten) und nicht-kleinzelligem Bronchialkarzinom (NSCLC: 58 Patienten) verglichen. Die mediane kumulative Strahlendosis betrug 54 Gy (Spanne 30–72 Gy), die mediane Einzeldosis 1,8 Gy (Spanne 1,8–3,0 Gy). Alle Patienten wurden mit einem CBCT-Scan mindestens einmal wöchentlich (Spanne 1–5/Woche) untersucht. 
Wir beobachteten intrathorakale Änderungen in 83% der Patienten im Verlauf der RT [58 % (41/71) Regression, 17 % (12/71) Progression, 20 % (14/71) Atelektase, 25 % (18/71) Pleuraerguss, 13 % (9/71) infiltrative Veränderungen und 10 % (7/71) anatomische Verschiebung des Tumors]. Fast die Hälfte der Patienten hatte eine intrathorakale Weichgewebeveränderung (45 %, 32/71) 22,5 % (16/71) hatten zwei. Drei oder mehr Veränderungen wurden in 15,5 % (11/71) der Patienten beobachtet. Planmodifikationen wurden in 60 % (43/71) der Patienten durchgeführt. Die visuelle Volumenreduktion korrelierte mit der Anzahl der erworbenen CBCT-Scans (r = 0,313; p = 0,046) als auch mit dem Zeitpunkt der Verabreichung der Chemotherapie (r = 0,385; p = 0,013). Das wöchentliche CBCT-Monitoring bietet einen Adaptationsvorteil bei Patienten mit Lungenkrebs. In dieser Studie hat das Monitoring die adaptiven Plananpassungen auf Basis der Tumorvolumenveränderungen sowie der anderen intrathorakalen anatomischen Veränderungen ermöglicht.", "title": "" }, { "docid": "dc776c4fdf073db69633cc4e2e43de09", "text": "A security API is an Application Program Interface that allows untrusted code to access sensitive resources in a secure way. Examples of security APIs include the interface between the tamper-resistant chip on a smartcard (trusted) and the card reader (untrusted), the interface between a cryptographic Hardware Security Module, or HSM (trusted) and the client machine (untrusted), and the Google maps API (an interface between a server, trusted by Google, and the rest of the Internet). The crucial aspect of a security API is that it is designed to enforce a policy, i.e. no matter what sequence of commands in the interface are called, and no matter what the parameters, certain security properties should continue to hold. This means that if the less trusted code turns out to be malicious (or just faulty), the carefully designed API should prevent compromise of critical data. Designing such an interface is extremely tricky even for experts. A number of security flaws have been found in APIs in use in deployed systems in the last decade. In this tutorial paper, we introduce the subject of security API analysis using formal techniques. This approach has recently proved highly successful both in finding new flaws and verifying security properties of improved designs. We will introduce the main techniques, many of which have been adapted from language-based security and security protocol verification, by means of two case studies: cryptographic key management, and Personal Identification Number (PIN) processing in the cash machine network. We will give plenty of examples of API attacks, and highlight the areas where more research is needed.", "title": "" }, { "docid": "009543f9b54e116f379c95fe255e7e03", "text": "With technology migration into nano and molecular scales several hybrid CMOS/nano logic and memory architectures have been proposed that aim to achieve high device density with low power consumption. The discovery of the memristor has further enabled the realization of denser nanoscale logic and memory systems by facilitating the implementation of multilevel logic. This work describes the design of such a multilevel nonvolatile memristor memory system, and the design constraints imposed in the realization of such a memory. In particular, the limitations on load, bank size, number of bits achievable per device, placed by the required noise margin for accurately reading and writing the data stored in a device are analyzed. 
Also analyzed are the nondisruptive read and write methodologies for the hybrid multilevel memristor memory to program and read the memristive information without corrupting it. This work showcases two write methodologies that leverage the best traits of memristors when used in either linear (low power) or nonlinear drift (fast speeds) modes. The system can therefore be tailored depending on the required performance parameters of a given application for a fast memory or a slower but very energy-efficient system. We propose for the first time, a hybrid memory that aims to incorporate the area advantage provided by the utilization of multilevel logic and nanoscale memristive devices in conjunction with CMOS for the realization of a high density nonvolatile multilevel memory.", "title": "" }, { "docid": "0be92a74f0ff384c66ef88dd323b3092", "text": "When facing uncertainty, adaptive behavioral strategies demand that the brain performs probabilistic computations. In this probabilistic framework, the notion of certainty and confidence would appear to be closely related, so much so that it is tempting to conclude that these two concepts are one and the same. We argue that there are computational reasons to distinguish between these two concepts. Specifically, we propose that confidence should be defined as the probability that a decision or a proposition, overt or covert, is correct given the evidence, a critical quantity in complex sequential decisions. We suggest that the term certainty should be reserved to refer to the encoding of all other probability distributions over sensory and cognitive variables. We also discuss strategies for studying the neural codes for confidence and certainty and argue that clear definitions of neural codes are essential to understanding the relative contributions of various cortical areas to decision making.", "title": "" }, { "docid": "a53a81b0775992ea95db85b045463ddf", "text": "We start by asking an interesting yet challenging question, “If a large proportion (e.g., more than 90% as shown in Fig. 1) of the face/sketch is missing, can a realistic whole face sketch/image still be estimated?” Existing face completion and generation methods either do not conduct domain transfer learning or can not handle large missing area. For example, the inpainting approach tends to blur the generated region when the missing area is large (i.e., more than 50%). In this paper, we exploit the potential of deep learning networks in filling large missing region (e.g., as high as 95% missing) and generating realistic faces with high-fidelity in cross domains. We propose the recursive generation by bidirectional transformation networks (rBTN) that recursively generates a whole face/sketch from a small sketch/face patch. The large missing area and the cross domain challenge make it difficult to generate satisfactory results using a unidirectional cross-domain learning structure. On the other hand, a forward and backward bidirectional learning between the face and sketch domains would enable recursive estimation of the missing region in an incremental manner (Fig. 1) and yield appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces/sketches. 
Extensive experiments have been conducted to demonstrate the superior performance from r-BTN as compared to existing potential solutions.", "title": "" }, { "docid": "ad538b97c24c2c812b123be92e0c5d19", "text": "Interstitial deletions affecting the long arm of chromosome 3 have been associated with a broad phenotype. This has included the features of blepharophimosis-ptosis-epicanthus inversus syndrome, Dandy-Walker malformation, and the rare Wisconsin syndrome. The authors report a young female patient presenting with features consistent with all 3 of these syndromes. This has occurred in the context of a de novo 3q22.3q24 microdeletion including FOXL2, ZIC1, and ZIC4. This patient provides further evidence for the role of ZIC1 and ZIC4 in Dandy-Walker malformation and is the third reported case of Dandy-Walker malformation to have associated corpus callosum thinning. This patient is also only the seventh to be reported with the rare Wisconsin syndrome phenotype.", "title": "" }, { "docid": "d1072bc9960fc3697416c9d982ed5a9c", "text": "We compared face identification by humans and machines using images taken under a variety of uncontrolled illumination conditions in both indoor and outdoor settings. Natural variations in a person's day-to-day appearance (e.g., hair style, facial expression, hats, glasses, etc.) contributed to the difficulty of the task. Both humans and machines matched the identity of people (same or different) in pairs of frontal view face images. The degree of difficulty introduced by photometric and appearance-based variability was estimated using a face recognition algorithm created by fusing three top-performing algorithms from a recent international competition. The algorithm computed similarity scores for a constant set of same-identity and different-identity pairings from multiple images. Image pairs were assigned to good, moderate, and poor accuracy groups by ranking the similarity scores for each identity pairing, and dividing these rankings into three strata. This procedure isolated the role of photometric variables from the effects of the distinctiveness of particular identities. Algorithm performance for these constant identity pairings varied dramatically across the groups. In a series of experiments, humans matched image pairs from the good, moderate, and poor conditions, rating the likelihood that the images were of the same person (1: sure same - 5: sure different). Algorithms were more accurate than humans in the good and moderate conditions, but were comparable to humans in the poor accuracy condition. To date, these are the most variable illumination- and appearance-based recognition conditions on which humans and machines have been compared. The finding that machines were never less accurate than humans on these challenging frontal images suggests that face recognition systems may be ready for applications with comparable difficulty. We speculate that the superiority of algorithms over humans in the less challenging conditions may be due to the algorithms' use of detailed, view-specific identity information. Humans may consider this information less important due to its limited potential for robust generalization in suboptimal viewing conditions.", "title": "" } ]
scidocsrr
0e078e18998edeacaff6a61369a98571
Cyberbullying or Cyber Aggression?: A Review of Existing Definitions of Cyber-Based Peer-to-Peer Aggression
[ { "docid": "64bdb5647b7b05c96de8c0d8f6f00eed", "text": "Cyberbullying is a reality of the digital age. To address this phenomenon, it becomes imperative to understand exactly what cyberbullying is. Thus, establishing a workable and theoretically sound definition is essential. This article contributes to the existing literature in relation to the definition of cyberbullying. The specific elements of repetition, power imbalance, intention, and aggression, regarded as essential criteria of traditional face-to-face bullying, are considered in the cyber context. It is posited that the core bullying elements retain their importance and applicability in relation to cyberbullying. The element of repetition is in need of redefining, given the public nature of material in the online environment. In this article, a clear distinction between direct and indirect cyberbullying is made and a model definition of cyberbullying is offered. Overall, the analysis provided lends insight into how the essential bullying elements have evolved and should apply in our parallel cyber universe.", "title": "" }, { "docid": "117f529b96afc67e1a9ba3058f83049f", "text": "Data from 53 focus groups, which involved students from 10 to 18 years old, show that youngsters often interpret \"cyberbullying\" as \"Internet bullying\" and associate the phenomenon with a wide range of practices. In order to be considered \"true\" cyberbullying, these practices must meet several criteria. They should be intended to hurt (by the perpetrator) and perceived as hurtful (by the victim); be part of a repetitive pattern of negative offline or online actions; and be performed in a relationship characterized by a power imbalance (based on \"real-life\" power criteria, such as physical strength or age, and/or on ICT-related criteria such as technological know-how and anonymity).", "title": "" }, { "docid": "056944e9e568d69d5caa707d03353f62", "text": "Cyberbullying has emerged as a new form of antisocial behaviour in the context of online communication over the last decade. The present study investigates potential longitudinal risk factors for cyberbullying. A total of 835 Swiss seventh graders participated in a short-term longitudinal study (two assessments 6 months apart). Students reported on the frequency of cyberbullying, traditional bullying, rule-breaking behaviour, cybervictirnisation, traditional victirnisation, and frequency of online communication (interpersonal characteristics). In addition, we assessed moral disengagement, empathic concern, and global self-esteem (intrapersonal characteristics). Results showed that traditional bullying, rule-breaking behaviour, and frequency of online communication are longitudinal risk factors for involvement in cyberbullying as a bully. Thus, cyberbullying is strongly linked to real-world antisocial behaviours. Frequent online communication may be seen as an exposure factor that increases the likelihood of engaging in cyberbullying. In contrast, experiences of victimisation and intrapersonal characteristics were not found to increase the longitudinal risk for cyberbullying over and above antisocial behaviour and frequency of online communication. Implications of the findings for the prevention of cyberbullying are discussed. Copyright © 2012 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "fd8f5dc4264464cd8f978872d58aaf19", "text": "OBJECTIVES\nTo determine the capacity of black soldier fly larvae (BSFL) (Hermetia illucens) to convert fresh human faeces into larval biomass under different feeding regimes, and to determine how effective BSFL are as a means of human faecal waste management.\n\n\nMETHODS\nBlack soldier fly larvae were fed fresh human faeces. The frequency of feeding, number of larvae and feeding ratio were altered to determine their effects on larval growth, prepupal weight, waste reduction, bioconversion and feed conversion rate (FCR).\n\n\nRESULTS\nThe larvae that were fed a single lump amount of faeces developed into significantly larger larvae and prepupae than those fed incrementally every 2 days; however, the development into pre-pupae took longer. The highest waste reduction was found in the group containing the most larvae, with no difference between feeding regimes. At an estimated 90% pupation rate, the highest bioconversion (16-22%) and lowest, most efficient FCR (2.0-3.3) occurred in groups that contained 10 and 100 larvae, when fed both the lump amount and incremental regime.\n\n\nCONCLUSION\nThe prepupal weight, bioconversion and FCR results surpass those from previous studies into BSFL management of swine, chicken manure and municipal organic waste. This suggests that the use of BSFL could provide a solution to the health problems associated with poor sanitation and inadequate human waste management in developing countries.", "title": "" }, { "docid": "59e3e0099e215000b34e32d90b0bd650", "text": "We present a method for learning discriminative filters using a shallow Convolutional Neural Network (CNN). We encode rotation invariance directly in the model by tying the weights of groups of filters to several rotated versions of the canonical filter in the group. These filters can be used to extract rotation invariant features well-suited for image classification. We test this learning procedure on a texture classification benchmark, where the orientations of the training images differ from those of the test images. We obtain results comparable to the state-of-the-art. Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing by an order of magnitude the number of parameters to be learned.", "title": "" }, { "docid": "35de54ee9d3d4c117cf4c1d8fc4f4e87", "text": "On the purpose of managing process models to make them more practical and effective in enterprises, a construction of BPMN-based Business Process Model Base is proposed. Considering Business Process Modeling Notation (BPMN) is used as a standard of process modeling, based on BPMN, the process model transformation is given, and business blueprint modularization management methodology is used for process management. Therefore, BPMN-based Business Process Model Base provides a solution of business process modeling standardization, management and execution so as to enhance the business process reuse.", "title": "" }, { "docid": "0add9f22db24859da50e1a64d14017b9", "text": "Light field imaging offers powerful new capabilities through sophisticated digital processing techniques that are tightly merged with unconventional optical designs. This combination of imaging technology and computation necessitates a fundamentally different view of the optical properties of imaging systems and poses new challenges for the traditional signal and image processing domains. 
In this article, we aim to provide a comprehensive review of the considerations involved and the difficulties encountered in working with light field data.", "title": "" }, { "docid": "bd07c789a76efd51cc78f9828d045329", "text": "BACKGROUND\nProphylaxis for venous thromboembolism is recommended for at least 10 days after total knee arthroplasty; oral regimens could enable shorter hospital stays. We aimed to test the efficacy and safety of oral rivaroxaban for the prevention of venous thromboembolism after total knee arthroplasty.\n\n\nMETHODS\nIn a randomised, double-blind, phase III study, 3148 patients undergoing knee arthroplasty received either oral rivaroxaban 10 mg once daily, beginning 6-8 h after surgery, or subcutaneous enoxaparin 30 mg every 12 h, starting 12-24 h after surgery. Patients had mandatory bilateral venography between days 11 and 15. The primary efficacy outcome was the composite of any deep-vein thrombosis, non-fatal pulmonary embolism, or death from any cause up to day 17 after surgery. Efficacy was assessed as non-inferiority of rivaroxaban compared with enoxaparin in the per-protocol population (absolute non-inferiority limit -4%); if non-inferiority was shown, we assessed whether rivaroxaban had superior efficacy in the modified intention-to-treat population. The primary safety outcome was major bleeding. This trial is registered with ClinicalTrials.gov, number NCT00362232.\n\n\nFINDINGS\nThe primary efficacy outcome occurred in 67 (6.9%) of 965 patients given rivaroxaban and in 97 (10.1%) of 959 given enoxaparin (absolute risk reduction 3.19%, 95% CI 0.71-5.67; p=0.0118). Ten (0.7%) of 1526 patients given rivaroxaban and four (0.3%) of 1508 given enoxaparin had major bleeding (p=0.1096).\n\n\nINTERPRETATION\nOral rivaroxaban 10 mg once daily for 10-14 days was significantly superior to subcutaneous enoxaparin 30 mg given every 12 h for the prevention of venous thromboembolism after total knee arthroplasty.\n\n\nFUNDING\nBayer Schering Pharma AG, Johnson & Johnson Pharmaceutical Research & Development.", "title": "" }, { "docid": "db8f1de1961f4730e6fc40881f4d0641", "text": "Non-thrombotic pulmonary embolism has recently been reported as a remote complication of filler injections to correct hollowing in the temporal region. The middle temporal vein (MTV) has been identified as being highly susceptible to accidental injection. The anatomy and tributaries of the MTV were investigated in six soft embalmed cadavers. The MTV was cannulated and injected in both anterograde and retrograde directions in ten additional cadavers using saline and black filler, respectively. The course and tributaries of the MTV were described. Regarding the infusion experiment, manual injection of saline was easily infused into the MTV toward the internal jugular vein, resulting in continuous flow of saline drainage. This revealed a direct channel from the MTV to the internal jugular vein. Assessment of a preventive maneuver during filler injections was effectively performed by pressing at the preauricular venous confluent point against the zygomatic process. Sudden retardation of saline flow from the drainage tube situated in the internal jugular vein was observed when the preauricular confluent point was compressed. Injection of black gel filler into the MTV and the tributaries through the cannulated tube directed toward the eye proved difficult. The mechanism of venous filler emboli in a clinical setting occurs when the MTV is accidentally cannulated. 
The filler emboli follow the anterograde venous blood stream to the pulmonary artery causing non-thrombotic pulmonary embolism. Pressing of the pretragal confluent point is strongly recommended during temporal injection to help prevent filler complications, but does not totally eliminate complication occurrence. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .", "title": "" }, { "docid": "b3e183d0e260ff14d82d8c5f65aa808a", "text": "Ulnar nerve entrapment across the elbow (UAE), a common entrapment, requires neurophysiological evaluation for a diagnosis, but a standardized neurophysiological classification is not available. The aim of our study was to evaluate the validity of a neurophysiological classification of UAE, developed by us. To this end, we examined whether sensorimotor deficits, as observed by the physician and as referred by the patients, increased with the neurophysiological severity according to the classification. We performed a multiperspective assessment of 63 consecutive arms from 52 patients with a clinical diagnosis of UAE. Neurophysiological, clinical and patient-oriented validated measurements were used. The neurophysiological classification is based on the presence or absence of evoked responses and on the normality or abnormality of conduction findings. A strict relationship was observed between the degree of neurophysiological severity and the clinical findings (sensorimotor deficits). Moreover, a significant positive correlation between hand functional deficit and neurophysiological classification was observed. Conversely, a clear correlation between neurophysiological pattern and symptoms was not found. The neurophysiological classification is easy to use and reliable, but further multicentric studies should be performed.", "title": "" }, { "docid": "07c8719c4b8be9e02d14cd24c6e4e05c", "text": "Sentiment and emotional analysis on online collaborative software development forums can be very useful to gain important insights into the behaviors and personalities of the developers. Such information can later on be used to increase productivity of developers by making recommendations on how to behave best in order to get a task accomplished. However, due to the highly technical nature of the data present in online collaborative software development forums, mining sentiments and emotions becomes a very challenging task. In this work we present a new approach for mining sentiments and emotions from software development datasets using Interaction Process Analysis(IPA) labels and machine learning. We also apply distance metric learning as a preprocessing step before training a feed forward neural network and report the precision, recall, F1 and accuracy.", "title": "" }, { "docid": "bae3d6ffee5380ea6352b8b384667d76", "text": "A flexible transparent modify dipole antenna printed on PET film is presented in this paper. The proposed antenna was designed to operate at 2.4GHz for ISM applications. The impedance characteristic and the radiation characteristic were simulated and measured. The proposed antenna has good performance. 
It can be easily mounted on conformal shape, because it is fabricated on PET film having the flexible characteristic.", "title": "" }, { "docid": "8519ab2692f07cc4d7fa8443591c4729", "text": "We discuss methodology for multidimensional scaling (MDS) and its implementation in two software systems, GGvis and XGvis. MDS is a visualization technique for proximity data, that is, data in the form of N × N dissimilarity matrices. MDS constructs maps (“configurations,” “embeddings”) in IRk by interpreting the dissimilarities as distances. Two frequent sources of dissimilarities are high-dimensional data and graphs. When the dissimilarities are distances between high-dimensional objects, MDS acts as a (often nonlinear) dimension-reduction technique. When the dissimilarities are shortest-path distances in a graph, MDS acts as a graph layout technique. MDS has found recent attention in machine learning motivated by image databases (“Isomap”). MDS is also of interest in view of the popularity of “kernelizing” approaches inspired by Support Vector Machines (SVMs; “kernel PCA”). This article discusses the following general topics: (1) the stability and multiplicity of MDS solutions; (2) the analysis of structure within and between subsets of objects with missing value schemes in dissimilarity matrices; (3) gradient descent for optimizing general MDS loss functions (“Strain” and “Stress”); (4) a unification of classical (Strain-based) and distance (Stress-based) MDS. Particular topics include the following: (1) blending of automatic optimization with interactive displacement of configuration points to assist in the search for global optima; (2) forming groups of objects with interactive brushing to create patterned missing values in MDS loss functions; (3) optimizing MDS loss functions for large numbers of objects relative to a small set of anchor points (“external unfolding”); and (4) a nonmetric version of classical MDS.", "title": "" }, { "docid": "8eb161e363d55631148ed3478496bbd5", "text": "This paper proposes a new power-factor-correction (PFC) topology, and explains its operation principle, its control mechanism, related application problems followed by experimental results. In this proposed topology, critical-conduction-mode (CRM) interleaved technique is applied to a bridgeless PFC in order to achieve high efficiency by combining benefits of each topology. This application is targeted toward low to middle power applications that normally employs continuous-conductionmode boost converter. key words: PFC, Interleaved, critical-conduction-mode, totem-pole", "title": "" }, { "docid": "de1db4e54fb686f2b597936aa551cd14", "text": "Trustworthy software requires strong privacy and security guarantees from a secure trust base in hardware. While chipmakers provide hardware support for basic security and privacy primitives such as enclaves and memory encryption. these primitives do not address hiding of the memory access pattern, information about which may enable attacks on the system or reveal characteristics of sensitive user data. State-of-the-art approaches to protecting the access pattern are largely based on Oblivious RAM (ORAM). 
Unfortunately, current ORAM implementations suffer from very significant practicality and overhead concerns, including roughly an order of magnitude slowdown, more than 100% memory capacity overheads, and the potential for system deadlock.\n Memory technology trends are moving towards 3D and 2.5D integration, enabling significant logic capabilities and sophisticated memory interfaces. Leveraging the trends, we propose a new approach to access pattern obfuscation, called ObfusMem. ObfusMem adds the memory to the trusted computing base and incorporates cryptographic engines within the memory. ObfusMem encrypts commands and addresses on the memory bus, hence the access pattern is cryptographically obfuscated from external observers. Our evaluation shows that ObfusMem incurs an overhead of 10.9% on average, which is about an order of magnitude faster than ORAM implementations. Furthermore, ObfusMem does not incur capacity overheads and does not amplify writes. We analyze and compare the security protections provided by ObfusMem and ORAM, and highlight their differences.", "title": "" }, { "docid": "e4c23ebf305f9f1a3e3d016b6f22e683", "text": "Accurate detection of the human metaphase chromosome centromere is a critical element of cytogenetic diagnostic techniques, including chromosome enumeration, karyotyping and radiation biodosimetry. Existing centromere detection methods tends to perform poorly in the presence of irregular boundaries, shape variations and premature sister chromatid separation. We present a centromere detection algorithm that uses a novel contour partitioning technique to generate centromere candidates followed by a machine learning approach to select the best candidate that enhances the detection accuracy. The contour partitioning technique evaluates various combinations of salient points along the chromosome boundary using a novel feature set and is able to identify telomere regions as well as detect and correct for sister chromatid separation. This partitioning is used to generate a set of centromere candidates which are then evaluated based on a second set of proposed features. The proposed algorithm outperforms previously published algorithms and is shown to do so with a larger set of chromosome images. A highlight of the proposed algorithm is the ability to rank this set of centromere candidates and create a centromere confidence metric which may be used in post-detection analysis. When tested with a larger metaphase chromosome database consisting of 1400 chromosomes collected from 40 metaphase cell images, the proposed algorithm was able to accurately localize 1220 centromere locations yielding a detection accuracy of 87%.", "title": "" }, { "docid": "d86ed46cf03298129055a7a734c0ef3c", "text": "Photosynthetic CO2 uptake rate and early growth parameters of radish Raphanus sativus L. seedlings exposed to an extremely low frequency magnetic field (ELF MF) were investigated. Radish seedlings were exposed to a 60 Hz, 50 microT(rms) (root mean square) sinusoidal magnetic field (MF) and a parallel 48 microT static MF for 6 or 15 d immediately after germination. Control seedlings were exposed to the ambient MF but not the ELF MF. The CO2 uptake rate of ELF MF exposed seedlings on day 5 and later was lower than that of the control seedlings. 
The dry weight and the cotyledon area of ELF MF exposed seedlings on day 6 and the fresh weight, the dry weight and the leaf area of ELF MF exposed seedlings on day 15 were significantly lower than those of the control seedlings, respectively. In another experiment, radish seedlings were grown without ELF MF exposure for 14 d immediately after germination, and then exposed to the ELF MF for about 2 h, and the photosynthetic CO2 uptake rate was measured during the short-term ELF MF exposure. The CO2 uptake rate of the same seedlings was subsequently measured in the ambient MF (control) without the ELF MF. There was no difference in the CO2 uptake rate of seedlings exposed to the ELF MF or the ambient MF. These results indicate that continuous exposure to 60 Hz, 50 microT(rms) sinusoidal MF with a parallel 48 microT static MF affects the early growth of radish seedlings, but the effect is not so severe that modification of photosynthetic CO2 uptake can observed during short-term MF exposure.", "title": "" }, { "docid": "f398eee40f39acd2c2955287ccbb4924", "text": "One of the ultimate goals of natural language processing (NLP) systems is understanding the meaning of what is being transmitted, irrespective of the medium (e.g., written versus spoken) or the form (e.g., static documents versus dynamic dialogues). Although much work has been done in traditional language domains such as speech and static written text, little has yet been done in the newer communication domains enabled by the Internet, e.g., online chat and instant messaging. This is in part due to the fact that there are no annotated chat corpora available to the broader research community. The purpose of this research is to build a chat corpus, tagged with lexical (token part-of-speech labels), syntactic (post parse tree), and discourse (post classification) information. Such a corpus can then be used to develop more complex, statistical-based NLP applications that perform tasks such as author profiling, entity identification, and social network analysis.", "title": "" }, { "docid": "edbf9ed3377e31d53b7f633a5bfe3ebe", "text": "INTRODUCTION\nAnchorage control in patients with severe skeletal Class II malocclusion is a difficult problem in orthodontic treatment. In adults, treatment often requires premolar extractions and maximum anchorage. Recently, incisor retraction with miniscrew anchorage has become a new strategy for treating skeletal Class II patients.\n\n\nMETHODS\nIn this study, we compared treatment outcomes of patients with severe skeletal Class II malocclusion treated using miniscrew anchorage (n = 11) or traditional orthodontic mechanics of headgear and transpalatal arch (n = 11). Pretreatment and posttreatment lateral cephalograms were analyzed.\n\n\nRESULTS\nBoth treatment methods, miniscrew anchorage or headgear, achieved acceptable results as indicated by the reduction of overjet and the improvement of facial profile. However, incisor retraction with miniscrew anchorage did not require patient cooperation to reinforce the anchorage and provided more significant improvement of the facial profile than traditional anchorage mechanics (headgear combined with transpalatal arch).\n\n\nCONCLUSIONS\nOrthodontic treatment with miniscrew anchorage is simpler and more useful than that with traditional anchorage mechanics for patients with Class II malocclusion.", "title": "" }, { "docid": "c4e11f7bbb252b18910a64c0145edec2", "text": "Cluster analysis represents one of the most versatile methods in statistical science. 
It is employed in empirical sciences for the summarization of datasets into groups of similar objects, with the purpose of facilitating the interpretation and further analysis of the data. Cluster analysis is of particular importance in the exploratory investigation of data of high complexity, such as that derived from molecular biology or image databases. Consequently, recent work in the field of cluster analysis, including the work presented in this thesis, has focused on designing algorithms that can provide meaningful solutions for data with high cardinality and/or dimensionality, under the natural restriction of limited resources. In the first part of the thesis, a novel algorithm for the clustering of large, highdimensional datasets is presented. The developed method is based on the principles of projection pursuit and grid partitioning, and focuses on reducing computational requirements for large datasets without loss of performance. To achieve that, the algorithm relies on procedures such as sampling of objects, feature selection, and quick density estimation using histograms. The algorithm searches for low-density points in potentially favorable one-dimensional projections, and partitions the data by a hyperplane passing through the best split point found. Tests on synthetic and reference data indicated that the proposed method can quickly and efficiently recover clusters that are distinguishable from the remaining objects on at least one direction; linearly non-separable clusters were usually subdivided. In addition, the clustering solution was proved to be robust in the presence of noise in moderate levels, and when the clusters are partially overlapping. In the second part of the thesis, a novel method for generating synthetic datasets with variable structure and clustering difficulty is presented. The developed algorithm can construct clusters with different sizes, shapes, and orientations, consisting of objects sampled from different probability distributions. In addition, some of the clusters can have multimodal distributions, curvilinear shapes, or they can be defined only in restricted subsets of dimensions. The clusters are distributed within the data space using a greedy geometrical procedure, with the overall degree of cluster overlap adjusted by scaling the clusters. Evaluation tests indicated that the proposed approach is highly effective in prescribing the cluster overlap. Furthermore, it can be extended to allow for the production of datasets containing non-overlapping clusters with defined degrees of separation. In the third part of the thesis, a novel system for the semi-supervised annotation of images is described and evaluated. The system is based on a visual vocabulary of prototype visual features, which is constructed through the clustering of visual features extracted from training images with accurate textual annotations. Consequently, each training image is associated with the visual words representing its detected features. In addition, each such image is associated with the concepts extracted from the linked textual data. These two sets of associations are combined into a direct linkage scheme between textual concepts and visual words, thus constructing an automatic image classifier that can annotate new images with text-based concepts using only their visual features. 
As an initial application, the developed method was successfully employed in a person classification task.", "title": "" }, { "docid": "85e3992ff97ae284218cf47dcb57abec", "text": "Software has been part of modern society for more than 50 years. There are several software development methodologies in use today. Some companies have their own customized methodology for developing their software, but the majority speak of two kinds of methodologies: heavyweight and lightweight. Heavyweight methodologies, also considered the traditional way to develop software, emphasize comprehensive planning, detailed documentation, and expansive design. The lightweight methodologies, also known as agile modeling, have gained significant attention from the software engineering community in the last few years. Unlike traditional methods, agile methodologies employ short iterative cycles, and rely on tacit knowledge within a team as opposed to documentation. In this dissertation, I have described the characteristics of some traditional and agile methodologies that are widely used in software development. I have also discussed the strengths and weaknesses of the two opposing methodologies and presented the challenges associated with implementing agile processes in the software industry. Anecdotal evidence is accumulating regarding the effectiveness of agile methodologies in certain environments, but there has not been much collection and analysis of empirical evidence for agile projects. To support my dissertation, I conducted a questionnaire, soliciting feedback from software industry practitioners to evaluate which methodology has a better success rate for different sizes of software development. According to our findings, agile methodologies can provide good benefits for small- and medium-scale projects, but for large-scale projects traditional methods seem dominant.", "title": "" }, { "docid": "ad4596e24f157653a36201767d4b4f3b", "text": "We present a character-based model for joint segmentation and POS tagging for Chinese. The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets of different sizes, genres and annotation schemes. We obtain state-of-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging.", "title": "" }, { "docid": "db190bb0cf83071b6e19c43201f92610", "text": "In this paper, a MATLAB based simulation of a grid-connected PV system is presented. The main components of this simulation are a PV solar panel, a boost converter, a Maximum Power Point Tracking (MPPT) system, and a grid-connected PV inverter with a closed-loop control system, which are designed and simulated. Simulation studies are carried out at different solar radiation levels.", "title": "" } ]
scidocsrr
7657435a1c6dc611e145a36de5c837fd
Positioning the role of the enterprise architect: An independent study in a mobile telecommunications organisation
[ { "docid": "b6ba25674169b3ffd51b908547405c8c", "text": "Enterprise Architecture (EA) is increasingly being used by large organizations to get a grip on the complexity of their business processes, information systems and technical infrastructure. Although seen as an important instrument to help solve major organizational problems, effectively applying EA seems no easy task. Active participation of EA stakeholders is one of the main critical success factors for EA. This participation depends on the degree to which EA helps stakeholders achieve their individual goals. A", "title": "" } ]
[ { "docid": "68487f024611acabdf6ea15b3a527c6a", "text": "GEODETIC data, obtained by ground- or space-based techniques, can be used to infer the distribution of slip on a fault that has ruptured in an earthquake. Although most geodetic techniques require a surveyed network to be in place before the earthquake1–3, satellite images, when collected at regular intervals, can capture co-seismic displacements without advance knowledge of the earthquake's location. Synthetic aperture radar (SAR) interferometry, first introduced4 in 1974 for topographic mapping5–8 can also be used to detect changes in the ground surface, by removing the signal from the topography9,10. Here we use SAR interferometry to capture the movements produced by the 1992 earthquake in Landers, California11. We construct an interferogram by combining topographic information with SAR images obtained by the ERS-1 satellite before and after the earthquake. The observed changes in range from the ground surface to the satellite agree well with the slip measured in the field, with the displacements measured by surveying, and with the results of an elastic dislocation model. As a geodetic tool, the SAR interferogram provides a denser spatial sampling (100 m per pixel) than surveying methods1–3 and a better precision (∼3 cm) than previous space imaging techniques12,13.", "title": "" }, { "docid": "0f25cfa80ee503aa5012772ac54fb7a3", "text": "Parameter reduction has been an important topic in deep learning due to the everincreasing size of deep neural network models and the need to train and run them on resource limited machines. Despite many efforts in this area, there were no rigorous theoretical guarantees on why existing neural net compression methods should work. In this paper, we provide provable guarantees on some hashing-based parameter reduction methods in neural nets. First, we introduce a neural net compression scheme based on random linear sketching (which is usually implemented efficiently via hashing), and show that the sketched (smaller) network is able to approximate the original network on all input data coming from any smooth and wellconditioned low-dimensional manifold. The sketched network can also be trained directly via back-propagation. Next, we study the previously proposed HashedNets architecture and show that the optimization landscape of one-hidden-layer HashedNets has a local strong convexity property similar to a normal fully connected neural network. We complement our theoretical results with empirical verifications.", "title": "" }, { "docid": "ff7db3cca724a06c594a525b1f229024", "text": "At the heart of emotion, mood, and any other emotionally charged event are states experienced as simply feeling good or bad, energized or enervated. These states--called core affect--influence reflexes, perception, cognition, and behavior and are influenced by many causes internal and external, but people have no direct access to these causal connections. Core affect can therefore be experienced as free-floating (mood) or can be attributed to some cause (and thereby begin an emotional episode). 
These basic processes spawn a broad framework that includes perception of the core-affect-altering properties of stimuli, motives, empathy, emotional meta-experience, and affect versus emotion regulation; it accounts for prototypical emotional episodes, such as fear and anger, as core affect attributed to something plus various nonemotional processes.", "title": "" }, { "docid": "5e94e30719ac09e86aaa50d9ab4ad57b", "text": "Blogs, regularly updated online journals, allow people to quickly and easily create and share online content. Most bloggers write about their everyday lives and generally have a small audience of regular readers. Readers interact with bloggers by contributing comments in response to specific blog posts. Moreover, readers of blogs are often bloggers themselves and acknowledge their favorite blogs by adding them to their blogrolls or linking to them in their posts. This paper presents a study of bloggers’ online and real life relationships in three blog communities: Kuwait Blogs, Dallas/Fort Worth Blogs, and United Arab Emirates Blogs. Through a comparative analysis of the social network structures created by blogrolls and blog comments, we find different characteristics for different kinds of links. Our online survey of the three communities reveals that few of the blogging interactions reflect close offline relationships, and moreover that many online relationships were formed through blogging.", "title": "" }, { "docid": "b2c60198f29f734e000dd67cb6bdd08a", "text": "OBJECTIVE\nTo assess adolescents' perceptions about factors influencing their food choices and eating behaviors.\n\n\nDESIGN\nData were collected in focus-group discussions.\n\n\nSUBJECTS/SETTING\nThe study population included 141 adolescents in 7th and 10th grade from 2 urban schools in St Paul, Minn, who participated in 21 focus groups.\n\n\nANALYSIS\nData were analyzed using qualitative research methodology, specifically, the constant comparative method.\n\n\nRESULTS\nFactors perceived as influencing food choices included hunger and food cravings, appeal of food, time considerations of adolescents and parents, convenience of food, food availability, parental influence on eating behaviors (including the culture or religion of the family), benefits of foods (including health), situation-specific factors, mood, body image, habit, cost, media, and vegetarian beliefs. Major barriers to eating more fruits, vegetables, and dairy products and eating fewer high-fat foods included a lack of sense of urgency about personal health in relation to other concerns, and taste preferences for other foods. 
Suggestions for helping adolescents eat a more healthful diet include making healthful food taste and look better, limiting the availability of unhealthful options, making healthful food more available and convenient, teaching children good eating habits at an early age, and changing social norms to make it \"cool\" to eat healthfully.\n\n\nAPPLICATIONS/CONCLUSIONS\nThe findings suggest that if programs to improve adolescent nutrition are to be effective, they need to address a broad range of factors, in particular environmental factors (e.g., the increased availability and promotion of appealing, convenient foods within homes schools, and restaurants).", "title": "" }, { "docid": "c6bfdc5c039de4e25bb5a72ec2350223", "text": "Free-energy-based reinforcement learning (FERL) can handle Markov decision processes (MDPs) with high-dimensional state spaces by approximating the state-action value function with the negative equilibrium free energy of a restricted Boltzmann machine (RBM). In this study, we extend the FERL framework to handle partially observable MDPs (POMDPs) by incorporating a recurrent neural network that learns a memory representation sufficient for predicting future observations and rewards. We demonstrate that the proposed method successfully solves POMDPs with high-dimensional observations without any prior knowledge of the environmental hidden states and dynamics. After learning, task structures are implicitly represented in the distributed activation patterns of hidden nodes of the RBM.", "title": "" }, { "docid": "9cb033c92c06f804118381f61dd884f9", "text": "Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute a mean and variance which are then used to normalize the summed input to that neuron on each training case. This significantly reduces the training time in feedforward neural networks. However, the effect of batch normalization is dependent on the mini-batch size and it is not obvious how to apply it to recurrent neural networks. In this paper, we transpose batch normalization into layer normalization by computing the mean and variance used for normalization from all of the summed inputs to the neurons in a layer on a single training case. Like batch normalization, we also give each neuron its own adaptive bias and gain which are applied after the normalization but before the non-linearity. Unlike batch normalization, layer normalization performs exactly the same computation at training and test times. It is also straightforward to apply to recurrent neural networks by computing the normalization statistics separately at each time step. Layer normalization is very effective at stabilizing the hidden state dynamics in recurrent networks. Empirically, we show that layer normalization can substantially reduce the training time compared with previously published techniques.", "title": "" }, { "docid": "f5648e3bd38e876b53ee748021e165f2", "text": "The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. 
In this paper, we propose a coarse-to-fine multi-stage prediction framework for image captioning, composed of multiple decoders each of which operates on the output of the previous stage, producing increasingly refined image descriptions. Our proposed learning approach addresses the difficulty of vanishing gradients during training by providing a learning objective function that enforces intermediate supervisions. Particularly, we optimize our model with a reinforcement learning approach which utilizes the output of each intermediate decoder’s test-time inference algorithm as well as the output of its preceding decoder to normalize the rewards, which simultaneously solves the well-known exposure bias problem and the loss-evaluation mismatch problem. We extensively evaluate the proposed approach on MSCOCO and show that our approach can achieve the state-of-the-art performance.", "title": "" }, { "docid": "e9cc899155bd5f88ae1a3d5b88de52af", "text": "This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.", "title": "" }, { "docid": "f8821f651731943ce1652bc8a1d2c0d6", "text": "business units and thus not even practiced in a cohesive, coherent manner. In the worst cases, busy business unit executives trade roving bands of developers like Pokémon cards in a fifth-grade classroom (in an attempt to get ahead). Suffice it to say, none of this is good. The disconnect between security and development has ultimately produced software development efforts that lack any sort of contemporary understanding of technical security risks. Today's complex and highly connected computing environments trigger myriad security concerns, so by blowing off the idea of security entirely, software builders virtually guarantee that their creations will have way too many security weaknesses that could—and should—have been avoided. This article presents some recommendations for solving this problem. Our approach is born out of experience in two diverse fields: software security and information security. Central among our recommendations is the notion of using the knowledge inherent in information security organizations to enhance secure software development efforts. Don't stand so close to me Best practices in software security include a manageable number of simple activities that should be applied throughout any software development process (see Figure 1). These lightweight activities should start at the earliest stages of software development and then continue throughout the development process and into deployment and operations. Although an increasing number of software shops and individual developers are adopting the software security touchpoints we describe here as their own, they often lack the requisite security domain knowledge required to do so. 
This critical knowledge arises from years of observing system intrusions, dealing with malicious hackers, suffering the consequences of software vulnerabilities, and so on. Put in this position, even the best-intended development efforts can fail to take into account real-world attacks previously observed on similar application architectures. Although recent books 1,2 are starting to turn this knowledge gap around, the science of attack is a novel one. Information security staff—in particular, incident handlers and vulnerability/patch specialists—have spent years responding to attacks against real systems and thinking about the vulnerabilities that spawned them. In many cases, they've studied software vulnerabilities and their resulting attack profiles in minute detail. However, few information security professionals are software developers (at least, on a full-time basis), and their solution sets tend to be limited to reactive techniques such as installing software patches, shoring up firewalls, updating intrusion detection signature databases, and the like. It's very rare to find information security …", "title": "" }, { "docid": "1acbb63a43218d216a2e850d9b3d3fa1", "text": "In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage to a BS in one plane has to be compensated by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of both data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly-detecting algorithms, i.e., k-nearest-neighbor- and local-outlier-factor-based anomaly detectors, within the control COD. On the other hand, for data cell COD, we propose a heuristic Grey-prediction-based approach, which can work with the small number of UE in the data cell, by exploiting the fact that the control BS manages UE-data BS connectivity and by receiving a periodic update of the received signal reference power statistic between the UEs and data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane, by adjusting the antenna gain and transmission power of the surrounding BSs in that plane.
The simulation results show that the proposed framework can detect both data and control cell outage and compensate for the detected outage in a reliable manner.", "title": "" }, { "docid": "6eed8af8f6f65583e89cdd44e8d8844b", "text": "Natural language processing (NLP), or the pragmatic research perspective of computational linguistics, has become increasingly powerful due to data availability and various techniques developed in the past decade. This increasing capability makes it possible to capture sentiments more accurately and semantics in a more nuanced way. Naturally, many applications are starting to seek improvements by adopting cutting-edge NLP techniques. Financial forecasting is no exception. As a result, articles that leverage NLP techniques to predict financial markets are fast accumulating, gradually establishing the research field of natural language based financial forecasting (NLFF), or from the application perspective, stock market prediction. This review article clarifies the scope of NLFF research by ordering and structuring techniques and applications from related work. The survey also aims to increase the understanding of progress and hotspots in NLFF, and bring about discussions across many different disciplines.", "title": "" }, { "docid": "a900a7b1b6eff406fa42906ec5a31597", "text": "From wearables to smart appliances, the Internet of Things (IoT) is developing at a rapid pace. The challenge is to find the best fitting solution within a range of different technologies that all may be appropriate at the first sight to realize a specific embedded device. A single tool for measuring power consumption of various wireless technologies and low power modes helps to optimize the development process of modern IoT systems. In this paper, we present an accurate but still cost-effective measurement solution for tracking the highly dynamic power consumption of wireless embedded systems. We extended the conventional measurement of a single shunt resistor's voltage drop by using a dual shunt resistor stage with an automatic switch-over between two stages, which leads to a large dynamic measurement range from μA up to several hundreds mA. To demonstrate the usability of our simple-to-use power measurement system different use cases are presented. Using two independent current measurement channels allows to evaluate the timing relation of proprietary RF communication. Furthermore a forecast is given on the expected battery lifetime of a Wifi-based data acquisition system using measurement results of the presented tool.", "title": "" }, { "docid": "2775a86ccd25e8eb4059991e9bb4d2ad", "text": "The evolution of CRISPR–cas loci, which encode adaptive immune systems in archaea and bacteria, involves rapid changes, in particular numerous rearrangements of the locus architecture and horizontal transfer of complete loci or individual modules. These dynamics complicate straightforward phylogenetic classification, but here we present an approach combining the analysis of signature protein families and features of the architecture of cas loci that unambiguously partitions most CRISPR–cas loci into distinct classes, types and subtypes. The new classification retains the overall structure of the previous version but is expanded to now encompass two classes, five types and 16 subtypes. The relative stability of the classification suggests that the most prevalent variants of CRISPR–Cas systems are already known. 
However, the existence of rare, currently unclassifiable variants implies that additional types and subtypes remain to be characterized.", "title": "" }, { "docid": "06ddb465297deb903dc812607d9d7c95", "text": "Today’s speed of image processing tools as well as the availability of robust techniques for extracting geometric and basic thematic information from image streams makes real-time photogrammetry possible. The paper discusses the basic tools for fully automatic calibration, orientation and surface reconstruction as well as for tracking, ego-motion determination and behaviour analysis. Examples demonstrate today’s potential for future applications.", "title": "" }, { "docid": "fbff176c8731cdb9dcbf354cf72b3148", "text": "Polar code, newly formulated by Erdal Arikan, has gained wide recognition from the information theory community. Polar code achieves the capacity of the class of symmetric binary memoryless channels. In this paper, we propose an efficient hardware architecture on an FPGA platform using Xilinx Virtex VI for implementing the advanced encoding and decoding schemes. The performance of the proposed architecture outperforms the existing techniques, such as the successive cancellation decoder, list successive cancellation, and belief propagation, with respect to bit error rate and resource utilization.", "title": "" }, { "docid": "afce201838e658aac3e18c2f26cff956", "text": "With the current set of design tools and methods available to game designers, vast portions of the space of possible games are not currently reachable. In the past, technological advances such as improved graphics and new controllers have driven the creation of new forms of gameplay, but games have still not made great strides into new gameplay experiences. We argue that the development of innovative artificial intelligence (AI) systems plays a crucial role in the exploration of currently unreachable spaces. To aid in exploration, we suggest a practice called AI-based game design, an iterative design process that deeply integrates the affordances of an AI system within the context of game design. We have applied this process in our own projects, and in this paper we present how it has pushed the boundaries of current game genres and experiences, as well as discuss the future of AI-based game design.", "title": "" }, { "docid": "c1632ead357d08c3e019bb12ff75e756", "text": "Learning the representations of nodes in a network can benefit various analysis tasks such as node classification, link prediction, clustering, and anomaly detection. Such a representation learning problem is referred to as network embedding, and it has attracted significant attention in recent years. In this article, we briefly review the existing network embedding methods by two taxonomies. The technical taxonomy focuses on the specific techniques used and divides the existing network embedding methods into two stages, i.e., context construction and objective design. The non-technical taxonomy focuses on the problem setting aspect and categorizes existing work based on whether to preserve special network properties, to consider special network types, or to incorporate additional inputs. Finally, we summarize the main findings based on the two taxonomies, analyze their usefulness, and discuss future directions in this area.", "title": "" }, { "docid": "c42c3eb5c431fb3e3c588613859c241e", "text": "This paper presents a monitoring and control system for a greenhouse through the Internet of Things (IoT).
The system will monitor the various environmental conditions such as humidity, soil moisture, temperature, presence of fire, etc. If any condition crosses certain limits, a message will be sent to the registered number through a GSM module. The microcontroller will automatically turn on the motor if the soil moisture is less than a particular value. A color sensor will sense the color of the leaves and send a message. The prototype was tested under various combinations of inputs in our laboratory and the experimental results were as expected. Keywords: GSM module, microcontroller, sensors,", "title": "" } ]
scidocsrr
7baf097c4000f7eb2e46405ac91aff69
Neuroprotective potential of silymarin against CNS disorders: insight into the pathways and molecular mechanisms of action.
[ { "docid": "9c7a1ef4f29bbf433fb99f5c160f715c", "text": "Silymarin, a flavonolignan from 'milk thistle' (Silybum marianum) plant is used almost exclusively for hepatoprotection and amounts to 180 million US dollars business in Germany alone. In this review we discuss about its safety, efficacy and future uses in liver diseases. The use of silymarin may replace the polyherbal formulations and will avoid the major problems of standardization, quality control and contamination with heavy metals or bacterial toxins. Silymarin consists of four flavonolignan isomers namely--silybin, isosilybin, silydianin and silychristin. Among them, silybin being the most active and commonly used. Silymarin is orally absorbed and is excreted mainly through bile as sulphates and conjugates. Silymarin offers good protection in various toxic models of experimental liver diseases in laboratory animals. It acts by antioxidative, anti-lipid peroxidative, antifibrotic, anti-inflammatory, membrane stabilizing, immunomodulatory and liver regenerating mechanisms. Silymarin has clinical applications in alcoholic liver diseases, liver cirrhosis, Amanita mushroom poisoning, viral hepatitis, toxic and drug induced liver diseases and in diabetic patients. Though silymarin does not have antiviral properties against hepatitis virus, it promotes protein synthesis, helps in regenerating liver tissue, controls inflammation, enhances glucuronidation and protects against glutathione depletion. Silymarin may prove to be a useful drug for hepatoprotection in hepatobiliary diseases and in hepatotoxicity due to drugs. The non traditional use of silymarin may make a breakthrough as a new approach to protect other organs in addition to liver. As it is having a good safety profile, better patient tolerability and an effective drug at an affordable price, in near future new derivatives or new combinations of this drug may prove to be useful.", "title": "" } ]
[ { "docid": "8844f14e92e2c4aa7df276505af8b7fe", "text": "Tensor completion is a powerful tool used to estimate or recover missing values in multi-way data. It has seen great success in domains such as product recommendation and healthcare. Tensor completion is most often accomplished via low-rank sparse tensor factorization, a computationally expensive non-convex optimization problem which has only recently been studied in the context of parallel computing. In this work, we study three optimization algorithms that have been successfully applied to tensor completion: alternating least squares (ALS), stochastic gradient descent (SGD), and coordinate descent (CCD++). We explore opportunities for parallelism on shared- and distributed-memory systems and address challenges such as memory- and operation-efficiency, load balance, cache locality, and communication. Among our advancements are an SGD algorithm which combines stratification with asynchronous communication, an ALS algorithm rich in level-3 BLAS routines, and a communication-efficient CCD++ algorithm. We evaluate our optimizations on a variety of real datasets using a modern supercomputer and demonstrate speedups through 1024 cores. These improvements effectively reduce time-to-solution from hours to seconds on real-world datasets. We show that after our optimizations, ALS is advantageous on parallel systems of small-to-moderate scale, while both ALS and CCD++ will provide the lowest time-to-solution on large-scale distributed systems.", "title": "" }, { "docid": "55eb8b24baa00c38534ef0020c682fff", "text": "NoSQL databases are designed to manage large volumes of data. Although they do not require a default schema associated with the data, they are categorized by data models. Because of this, data organization in NoSQL databases needs significant design decisions because they affect quality requirements such as scalability, consistency and performance. In traditional database design, on the logical modeling phase, a conceptual schema is transformed into a schema with lower abstraction and suitable to the target database data model. In this context, the contribution of this paper is an approach for logical design of NoSQL document databases. Our approach consists in a process that converts a conceptual modeling into efficient logical representations for a NoSQL document database. Workload information is considered to determine an optimized logical schema, providing a better access performance for the application. We evaluate our approach through a case study in the e-commerce domain and demonstrate that the NoSQL logical structure generated by our approach reduces the amount of items accessed by the application queries.", "title": "" }, { "docid": "59a25ae61a22baa8e20ae1a5d88c4499", "text": "This paper tackles a major privacy threat in current location-based services where users have to report their exact locations to the database server in order to obtain their desired services. For example, a mobile user asking about her nearest restaurant has to report her exact location. With untrusted service providers, reporting private location information may lead to several privacy threats. In this paper, we present a peer-to-peer (P2P)spatial cloaking algorithm in which mobile and stationary users can entertain location-based services without revealing their exact location information. 
The main idea is that before requesting any location-based service, the mobile user will form a group from her peers via single-hop communication and/or multi-hop routing. Then,the spatial cloaked area is computed as the region that covers the entire group of peers. Two modes of operations are supported within the proposed P2P s patial cloaking algorithm, namely, the on-demand mode and the proactive mode. Experimental results show that the P2P spatial cloaking algorithm operated in the on-demand mode has lower communication cost and better quality of services than the proactive mode, but the on-demand incurs longer response time.", "title": "" }, { "docid": "9345f8c567c28caa918417eba901482c", "text": "Friction stir welding (FSW) is a widely used solid state joining process for soft materials such as aluminium alloys because it avoids many of the common problems of fusion welding. Commercial feasibility of the FSW process for harder alloys such as steels and titanium alloys awaits the development of cost effective and durable tools which lead to structurally sound welds consistently. Material selection and design profoundly affect the performance of tools, weld quality and cost. Here we review and critically examine several important aspects of FSW tools such as tool material selection, geometry and load bearing ability, mechanisms of tool degradation and process economics.", "title": "" }, { "docid": "339efad8a055a90b43abebd9a4884baa", "text": "The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. Within this frame, special emphasis is given on the building of web-based virtual learning environments so as to successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach and presentation of learning content, and specific designing rules are presented considering the hypermedia, virtual and educational nature of this kind of applications. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator. Keywords—Distance education, medicine, virtual reality, web.", "title": "" }, { "docid": "c8241a3f73edaff7094e09e3a06fda43", "text": "This paper describes a distributed, linear-time algorithm for localizing sensor network nodes in the presence of range measurement noise and demonstrates the algorithm on a physical network. We introduce the probabilistic notion of robust quadrilaterals as a way to avoid flip ambiguities that otherwise corrupt localization computations. We formulate the localization problem as a two-dimensional graph realization problem: given a planar graph with approximately known edge lengths, recover the Euclidean position of each vertex up to a global rotation and translation. This formulation is applicable to the localization of sensor networks in which each node can estimate the distance to each of its neighbors, but no absolute position reference such as GPS or fixed anchor nodes is available.\n We implemented the algorithm on a physical sensor network and empirically assessed its accuracy and performance. 
Also, in simulation, we demonstrate that the algorithm scales to large networks and handles real-world deployment geometries. Finally, we show how the algorithm supports localization of mobile nodes.", "title": "" }, { "docid": "4e363eb0921ed455fba82cd3db9289da", "text": "Most commercial manufacturers of industrial robots require their robots to be programmed in a proprietary language tailored to the domain – a typical domain-specific language (DSL). However, these languages oftentimes suffer from shortcomings such as controller-specific design, limited expressiveness and a lack of extensibility. For that reason, we developed the extensible Robotics API for programming industrial robots on top of a general-purpose language. Although being a very flexible approach to programming industrial robots, a fully-fledged language can be too complex for simple tasks. Additionally, legacy support for code written in the original DSL has to be maintained. For these reasons, we present a lightweight implementation of a typical robotic DSL, the KUKA Robot Language (KRL), on top of our Robotics API. This work deals with the challenges in reverse-engineering the language and mapping its specifics to the Robotics API. We introduce two different approaches of interpreting and executing KRL programs: tree-based and bytecode-based interpretation.", "title": "" }, { "docid": "4be9fa4277bf0407d09feff8f4c433d0", "text": "This paper tackles the problem of learning a dialog policy from example dialogs - for example, from Wizard-of-Oz style dialogs, where an expert (person) plays the role of the system. Learning in this setting is challenging because dialog is a temporal process in which actions affect the future course of the conversation - i.e., dialog requires planning. Past work solved this problem with either conventional supervised learning or reinforcement learning. Reinforcement learning provides a principled approach to planning, but requires more resources than a fixed corpus of examples, such as a dialog simulator or a reward function. Conventional supervised learning, by contrast, operates directly from example dialogs but does not take proper account of planning. We introduce a new algorithm called Temporal Supervised Learning which learns directly from example dialogs, while also taking proper account of planning. The key idea is to choose the next dialog action to maximize the expected discounted accuracy until the end of the dialog. On a dialog testbed in the calendar domain, in simulation, we show that a dialog manager trained with temporal supervised learning substantially outperforms a baseline trained using conventional supervised learning.", "title": "" }, { "docid": "5ced8b93ad1fb80bb0c5324d34af9269", "text": "This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. 
Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.", "title": "" }, { "docid": "0be273eb8dfec6a6f71a44f38e8207ba", "text": "Clustering is a powerful tool which has been used in several forecasting works, such as time series forecasting, real time storm detection, flood forecasting and so on. In this paper, a generic methodology for weather forecasting is proposed by the help of incremental K-means clustering algorithm. Weather forecasting plays an important role in day to day applications.Weather forecasting of this paper is done based on the incremental air pollution database of west Bengal in the years of 2009 and 2010. This paper generally uses typical Kmeans clustering on the main air pollution database and a list of weather category will be developed based on the maximum mean values of the clusters.Now when the new data are coming, the incremental K-means is used to group those data into those clusters whose weather category has been already defined. Thus it builds up a strategy to predict the weather of the upcoming data of the upcoming days. This forecasting database is totally based on the weather of west Bengal and this forecasting methodology is developed to mitigating the impacts of air pollutions and launch focused modeling computations for prediction and forecasts of weather events. Here accuracy of this approach is also measured.", "title": "" }, { "docid": "dc2f4cbd2c18e4f893750a0a1a40002b", "text": "A microstrip half-grid array antenna (HGA) based on low temperature co-fired ceramic (LTCC) technology is presented in this paper. The antenna is designed for the 77-81 GHz radar frequency band and uses a high permittivity material (εr = 7.3). The traditional single-grid array antenna (SGA) uses two radiating elements in the H-plane. For applications using digital beam forming, the focusing of an SGA in the scanning plane (H-plane) limits the field of view (FoV) of the radar system and the width of the SGA enlarges the minimal spacing between the adjacent channels. To overcome this, an array antenna using only half of the grid as radiating element was designed. As feeding network, a laminated waveguide with a vertically arranged power divider was adopted. For comparison, both an SGA and an HGA were fabricated. 
The measured results show: using an HGA, an HPBW increment in the H-plane can be achieved and their beam patterns in the E-plane remain similar. This compact LTCC antenna is suitable for radar application with a large FoV requirement.", "title": "" }, { "docid": "4fd19f75059fd8ec42cea3e70251d90f", "text": "We report the case of C.L., an 8-year-old child who, following the surgical removal of an ependymoma from the left cerebral ventricle at the age of 4 years, developed significant difficulties in retaining day-to-day events and information. A thorough neuropsychological analysis documented in C.L. a severe anterograde amnesic syndrome, characterised by normal short-term memory, but poor performance on episodic long-term memory tests. In particular, C.L. demonstrated virtually no ability to recollect new verbal information several minutes after the presentation. As for semantic memory, C.L. demonstrated general semantic competencies, which, depending on the test, ranged from the level of a 6-year-old girl to a level corresponding to her actual chronological age. Finding a patient who, despite being severely impaired in the ability to recollect new episodic memories, still demonstrates at least partially preserved abilities to acquire new semantic knowledge suggests that neural circuits implicated in the memorisation of autobiographical events and factual information do not overlap completely. This case is examined in the light of growing literature concerned with the dissociation between episodic and semantic memory in childhood amnesia.", "title": "" }, { "docid": "763983ae894e3b98932233ef0b465164", "text": "In the rapidly developing world of information technology, computers have been used in various settings for clinical medicine application. Studies have focused on computerized physician order entry (CPOE) system interface design and functional development to achieve a successful technology adoption process. Therefore, the purpose of this study was to evaluate physician satisfaction with the CPOE system. This survey included user attitude toward interface design, operation functions/usage effectiveness, interface usability, and user satisfaction. We used questionnaires for data collection from June to August 2008, and 225 valid questionnaires were returned with a response rate of 84.5 %. Canonical correlation was applied to explore the relationship of personal attributes and usability with user satisfaction. The results of the data analysis revealed that certain demographic groups showed higher acceptance and satisfaction levels, especially residents, those with less pressure when using computers or those with less experience with the CPOE systems. Additionally, computer use pressure and usability were the best predictors of user satisfaction. Based on the study results, it is suggested that future CPOE development should focus on interface design and content links, as well as providing educational training programs for the new users; since a learning curve period should be considered as an indespensible factor for CPOE adoption.", "title": "" }, { "docid": "2dd3ca2e8e9bc9b6d9ab6d4e8c9c3974", "text": "With the advancement of data acquisition techniques, tensor (multidimensional data) objects are increasingly accumulated and generated, for example, multichannel electroencephalographies, multiview images, and videos. In these applications, the tensor objects are usually nonnegative, since the physical signals are recorded. 
As the dimensionality of tensor objects is often very high, a dimension reduction technique becomes an important research topic of tensor data. From the perspective of geometry, high-dimensional objects often reside in a low-dimensional submanifold of the ambient space. In this paper, we propose a new approach to perform the dimension reduction for nonnegative tensor objects. Our idea is to use nonnegative Tucker decomposition (NTD) to obtain a set of core tensors of smaller sizes by finding a common set of projection matrices for tensor objects. To preserve geometric information in tensor data, we employ a manifold regularization term for the core tensors constructed in the Tucker decomposition. An algorithm called manifold regularization NTD (MR-NTD) is developed to solve the common projection matrices and core tensors in an alternating least squares manner. The convergence of the proposed algorithm is shown, and the computational complexity of the proposed method scales linearly with respect to the number of tensor objects and the size of the tensor objects, respectively. These theoretical results show that the proposed algorithm can be efficient. Extensive experimental results have been provided to further demonstrate the effectiveness and efficiency of the proposed MR-NTD algorithm.", "title": "" }, { "docid": "6d3924db47747758928c24e13042e875", "text": "BACKGROUND AND OBJECTIVES\nAccidental dural puncture (ADP) during epidural analgesia is a debilitating complication. Symptoms of ADP post-dural puncture headache (PDPH) are headache while rising from supine to upright position, nausea, and neck stiffness. While age, gender and needle characteristics are established risk factors for ADP, little is known about risk factors in laboring women.\n\n\nMETHODS\nAll cases of ADP during epidural analgesia treated with blood-patching during a 3-years period were retrospectively reviewed. Each case was matched to two controls according to delivery period.\n\n\nRESULTS\nForty-nine cases of blood patches after ADP out 17 977 epidural anesthesia procedures were identified (0.27%). No differences were found between cases and controls with regards to body mass index, labor stage at time of epidural, length of second stage, location of epidural along the lumbar vertebrae, anesthesiologist's experience or time when epidural was done. In cases of ADP, significantly lower doses of local anesthetics were injected (10.9 versus 13.5 cc, p < 0.001); anesthesiologists reported significantly more trials of epidurals (70 versus 2.8% more than one trial, p < 0.001), more patient movement during the procedure (13 versus 0%, p < 0.001), more intra-procedure suspicion of ADP (69 versus 0%, p < 0.001) and more cases where CSF/blood was drawn with the syringe (57 versus 2.4%, p < 0.001).\n\n\nCONCLUSION\nADP during labor is a rare but debilitating complication. Risk factors for this iatrogenic complication include patient movement and repeated epidural trials. Intra-procedure identification of ADP is common, allowing early intervention with blood patching where indicated.", "title": "" }, { "docid": "3e18a760083cd3ed169ed8dae36156b9", "text": "n engl j med 368;26 nejm.org june 27, 2013 2445 correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. 
Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is flawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making", "title": "" }, { "docid": "52c99a0230a309d57a996ffbebf95e22", "text": "Recent distributed denial-of-service attacks demonstrate the high vulnerability of Internet of Things (IoT) systems and devices. Addressing this challenge will require scalable security solutions optimized for the IoT ecosystem.", "title": "" }, { "docid": "58bc5fb67cfb5e4b623b724cb4283a17", "text": "In recent years, power systems have been very difficult to manage as load demands increase and environmental constraints restrict the distribution network. Another mode used for the distribution of electrical power is the use of underground cables (generally in urban areas only) instead of an overhead distribution network. The use of underground cables raises the problem of identifying the fault location, since the cable is not open to view as in the case of an overhead network. To improve the reliability of a distribution system, accurate identification of a faulted segment is required in order to reduce the interruption time during a fault. Speedy and precise fault location plays an important role in accelerating system restoration, reducing outage time, reducing financial loss and significantly improving system reliability. The objective of this paper is to study methods of determining the distance of an underground cable fault from the base station in kilometers. Underground cable systems are common practice in major urban areas. When a fault occurs, repairing the affected cable is difficult because the exact location of the fault in the cable is unknown. In this paper, a technique for detecting faults in an underground distribution system is presented. 
The proposed system is used to find the exact location of the fault and to send an SMS with the details to a remote mobile phone using a GSM module.", "title": "" }, { "docid": "3a69d6ef79482d26aee487a964ff797f", "text": "The FPGA compilation process (synthesis, map, placement, routing) is a time-consuming process that limits designer productivity. Compilation time can be reduced by using pre-compiled circuit blocks (hard macros). Hard macros consist of previously synthesized, mapped, placed and routed circuitry that can be relatively placed with short tool runtimes and that make it possible to reuse previous computational effort. Two experiments were performed to demonstrate the feasibility that hard macros can reduce compilation time. These experiments demonstrated that an augmented Xilinx flow designed specifically to support hard macros can reduce overall compilation time by 3x. Though the process of incorporating hard macros in designs is currently manual and error-prone, it can be automated to create compilation flows with much lower compilation time.", "title": "" } ]
scidocsrr
92d3b0224664e42bf84731db7af16336
Discovery of user behavior patterns from geo-tagged micro-blogs
[ { "docid": "8718d91f37d12b8ff7658723a937ea84", "text": "We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and/or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.).\n To monitor road and traffic conditions in such a setting, we present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in normal course. In this paper, we focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio, and/or GPS sensors in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several challenges including virtually reorienting the accelerometer on a phone that is at an arbitrary orientation, and performing honk detection and localization in an energy efficient manner. We also touch upon the idea of triggered sensing, where dissimilar sensors are used in tandem to conserve energy. We evaluate the effectiveness of the sensing functions in Nericell based on experiments conducted on the roads of Bangalore, with promising results.", "title": "" } ]
[ { "docid": "7ec12c0bf639c76393954baae196a941", "text": "Honeynets have now become a standard part of security measures within the organization. Their purpose is to protect critical information systems and information; this is complemented by acquisition of information about the network threats, attackers and attacks. It is very important to consider issues affecting the deployment and usage of the honeypots and honeynets. This paper discusses the legal issues of honeynets considering their generations. Paper focuses on legal issues of core elements of honeynets, especially data control, data capture and data collection. Paper also draws attention on the issues pertaining to privacy and liability. The analysis of legal issues is based on EU law and it is supplemented by a review of the research literature, related to legal aspects of honeypots and honeynets.", "title": "" }, { "docid": "5f516d2453d976d015ae28149892af43", "text": "This two-part study integrates a quantitative review of one year of US newspaper coverage of climate science with a qualitative, comparative analysis of media-created themes and frames using a social constructivist approach. In addition to an examination of newspaper articles, this paper includes a reflexive comparison with attendant wire stories and scientific texts. Special attention is given to articles constructed with and framed by rhetoric emphasising uncertainty, controversy, and climate scepticism. r 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "aad449f4f3669c5bf65222b0e709a479", "text": "In this paper, we empirically explore the effects of various kinds of skip connections in stacked bidirectional LSTMs for sequential tagging. We investigate three kinds of skip connections connecting to LSTM cells: (a) skip connections to the gates, (b) skip connections to the internal states and (c) skip connections to the cell outputs. We present comprehensive experiments showing that skip connections to cell outputs outperform the remaining two. Furthermore, we observe that using gated identity functions as skip mappings works pretty well. Based on this novel skip connections, we successfully train deep stacked bidirectional LSTM models and obtain state-ofthe-art results on CCG supertagging and comparable results on POS tagging.", "title": "" }, { "docid": "ffca07962ddcdfa0d016df8020488b5d", "text": "Differential-drive mobile robots are usually equipped with video-cameras for navigation purposes. In order to ensure proper operational capabilities of such systems, several calibration steps are required to estimate the following quantities: the video-camera intrinsic and extrinsic parameters, the relative pose between the camera and the vehicle frame and, finally, the odometric parameters of the vehicle. In this paper the simultaneous estimation of the above mentioned quantities is achieved by a systematic and effective calibration procedure that does not require any iterative step. The calibration procedure needs only on-board measurements given by the wheels encoders, the camera and a number of properly taken camera snapshots of a set of known landmarks. Numerical simulations and experimental results with a mobile robot Khepera III equipped with a low-cost camera confirm the effectiveness of the proposed technique.", "title": "" }, { "docid": "1fe1513d5b94ad9ca5a9ae4691407ae5", "text": "We constructed a face database PF01(Postech Faces '01). 
PF01 contains the true-color face images of 103 people, 53 men and 50 women, representing 17 various images (1 normal face, 4 illumination variations, 8 pose variations, 4 expression variations) per person. All of the people in the database are Asians. There are three kinds of systematic variations, such as illumination, pose, and expression variations in the database. The database is expected to be used to evaluate the technology of face recognition for Asian people or for people with systematic variations.", "title": "" }, { "docid": "d2f2137602149b5062f60e7325d3610f", "text": "Recently a revision of the cell theory has been proposed, which has several implications both for physiology and pathology. This revision is founded on adapting the old Julius von Sach’s proposal (1892) of the Energide as the fundamental universal unit of eukaryotic life. This view maintains that, in most instances, the living unit is the symbiotic assemblage of the cell periphery complex organized around the plasma membrane, some peripheral semi-autonomous cytosol organelles (as mitochondria and plastids, which may be or not be present), and of the Energide (formed by the nucleus, microtubules, and other satellite structures). A fundamental aspect is the proposal that the Energide plays a pivotal and organizing role of the entire symbiotic assemblage (see Appendix 1). The present paper discusses how the Energide paradigm implies a revision of the concept of the internal milieu. As a matter of fact, the Energide interacts with the cytoplasm that, in turn, interacts with the interstitial fluid, and hence with the medium that has been, classically, known as the internal milieu. Some implications of this aspect have been also presented with the help of a computational model in a mathematical Appendix 2 to the paper. Finally, relevances of the Energide concept for the information handling in the central nervous system are discussed especially in relation to the inter-Energide exchange of information.", "title": "" }, { "docid": "86da740a4eab22a9914c72376ae4e9e3", "text": "A new system for automatic detection of angry speech is proposed. Using simulation of far-end-noise-corrupted telephone speech and the widely used Berlin database of emotional speech, autoregressive prediction of features across speech frames is shown to contribute significantly to both the clean speech performance and the robustness of the system. The autoregressive models are learned from the training data in order to capture long-term temporal dynamics of the features. Additionally, linear predictive spectrum analysis outperforms conventional Fourier spectrum analysis in terms of robustness in the computation of mel-frequency cepstral coefficients in the feature extraction stage.", "title": "" }, { "docid": "9b1e1e91b8aacd1ed5d1aee823de7fd3", "text": "—This paper presents a novel adaptive algorithm to detect the center of pupil in frontal view faces. This algorithm, at first, employs the viola-Jones face detector to find the approximate location of face in an image. The knowledge of the face structure is exploited to detect the eye region. The histogram of the detected region is calculated and its CDF is employed to extract the eyelids and iris region in an adaptive way. The center of this region is considered as the pupil center. 
The experimental results show ninety-one percent accuracy in detecting the pupil center.", "title": "" }, { "docid": "a53b01068794a32e5a68f493db4978e7", "text": "The programming language Alphard is designed to provide support for both the methodologies of “well-structured” programming and the techniques of formal program verification. Language constructs allow a programmer to isolate an abstraction, specifying its behavior publicly while localizing knowledge about its implementation. The verification of such an abstraction consists of showing that its implementation behaves in accordance with its public specifications; the abstraction can then be used with confidence in constructing other programs, and the verification of that use employs only the public specifications.\n This paper introduces Alphard by developing and verifying a data structure definition and a program that uses it. It shows how each language construct contributes to the development of the abstraction and discusses the way the language design and the verification methodology were tailored to each other. It serves not only as an introduction to Alphard, but also as an example of the symbiosis between verification and methodology in language design. The strategy of program structuring, illustrated for Alphard, is also applicable to most of the “data abstraction” mechanisms now appearing.", "title": "" }, { "docid": "ea278850f00c703bdd73957c3f7a71ce", "text": "In this paper, we consider the directional multigigabit (DMG) transmission problem in IEEE 802.11ad wireless local area networks (WLANs) and design a random-access-based medium access control (MAC) layer protocol incorporated with a directional antenna and cooperative communication techniques. A directional cooperative MAC protocol, namely, D-CoopMAC, is proposed to coordinate the uplink channel access among DMG stations (STAs) that operate in an IEEE 802.11ad WLAN. Using a 3-D Markov chain model with consideration of the directional hidden terminal problem, we develop a framework to analyze the performance of the D-CoopMAC protocol and derive a closed-form expression of saturated system throughput. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of D-CoopMAC varies with the number of DMG STAs or beam sectors. In addition, the D-CoopMAC protocol can significantly improve system performance, as compared with the traditional IEEE 802.11ad MAC protocol.", "title": "" }, { "docid": "eece6349d77b415115fa6afbbbd85190", "text": "BACKGROUND\nAcute appendicitis is the most common cause of acute abdomen. Approximately 7% of the population will be affected by this condition during their lifetime. The development of the AIR score may contribute to diagnosis by associating simple clinical criteria with two simple laboratory tests.\n\n\nAIM\nTo evaluate the AIR score (Appendicitis Inflammatory Response score) as a tool for the diagnosis and prediction of severity of acute appendicitis.\n\n\nMETHOD\nAll patients undergoing surgical appendectomy were evaluated. From 273 patients, 126 were excluded due to exclusion criteria. 
All patients were submitted to the AIR score.\n\n\nRESULTS\nThe value of the C-reactive protein and the percentage of segmented leukocytes in the blood count showed a direct relationship with the phase of acute appendicitis.\n\n\nCONCLUSION\nAs for the laboratory criteria, serum C-reactive protein and assessment of the percentage of polymorphonuclear leukocytes in the blood count were important for diagnosis and disease stratification.", "title": "" }, { "docid": "b7cc4a094988643e65d80d4989276d98", "text": "In this paper, we describe the design and layout of an automotive radar sensor demonstrator for 77 GHz with a SiGe chipset and a fully parallel receiver architecture which is capable of digital beamforming and superresolution direction of arrival estimation methods in azimuth. Additionally, we show measurement results of this radar sensor mounted on a test vehicle.", "title": "" }, { "docid": "f1af43d599df760cbb6985367a2e5a20", "text": "A processing pipeline to align, merge, denoise, and enhance low-light images taken on iPhone ’burst’ mode was proposed and tested. Patch-based and corner-based registered images were found to perform better than a single reference. Low-light illumination mapping, dehazing, and exposure fusion further enhanced results.", "title": "" }, { "docid": "12b8dac3e97181eb8ca9c0406f2fa456", "text": "INTRODUCTION\nThis paper discusses some of the issues and challenges of implementing an appropriate and coordinated District Health Management Information System (DHMIS) in environments dependent on external support, especially when insufficient attention has been given to the sustainability of systems. It also discusses fundamental issues which affect the usability of a DHMIS to support the District Health System (DHS), including meeting user needs and user education in the use of information for management, and the need for integration of data from all health-providing and related organizations in the district.\n\n\nMETHODS\nThis descriptive cross-sectional study was carried out in three DHSs in Kenya. Data was collected through the use of questionnaires, focus group discussions and review of relevant literature, reports and operational manuals of the studied DHMISs.\n\n\nRESULTS\nKey personnel at the DHS level were not involved in the development and implementation of the established systems. The DHMISs were fragmented to the extent that their information products were bypassing the very levels they were created to serve. None of the DHMISs was computerized. Key resources for DHMIS operation were inadequate. The adequacy of personnel was 47%, working space 40%, storage space 34%, and stationery 20%; 73% of DHMIS staff were not trained, and management support was 13%. Information produced was 30% accurate, 19% complete, 26% timely, 72% relevant; the level of confidentiality and use of information at the point of collection stood at 32% and 22% respectively and information security at 48%. Basic DHMIS equipment for information processing was not available. This inhibited effective and efficient provision of information services.\n\n\nCONCLUSIONS\nAn effective DHMIS is essential for DHS planning, implementation, monitoring and evaluation activities. Without accurate, timely, relevant and complete information the existing information systems are not capable of facilitating the DHS managers in their day-to-day operational management. The existing DHMISs were found not to be supportive of the DHS managers' strategic and operational management functions. 
Consequently DHMISs were found to be plagued by numerous designs, operational, resources and managerial problems. There is an urgent need to explore the possibilities of computerizing the existing manual systems to take advantage of the potential uses of microcomputers for DHMIS operations within the DHS. Information system designers must also address issues of cooperative partnership in information activities, systems compatibility and sustainability.", "title": "" }, { "docid": "759bb2448f1d34d3742fec38f273135e", "text": "Although below-knee prostheses have been commercially available for some time, today's devices are completely passive, and consequently, their mechanical properties remain fixed with walking speed and terrain. A lack of understanding of the ankle-foot biomechanics and the dynamic interaction between an amputee and a prosthesis is one of the main obstacles in the development of a biomimetic ankle-foot prosthesis. In this paper, we present a novel ankle-foot emulator system for the study of human walking biomechanics. The emulator system is comprised of a high performance, force-controllable, robotic ankle-foot worn by an amputee interfaced to a mobile computing unit secured around his waist. We show that the system is capable of mimicking normal ankle-foot walking behaviour. An initial pilot study supports the hypothesis that the emulator may provide a more natural gait than a conventional passive prosthesis", "title": "" }, { "docid": "cecf6608b4ce67487dff08fad0d23245", "text": "Cryptographic Schemes based on Elliptic Curve Pairings: Contributions to Public Key Cryptography and Key Agreement Protocols This thesis introduces the concept of certificateless public key cryptography (CLPKC). Elliptic curve pairings are then used to make concrete CL-PKC schemes and are also used to make other efficient key agreement protocols. CL-PKC can be viewed as a model for the use of public key cryptography that is intermediate between traditional certificated PKC and ID-PKC. This is because, in contrast to traditional public key cryptographic systems, CL-PKC does not require the use of certificates to guarantee the authenticity of public keys. It does rely on the use of a trusted authority (TA) who is in possession of a master key. In this respect, CL-PKC is similar to identity-based public key cryptography (ID-PKC). On the other hand, CL-PKC does not suffer from the key escrow property that is inherent in ID-PKC. Applications for the new infrastructure are discussed. We exemplify how CL-PKC schemes can be constructed by constructing several certificateless public key encryption schemes and modifying other existing ID based schemes. The lack of certificates and the desire to prove the schemes secure in the presence of an adversary who has access to the master key or has the ability to replace public keys, requires the careful development of new security models. We prove that some of our schemes are secure, provided that the Bilinear Diffie-Hellman Problem is hard. We then examine Joux’s protocol [90], which is a one round, tripartite key agreement protocol that is more bandwidth-efficient than any previous three-party key agreement protocol, however, Joux’s protocol is insecure, suffering from a simple man-in-the-middle attack. We shows how to make Joux’s protocol secure, presenting several tripartite, authenticated key agreement protocols that still require only one round of communication. The security properties of the new protocols are studied. 
Applications for the protocols are also discussed.", "title": "" }, { "docid": "3473e863d335725776281fe2082b756f", "text": "Visual tracking using multiple features has been proved as a robust approach because features could complement each other. Since different types of variations such as illumination, occlusion, and pose may occur in a video sequence, especially long sequence videos, how to properly select and fuse appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method dynamically removes unreliable features to be fused for tracking by using the advantages of sparse representation. In order to capture the non-linear similarity of features, we extend the proposed method into a general kernelized framework, which is able to perform feature fusion on various kernel spaces. As a result, robust tracking performance is obtained. Both the qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion based-trackers.", "title": "" }, { "docid": "dd7d17c7f36f74ea79832f9426dc936d", "text": "In the context of the emerging Semantic Web and the quest for a common logical framework underpinning its architecture, the relation of rule-based languages such as Answer Set Programming (ASP) and ontology languages such as OWL has attracted a lot of attention in the literature over the past years. With its roots in Deductive Databases and Datalog though, ASP shares much more commonality with another Semantic Web standard, namely the query language SPARQL. In this paper, we take the recent approval of the SPARQL1.1 standard by the World Wide Web consortium (W3C) as an opportunity to introduce this standard to the Logic Programming community by providing a translation of SPARQL1.1 into ASP. In this translation, we explain and highlight peculiarities of the new W3C standard. Along the way, we survey existing literature on foundations of SPARQL and SPARQL1.1, and also combinations of SPARQL with ontology and rules languages. Thereby, apart from providing means to implement and support SPARQL natively within Logic Programming engines and particularly ASP engines, we hope to pave the way for further research on a common logical framework for Semantic Web languages, including query languages, from an ASP point of view. 1Vienna University of Economics and Business (WU Wien), Welthandelsplatz 1, 1020 Vienna, Austria E-mail: [email protected] 2Institute for Information Systems 184/2, Technische Universität Wien, Favoritenstrasse 9-11, 1040 Vienna, Austria. E-mail: [email protected] A journal version of this article has been published in JANCL. Please cite as: A. Polleres and J.P. Wallner. On the relation between SPARQL1.1 and Answer Set Programming. Journal of Applied Non-Classical Logics (JANCL), 23(1-2):159-212, 2013. Special issue on Equilibrium Logic and Answer Set Programming. Copyright c © 2014 by the authors TECHNICAL REPORT DBAI-TR-2013-84 2", "title": "" }, { "docid": "c1b397f87922db9205fba59ce5ec593c", "text": "Alterations in the hypocretin receptor 2 and preprohypocretin genes produce narcolepsy in animal models. 
Hypocretin was undetectable in seven out of nine people with narcolepsy, indicating abnormal hypocretin transmission.", "title": "" }, { "docid": "a8c1db92c2c59afa179c26bdda58f3f6", "text": "Autonomous vehicles for consumer use on public roadways are an active area of research and development by various academic and industry groups. One approach to this problem, taken by Google for example, involves creating a 3-D high-resolution map of the desired route, so the vehicle has full awareness of all static features of the route. When autonomously driving this route, the vehicle employs a high precision LIDAR sensor to generate a 3-D point cloud of its surroundings, which is then used to localize the vehicle on the high-resolution map and detect dynamic objects, such as other cars and pedestrians. This approach has proven to be very successful, but it relies on mapping all desired routes before hand, it requires an expensive LIDAR sensor, and performance suffers when the mapped environment changes, due to construction or snow for example. An alternative approach would be to gather all necessary inputs to complete the autonomous driving task on the fly. However, this requires more sensing capability and sensors that are robust to changes in the environment. An approach taken by many auto manufacturers is to limit autonomous driving to highway only, as this provides a structured environment that reduces the sensing requirement. Additionally, vision sensors are preferred over LIDAR due to cost and maintenance. Current, vision systems for vehicles provide lane estimates but performance can degrade due to poor quality lane marks, difficult lighting conditions, and poor road conditions. In this project we use a deep learning based lane detection algorithm to identify lanes from a vehicle mounted vision sensor. This lane information is then used to localize the vehicle onto a lane level map with a particle filter.", "title": "" } ]
scidocsrr
9ad9b89ff7166204386cd5e63669be8e
Analogical Learning and Reasoning
[ { "docid": "b92484f67bf2d3f71d51aee9fb7abc86", "text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.", "title": "" }, { "docid": "e39cafd4de135ccb17f7cf74cbd38a97", "text": "A central question in metaphor research is how metaphors establish mappings between concepts from different domains. The authors propose an evolutionary path based on structure-mapping theory. This hypothesis--the career of metaphor--postulates a shift in mode of mapping from comparison to categorization as metaphors are conventionalized. Moreover, as demonstrated by 3 experiments, this processing shift is reflected in the very language that people use to make figurative assertions. The career of metaphor hypothesis offers a unified theoretical framework that can resolve the debate between comparison and categorization models of metaphor. This account further suggests that whether metaphors are processed directly or indirectly, and whether they operate at the level of individual concepts or entire conceptual domains, will depend both on their degree of conventionality and on their linguistic form.", "title": "" }, { "docid": "277bdeccc25baa31ba222ff80a341ef2", "text": "Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problemsolving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.", "title": "" } ]
[ { "docid": "83a968fcd2d77de796a8161b6dead9bc", "text": "We introduce a deep learning-based method to generate full 3D hair geometry from an unconstrained image. Our method can recover local strand details and has real-time performance. State-of-the-art hair modeling techniques rely on large hairstyle collections for nearest neighbor retrieval and then perform ad-hoc refinement. Our deep learning approach, in contrast, is highly efficient in storage and can run 1000 times faster while generating hair with 30K strands. The convolutional neural network takes the 2D orientation field of a hair image as input and generates strand features that are evenly distributed on the parameterized 2D scalp. We introduce a collision loss to synthesize more plausible hairstyles, and the visibility of each strand is also used as a weight term to improve the reconstruction accuracy. The encoder-decoder architecture of our network naturally provides a compact and continuous representation for hairstyles, which allows us to interpolate naturally between hairstyles. We use a large set of rendered synthetic hair models to train our network. Our method scales to real images because an intermediate 2D orientation field, automatically calculated from the real image, factors out the difference between synthetic and real hairs. We demonstrate the effectiveness and robustness of our method on a wide range of challenging real Internet pictures, and show reconstructed hair sequences from videos.", "title": "" }, { "docid": "d985c547cd57a25a6724f369da8aa1dd", "text": "DEFINITION A majority of today’s data is constantly evolving and fundam entally distributed in nature. Data for almost any large-sc ale data-management task is continuously collected over a wide area, and at a much greater rate than ever before. Compared to t aditional, centralized stream processing, querying such la rge-scale, evolving data collections poses new challenges , due mainly to the physical distribution of the streaming data and the co mmunication constraints of the underlying network. Distri buted stream processing algorithms should guarantee efficiency n ot o ly in terms ofspaceand processing time(as conventional streaming techniques), but also in terms of the communication loadimposed on the network infrastructure.", "title": "" }, { "docid": "672f86e965ef3b18caa926f2d130931c", "text": "Although we already have many theory-based definitions and procedural descriptions of problem-based learning (PBL), we currently lack anything that could serve as a practical standard, that is an account of the critical practices that make an instructional activity recognizable as PBL. I argue here that the notion of inquiry developed in the writings of the American educational philosopher John Dewey could be useful in illuminating the features of observed interaction that would be relevant to a description of instructional practice. An example is provided based on a segment of recorded interaction in a tutorial group in a problem-based curriculum at a U.S. medical school. Within this segment, a conflict emerges among the students with respect to their planned handling of a case. Through their discussion, the students determine what they would need to know in order to resolve the conflict, or in Dewey’s words to make an \"indeterminate situation determinate.\" The paper calls for additional work to produce a large corpus of fine-grained descriptions of instructional practice from a variety of PBL implementations. 
Such a collection would provide a basis for the eventual development of a PBL standard.", "title": "" }, { "docid": "e85a0f0edaf18c1f5cd5b6fdbbd464b0", "text": "This paper focuses on the challenging problem of 3D pose estimation of a diverse spectrum of articulated objects from single depth images. A novel structured prediction approach is considered, where 3D poses are represented as skeletal models that naturally operate on manifolds. Given an input depth image, the problem of predicting the most proper articulation of underlying skeletal model is thus formulated as sequentially searching for the optimal skeletal configuration. This is subsequently addressed by convolutional neural nets trained end-to-end to render sequential prediction of the joint locations as regressing a set of tangent vectors of the underlying manifolds. Our approach is examined on various articulated objects including human hand, mouse, and fish benchmark datasets. Empirically it is shown to deliver highly competitive performance with respect to the state-of-the-arts, while operating in real-time (over 30 FPS).", "title": "" }, { "docid": "78454419cd378a8f6d4417e4063835f5", "text": "We present and evaluate a method for automatically detecting sentence fragments in English texts written by non-native speakers. Our method combines syntactic parse tree patterns and parts-of-speech information produced by a tagger to detect this phenomenon. When evaluated on a corpus of authentic learner texts, our best model achieved a precision of 0.84 and a recall of 0.62, a statistically significant improvement over baselines using non-parse features, as well as a popular grammar checker.", "title": "" }, { "docid": "1071d0c189f9220ba59acfca06c5addb", "text": "A 1.6 Gb/s receiver for optical communication has been designed and fabricated in a 0.25-/spl mu/m CMOS process. This receiver has no transimpedance amplifier and uses the parasitic capacitor of the flip-chip bonded photodetector as an integrating element and resolves the data with a double-sampling technique. A simple feedback loop adjusts a bias current to the average optical signal, which essentially \"AC couples\" the input. The resulting receiver resolves an 11 /spl mu/A input, dissipates 3 mW of power, occupies 80 /spl mu/m/spl times/50 /spl mu/m of area and operates at over 1.6 Gb/s.", "title": "" }, { "docid": "45be2fbf427a3ea954a61cfd5150db90", "text": "Linguistic style conveys the social context in which communication occurs and defines particular ways of using language to engage with the audiences to which the text is accessible. In this work, we are interested in the task of stylistic transfer in natural language generation (NLG) systems, which could have applications in the dissemination of knowledge across styles, automatic summarization and author obfuscation. The main challenges in this task involve the lack of parallel training data and the difficulty in using stylistic features to control generation. To address these challenges, we plan to investigate neural network approaches to NLG to automatically learn and incorporate stylistic features in the process of language generation. We identify several evaluation criteria, and propose manual and automatic evaluation approaches.", "title": "" }, { "docid": "3bd5e7005df1f3afbbc70b101708720f", "text": "Lactose is the main carbohydrate in human and mammalian milk. Lactose requires enzymatic hydrolysis by lactase into D-glucose and D-galactose before it can be absorbed. 
Term infants express sufficient lactase to digest about one liter of breast milk daily. Physiological lactose malabsorption in infancy confers beneficial prebiotic effects, including the establishment of Bifidobacterium-rich fecal microbiota. In many populations, lactase levels decline after weaning (lactase non-persistence; LNP). LNP affects about 70% of the world's population and is the physiological basis for primary lactose intolerance (LI). Persistence of lactase beyond infancy is linked to several single nucleotide polymorphisms in the lactase gene promoter region on chromosome 2. Primary LI generally does not manifest clinically before 5 years of age. LI in young children is typically caused by underlying gut conditions, such as viral gastroenteritis, giardiasis, cow's milk enteropathy, celiac disease or Crohn's disease. Therefore, LI in childhood is mostly transient and improves with resolution of the underlying pathology. There is ongoing confusion between LI and cow's milk allergy (CMA) which still leads to misdiagnosis and inappropriate dietary management. In addition, perceived LI may cause unnecessary milk restriction and adverse nutritional outcomes. The treatment of LI involves the reduction, but not complete elimination, of lactose-containing foods. By contrast, breastfed infants with suspected CMA should undergo a trial of a strict cow's milk protein-free maternal elimination diet. If the infant is not breastfed, an extensively hydrolyzed or amino acid-based formula and strict cow's milk avoidance are the standard treatment for CMA. The majority of infants with CMA can tolerate lactose, except when an enteropathy with secondary lactase deficiency is present.", "title": "" }, { "docid": "492b99428b8c0b4a5921c78518fece50", "text": "Over the past few decades, significant progress has been made in clustering high-dimensional data sets distributed around a collection of linear and affine subspaces. This article presented a review of such progress, which included a number of existing subspace clustering algorithms together with an experimental evaluation on the motion segmentation and face clustering problems in computer vision.", "title": "" }, { "docid": "7bf0b158d9fa4e62b38b6757887c13ed", "text": "Examinations are the most crucial section of any educational system. They are intended to measure student's knowledge, skills and aptitude. At any institute, a great deal of manual effort is required to plan and arrange examination. It includes making seating arrangement for students as well as supervision duty chart for invigilators. Many institutes performs this task manually using excel sheets. This results in excessive wastage of time and manpower. Automating the entire system can help solve the stated problem efficiently saving a lot of time. This paper presents the automatic exam seating allocation. It works in two modules First as, Students Seating Arrangement (SSA) and second as, Supervision Duties Allocation (SDA). It assigns the classrooms and the duties to the teachers in any institution. An input-output data is obtained from the real system which is found out manually by the organizers who set up the seating arrangement and chalk out the supervision duties. The results obtained using the real system and these two models are compared. 
The application shows that the modules are highly efficient, low-cost, and can be widely used in various colleges and universities.", "title": "" }, { "docid": "1847cce79f842a7d01f1f65721c1f007", "text": "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNN, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.", "title": "" }, { "docid": "439320f5c33c5058b927c93a6445caa6", "text": "Dynamic MR image reconstruction from incomplete k-space data has generated great research interest due to its capability in reducing scan time. Nevertheless, the reconstruction problem is still challenging due to its ill-posed nature. Most existing methods either suffer from long iterative reconstruction time or explore limited prior knowledge. This paper proposes a dynamic MR imaging method with both k-space and spatial prior knowledge integrated via multi-supervised network training, dubbed as DIMENSION. Specifically, the DIMENSION architecture consists of a frequential prior network for updating the k-space with its network prediction and a spatial prior network for capturing image structures and details. Furthermore, a multisupervised network training technique is developed to constrain the frequency domain information and reconstruction results at different levels. The comparisons with classical k-t FOCUSS, k-t SLR, L+S and the state-of-the-art CNN-based method on in vivo datasets show our method can achieve improved reconstruction results in shorter time.", "title": "" }, { "docid": "c01a25190ac617d90506632b64df886b", "text": "Adaptive user-interfaces (AUIs) can enhance the usability of complex software by providing real-time contextual adaptation and assistance. Ideally, AUIs should be personalized and versatile, i.e., able to adapt to each user who may perform a variety of complex tasks. But this is difficult to achieve with many interaction elements when data-per-user is sparse. In this paper, we propose an architecture for personalized AUIs that leverages upon developments in (1) deep learning, particularly gated recurrent units, to efficiently learn user interaction patterns, (2) collaborative filtering techniques that enable sharing of data among users, and (3) fast approximate nearest-neighbor methods in Euclidean spaces for quick UI control and/or content recommendations. Specifically, interaction histories are embedded in a learned space along with users and interaction elements; this allows the AUI to query and recommend likely next actions based on similar usage patterns across the user base. In a comparative evaluation on user-interface, web-browsing and e-learning datasets, the deep recurrent neural-network (DRNN) outperforms state-of-the-art tensor-factorization and metric embedding methods.", "title": "" }, { "docid": "1690778a3ccfa6d0bf93a848a19e57e3", "text": "F a l l 1. Frau H., Hausfrau, 45 Jahre. Mutter yon 4 Kindern. Lungentuberkulose I. 
Grades; Tabes dorsalis. S t a t u s beim Eintr i t t : Kleine Frau yon mittlerem Ernii, hrungszustand. Der Thorax ist schleeht entwickelt; Supraund Infraklavikulargruben sind beiderseits tier eingesunken. Die Briiste sind klein, schlaff und h~ngen herunter. Die Mammillen sind sehr stark entwickelt. I. R i i n t g e n b i l d vom 15. 6. 1921. Dorso-ventrale Aufnahme. Es zeigt uns einen schmalen, schlecht entwickelten Thorax. Die I. C. R. sind auf der 1. Seite schm~ler als r. L i n k s : ])er Hilus zeigt einige kleine Schatten, yon denen aus feine Strange nach oben und nach unten verlaufen. Abw~rts neben dem Herzschatten zieht ein derberer Strang. Auf der V. Rippe vorn, ziemlich genau in der Mitre zwischen Wirbelsi~ule und lateraler Thoraxwand findet sich ein fast kreisAbb. 1. runder Schatten yon 1,1 cm Durchmesser. Der Schatten iiberragt die R/~nder der V. Rippe nicht. Um diesen Schatten herum verl~uft ein ca. 1 mm breiter hellerer ringfSrmiger Streifen, auf den nach aul~en der Rippenschatten folgt. Zwei Querfinger unterhalb dieses kleinen Schattens ist der untere Rand der Mamma deutlich sichtbar. ]:)as H e r z ist nach beiden Seiten verbreitert . R e c h t s : Die Spitze ist leicht abgeschattet, der YIilus ausgepr~gter als 1. Naeh unten ziehen einige feine Str/~nge. Im Schatten der V. Rippe vorn finder sieh wie 1. ungef~hr in dvr Mitre zwischen Wirbelsiiule und lateraler Thoraxwand ein dem linksseitigen Schatten entsprechender vollkommen kreisrunder Fleck mit dem ])urchmesser 1,2 cm, der die Rippenr/s nicht iiberragt. Um ihn herum zieht sich ein hellerer Ring, auf den nach aullen der Rippenschatten folgt. Der untere Rand der r. Mamma ist deutlich. W~hrend der 1. Schatten gleichm~flig erscheint, findet sich im Schatten r. in der Mitte eine etwas hellere Partie (Abb. 1).", "title": "" }, { "docid": "7220e44cff27a0c402a8f39f95ca425d", "text": "The Argument Web is maturing as both a platform built upon a synthesis of many contemporary theories of argumentation in philosophy and also as an ecosystem in which various applications and application components are contributed by different research groups around the world. It already hosts the largest publicly accessible corpora of argumentation and has the largest number of interoperable and cross compatible tools for the analysis, navigation and evaluation of arguments across a broad range of domains, languages and activity types. Such interoperability is key in allowing innovative combinations of tool and data reuse that can further catalyse the development of the field of computational argumentation. The aim of this paper is to summarise the key foundations, the recent advances and the goals of the Argument Web, with a particular focus on demonstrating the relevance to, and roots in, philosophical argumentation theory.", "title": "" }, { "docid": "eb8fd891a197e5a028f1ca5eaf3988a3", "text": "Information-centric networking (ICN) replaces the widely used host-centric networking paradigm in communication networks (e.g., Internet and mobile ad hoc networks) with an information-centric paradigm, which prioritizes the delivery of named content, oblivious of the contents’ origin. Content and client security, provenance, and identity privacy are intrinsic by design in the ICN paradigm as opposed to the current host centric paradigm where they have been instrumented as an after-thought. However, given its nascency, the ICN paradigm has several open security and privacy concerns. 
In this paper, we survey the existing literature in security and privacy in ICN and present open questions. More specifically, we explore three broad areas: 1) security threats; 2) privacy risks; and 3) access control enforcement mechanisms. We present the underlying principle of the existing works, discuss the drawbacks of the proposed approaches, and explore potential future research directions. In security, we review attack scenarios, such as denial of service, cache pollution, and content poisoning. In privacy, we discuss user privacy and anonymity, name and signature privacy, and content privacy. ICN’s feature of ubiquitous caching introduces a major challenge for access control enforcement that requires special attention. We review existing access control mechanisms including encryption-based, attribute-based, session-based, and proxy re-encryption-based access control schemes. We conclude the survey with lessons learned and scope for future work.", "title": "" }, { "docid": "1197f02fb0a7e19c3c03c1454704668d", "text": "Exercise 1 Regression and Widrow-Hoff learning Make a function: rline[slope_,intercept_] to generate pairs of random numbers {x,y} where x ranges between 0 and 10, and whose y coordinate is a straight line with slope, slope_ and intercept, intercept_ but perturbed by additive uniform random noise over the range -2 to 2. Generate a data set from rline with 200 samples with slope 11 and intercept 0. Use the function Fit[] to find the slope and intercept of this data set. Here is an example of how it works:", "title": "" }, { "docid": "449bc62a2a92b87019b114ad6d592c02", "text": "A phase-locked clock and data recovery circuit incorporates a multiphase LC oscillator and a quarter-rate bang-bang phase detector. The oscillator is based on differential excitation of a closed-loop transmission line at evenly spaced points, providing half-quadrature phases. The phase detector employs eight flip-flops to sample the input every 12.5 ps, detecting data transitions while retiming and demultiplexing the data into four 10-Gb/s outputs. Fabricated in 0.18m CMOS technology, the circuit produces a clock jitter of 0.9 psrms and 9.67 pspp with a PRBS of2 1 while consuming 144 mW from a 2-V supply.", "title": "" }, { "docid": "144d1ad172d5dd2ca7b3fc93a83b5942", "text": "This paper extends the recently introduced approach to the modeling and control design in the framework of model predictive control of the dc-dc boost converter to the dc-dc parallel interleaved boost converter. Based on the converter's model a constrained optimal control problem is formulated and solved. This allows the controller to achieve (a) the regulation of the output voltage to a predefined reference value, despite changes in the input voltage and the load, and (b) the load current balancing to the converter's individual legs, by regulating the currents of the circuit's inductors to proper references, set by an outer loop based on an observer. Simulation results are provided to illustrate the merits of the proposed control scheme.", "title": "" } ]
scidocsrr
ecc14d8c9b80a0d95c1fed7affcc6a1d
How online social interactions influence customer information contribution behavior in online social shopping communities: A social learning theory perspective
[ { "docid": "c57cbe432fdab3f415d2c923bea905ff", "text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.", "title": "" } ]
[ { "docid": "b92252ac701b564f17aa36d411f65ecf", "text": "Abstract Image segmentation is a primary step in image analysis used to separate the input image into meaningful regions. MRI is an advanced medical imaging technique widely used in detecting brain tumors. Segmentation of Brain MR image is a complex task. Among the many approaches developed for the segmentation of MR images, a popular method is fuzzy C-mean (FCM). In the proposed method, Artificial Bee Colony (ABC) algorithm is used to improve the efficiency of FCM on abnormal brain images.", "title": "" }, { "docid": "97decda9a345d39e814e19818eebe8b8", "text": "In this review article, we present some challenges and opportunities in Ambient Assisted Living (AAL) for disabled and elderly people addressing various state of the art and recent approaches particularly in artificial intelligence, biomedical engineering, and body sensor networking.", "title": "" }, { "docid": "f0a77d7d6fbae0b701be5e8a869552b1", "text": "The ‘RNA world’ hypothesis describes an early stage of life on Earth, prior to the evolution of coded protein synthesis, in which RNA served as both information carrier and functional catalyst1,2. Not only is there a significant body of evidence to support this hypothesis3, but the ‘ribo-organisms’ from this RNA world are likely to have achieved a significant degree of metabolic sophistication4. From the perspective of the origins of life, the path from pre-life chemistry to the RNA world probably included cycles of template-directed RNA replication, with the RNA templates assembled from prebiotically generated ribonucleotides (Fig. 1)5. RNA seems well suited for the task of replication because its components pair in a complementary fashion. One strand of nucleic acid could thereby serve as a template to direct the polymerization of its complementary strand. Nevertheless, even given abundant ribonucleotides and prebiotically generated RNA templates, significant problems are believed to stand in the way of an experimental demonstration of multiple cycles of RNA replication. For example, non-enzymatic RNA template-copying reactions generate complementary strands that contain 2ʹ,5ʹ-phosphodiester linkages randomly distributed amongst the 3ʹ,5ʹ-linkages (Fig. 2a)6 rather than the solely 3ʹ,5ʹ-linkages found in contemporary biology. This heterogeneity has been generally presumed to preclude the evolution of heritable RNA with functional properties. A second problem with the RNA replication cycle concerns the high ‘melting temperatures’ required to separate the strands of the RNA duplexes formed by template copying7. For example, 14mer RNA duplexes with a high proportion of guanine–cytosine pairs can have melting temperatures well above 90 °C. Such stability would prohibit strand separation and thereby halt progression to the next generation of replication. Yet another difficulty results from the hydrolysis and cyclization reactions of chemically activated mononucleotide and oligonucleotide substrates7, which would deactivate them for template copying. Together, these and other issues have precluded the demonstration of chemically driven RNA replication in the laboratory. Now, two new studies reported in Nature Chemistry — one from the Szostak laboratory and the other from the Sutherland group — offer potential solutions to these problems8,9. 
Szostak and co-workers have challenged the assumption that backbone homogeneity was a requirement for the primordial RNA replication process, and considered whether RNAs that contain significant levels of 2ʹ,5ʹ-linkages can be tolerated within known ribozymes and aptamers8. They synthesized two well-known functional RNAs — the flavin mononucleotide (FMN) aptamer and hammerhead ribozyme — containing varying amounts (10–50%) of randomly distributed 2ʹ,5ʹ-linkages. Overall, FMN aptamers and hammerhead ribozymes possessing high levels of", "title": "" }, { "docid": "18cf88b01ff2b20d17590d7b703a41cb", "text": "Human age provides key demographic information. It is also considered as an important soft biometric trait for human identification or search. Compared to other pattern recognition problems (e.g., object classification, scene categorization), age estimation is much more challenging since the difference between facial images with age variations can be more subtle and the process of aging varies greatly among different individuals. In this work, we investigate deep learning techniques for age estimation based on the convolutional neural network (CNN). A new framework for age feature extraction based on the deep learning model is built. Compared to previous models based on CNN, we use feature maps obtained in different layers for our estimation work instead of using the feature obtained at the top layer. Additionally, a manifold learning algorithm is incorporated in the proposed scheme and this improves the performance significantly. Furthermore, we also evaluate different classification and regression schemes in estimating age using the deep learned aging pattern (DLA). To the best of our knowledge, this is the first time that deep learning technique is introduced and applied to solve the age estimation problem. Experimental results on two datasets show that the proposed approach is significantly better than the state-of-the-art.", "title": "" }, { "docid": "614a258877ad160c977a698cdfeac67d", "text": "Research in natural language processing has increasingly focused on normalizing Twitter messages. Currently, while different well-defined approaches have been proposed for the English language, the problem remains far from being solved for other languages, such as Malay. Thus, in this paper, we propose an approach to normalize the Malay Twitter messages based on corpus-driven analysis. An architecture for Malay Tweet normalization is presented, which comprises seven main modules: (1) enhanced tokenization, (2) In-Vocabulary (IV) detection, (3) specialized dictionary query, (4) repeated letter elimination, (5) abbreviation adjusting, (6) English word translation, and (7) de-tokenization. A parallel Tweet dataset, consisting of 9000 Malay Tweets, is used in the development and testing stages. To measure the performance of the system, an evaluation is carried out. The result is promising whereby we score 0.83 in BLEU against the baseline BLEU, which scores 0.46. To compare the accuracy of the architecture with other statistical approaches, an SMT-like normalization system is implemented, trained, and evaluated with an identical parallel dataset. The experimental results demonstrate that we achieve higher accuracy by the normalization system, which is designed based on the features of Malay Tweets, compared to the SMT-like system.
", "title": "" }, { "docid": "47d278d37dfd3ab6c0b64dd94eb2de6c", "text": "We present a novel approach for multi-object tracking which considers object detection and spacetime trajectory estimation as a coupled optimization problem. It is formulated in a hypothesis selection framework and builds upon a state-of-the-art pedestrian detector. At each time instant, it searches for the globally optimal set of spacetime trajectories which provides the best explanation for the current image and for all evidence collected so far, while satisfying the constraints that no two objects may occupy the same physical space, nor explain the same image pixels at any point in time. Successful trajectory hypotheses are fed back to guide object detection in future frames. The optimization procedure is kept efficient through incremental computation and conservative hypothesis pruning. The resulting approach can initialize automatically and track a large and varying number of persons over long periods and through complex scenes with clutter, occlusions, and large-scale background changes. Also, the global optimization framework allows our system to recover from mismatches and temporarily lost tracks. We demonstrate the feasibility of the proposed approach on several challenging video sequences.", "title": "" }, { "docid": "5eab47907e673449ad73ec6cef30bc07", "text": "Three-dimensional circuits built upon multiple layers of polyimide are required for constructing Si/SiGe monolithic microwave/mm-wave integrated circuits on low resistivity Si wafers. However, the closely spaced transmission lines are susceptible to high levels of cross-coupling, which degrades the overall circuit performance. In this paper, theoretical and experimental results on coupling of Finite Ground Coplanar (FGC) waveguides embedded in polyimide layers are presented for the first time. These results show that FGC lines have approximately 8 dB lower coupling than coupled Coplanar Waveguides. Furthermore, it is shown that the forward and backward coupling characteristics for FGC lines do not resemble the coupling characteristics of other transmission lines such as microstrip.", "title": "" }, { "docid": "53e8333b3e4e9874449492852d948ea2", "text": "In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework.
In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks.", "title": "" }, { "docid": "81f5c17e5b0b52bb55a27733a198be51", "text": "This paper uses the 'lens' of integrated and sustainable waste management (ISWM) to analyse the new data set compiled on 20 cities in six continents for the UN-Habitat flagship publication Solid Waste Management in the World's Cities. The comparative analysis looks first at waste generation rates and waste composition data. A process flow diagram is prepared for each city, as a powerful tool for representing the solid waste system as a whole in a comprehensive but concise way. Benchmark indicators are presented and compared for the three key physical components/drivers: public health and collection; environment and disposal; and resource recovery--and for three governance strategies required to deliver a well-functioning ISWM system: inclusivity; financial sustainability; and sound institutions and pro-active policies. Key insights include the variety and diversity of successful models - there is no 'one size fits all'; the necessity of good, reliable data; the importance of focusing on governance as well as technology; and the need to build on the existing strengths of the city. An example of the latter is the critical role of the informal sector in the cities in many developing countries: it not only delivers recycling rates that are comparable with modern Western systems, but also saves the city authorities millions of dollars in avoided waste collection and disposal costs. This provides the opportunity for win-win solutions, so long as the related wider challenges can be addressed.", "title": "" }, { "docid": "28b1374bd39b17eb8773d986c532f699", "text": "Recently, indoor positioning systems (IPSs) have been designed to provide location information of persons and devices. The position information enables location-based protocols for user applications. Personal networks (PNs) are designed to meet the users' needs and interconnect users' devices equipped with different communications technologies in various places to form one network. Location-aware services need to be developed in PNs to offer flexible and adaptive personal services and improve the quality of lives. This paper gives a comprehensive survey of numerous IPSs, which include both commercial products and research-oriented solutions. Evaluation criteria are proposed for assessing these systems, namely security and privacy, cost, performance, robustness, complexity, user preferences, commercial availability, and limitations.We compare the existing IPSs and outline the trade-offs among these systems from the viewpoint of a user in a PN.", "title": "" }, { "docid": "42ecca95c15cd1f92d6e5795f99b414a", "text": "Personalized tag recommendation systems recommend a list of tags to a user when he is about to annotate an item. It exploits the individual preference and the characteristic of the items. Tensor factorization techniques have been applied to many applications, such as tag recommendation. Models based on Tucker Decomposition can achieve good performance but require a lot of computation power. On the other hand, models based on Canonical Decomposition can run in linear time and are more feasible for online recommendation. In this paper, we propose a novel method for personalized tag recommendation, which can be considered as a nonlinear extension of Canonical Decomposition. 
Different from linear tensor factorization, we exploit Gaussian radial basis function to increase the model’s capacity. The experimental results show that our proposed method outperforms the state-of-the-art methods for tag recommendation on real datasets and perform well even with a small number of features, which verifies that our models can make better use of features.", "title": "" }, { "docid": "3a6c58a05427392750d15307fda4faec", "text": "In this paper, we present the design of a low voltage bandgap reference (LVBGR) circuit for supply voltage of 1.2V which can generate an output reference voltage of 0.363V. Traditional BJT based bandgap reference circuits give very precise output reference but power and area consumed by these BJT devices is larger so for low supply bandgap reference we chose MOSFETs operating in subthreshold region based reference circuits. LVBGR circuits with less sensitivity to supply voltage and temperature is used in both analog and digital circuits like high precise comparators used in data converter, phase-locked loop, ring oscillator, memory systems, implantable biomedical product etc. In the proposed circuit subthreshold MOSFETs temperature characteristics are used to achieve temperature compensation of output voltage reference and it can work under very low supply voltage. A PMOS structure 2stage opamp which will be operating in subthreshold region is designed for the proposed LVBGR circuit whose gain is 89.6dB and phase margin is 74 °. Finally a LVBGR circuit is designed which generates output voltage reference of 0.364V given with supply voltage of 1.2 V with 10 % variation and temperature coefficient of 240ppm/ °C is obtained for output reference voltage variation with respect to temperature over a range of 0 to 100°C. The output reference voltage exhibits a variation of 230μV with a supply range of 1.08V to 1.32V at typical process corner. The proposed LVBGR circuit for 1.2V supply is designed with the Mentor Graphics Pyxis tool using 130nm technology with EldoSpice simulator. Overall current consumed by the circuit is 900nA and also the power consumed by the entire LVBGR circuit is 0.9μW and the PSRR of the LVBGR circuit is -70dB.", "title": "" }, { "docid": "07525c300e39dc3de4fda88ce86159c9", "text": "The recording of seizures is of primary interest in the evaluation of epileptic patients. Seizure is the phenomenon of rhythmicity discharge from either a local area or the whole brain and the individual behavior usually lasts from seconds to minutes. Since seizures, in general, occur infrequently and unpredictably, automatic detection of seizures during long-term electroencephalograph (EEG) recordings is highly recommended. As EEG signals are nonstationary, the conventional methods of frequency analysis are not successful for diagnostic purposes. This paper presents a method of analysis of EEG signals, which is based on time-frequency analysis. Initially, selected segments of the EEG signals are analyzed using time-frequency methods and several features are extracted for each segment, representing the energy distribution in the time-frequency plane. Then, those features are used as an input in an artificial neural network (ANN), which provides the final classification of the EEG segments concerning the existence of seizures or not. 
We used a publicly available dataset in order to evaluate our method and the evaluation results are very promising indicating overall accuracy from 97.72% to 100%.", "title": "" }, { "docid": "dd5883895261ad581858381bec1b92eb", "text": "PURPOSE\nTo establish the validity and reliability of a new vertical jump force test (VJFT) for the assessment of bilateral strength asymmetry in a total of 451 athletes.\n\n\nMETHODS\nThe VJFT consists of countermovement jumps with both legs simultaneously: one on a single force platform, the other on a leveled wooden platform. Jumps with the right or the left leg on the force platform were alternated. Bilateral strength asymmetry was calculated as [(stronger leg - weaker leg)/stronger leg] x 100. A positive sign indicates a stronger right leg; a negative sign indicates a stronger left leg. Studies 1 (N = 59) and 2 (N = 41) examined the correlation between the VJFT and other tests of lower-limb bilateral strength asymmetry in male athletes. In study 3, VJFT reliability was assessed in 60 male athletes. In study 4, the effect of rehabilitation on bilateral strength asymmetry was examined in seven male and female athletes 8-12 wk after unilateral knee surgery. In study 5, normative data were determined in 313 male soccer players.\n\n\nRESULTS\nSignificant correlations were found between VJFT and both the isokinetic leg extension test (r = 0.48; 95% confidence interval, 0.26-0.66) and the isometric leg press test (r = 0.83; 0.70-0.91). VJFT test-retest intraclass correlation coefficient was 0.91 (0.85-0.94), and typical error was 2.4%. The change in mean [-0.40% (-1.25 to 0.46%)] was not substantial. Rehabilitation decreased bilateral strength asymmetry (mean +/- SD) of the athletes recovering from unilateral knee surgery from 23 +/- 3 to 10 +/- 4% (P < 0.01). The range of normal bilateral strength asymmetry (2.5th to 97.5th percentiles) was -15 to 15%.\n\n\nCONCLUSIONS\nThe assessment of bilateral strength asymmetry with the VJFT is valid and reliable, and it may be useful in sports medicine.", "title": "" }, { "docid": "ec18c088e0068c58410bf427528aa8e4", "text": "Abnormal accounting accruals are unusually high around stock offers, especially high for firms whose offers subsequently attract lawsuits. Accruals tend to reverse after stock offers and are negatively related to post-offer stock returns. Reversals are more pronounced and stock returns are lower for sued firms than for those that are not sued. The incidence of lawsuits involving stock offers and settlement amounts are significantly positively related to abnormal accruals around the offer and significantly negatively related to post-offer stock returns. Our results support the view that some firms opportunistically manipulate earnings upward before stock issues rendering themselves vulnerable to litigation. r 2003 Elsevier B.V. All rights reserved. JEL classification: G14; G24; G32; K22; M41", "title": "" }, { "docid": "3508e1a4a4c04127792268509c1f572d", "text": "In this paper predictions of the Normalized Difference Vegetation Index (NDVI) data recorded by satellites over Ventspils Municipality in Courland, Latvia are discussed. NDVI is an important variable for vegetation forecasting and management of various problems, such as climate change monitoring, energy usage monitoring, managing the consumption of natural resources, agricultural productivity monitoring, drought monitoring and forest fire detection. 
Artificial Neural Networks (ANN) are computational models and universal approximators, which are widely used for nonlinear, non-stationary and dynamical process modeling and forecasting. In this paper Elman Recurrent Neural Networks (ERNN) are used to make one-step-ahead prediction of univariate NDVI time series.", "title": "" }, { "docid": "3a011bdec6531de3f0f9718f35591e52", "text": "Since Markowitz (1952) formulated the portfolio selection problem, many researchers have developed models aggregating simultaneously several conflicting attributes such as: the return on investment, risk and liquidity. The portfolio manager generally seeks the best combination of stocks/assets that meets his/her investment objectives. The Goal Programming (GP) model is widely applied to finance and portfolio management. The aim of this paper is to present the different variants of the GP model that have been applied to the financial portfolio selection problem from the 1970s to nowadays.", "title": "" }, { "docid": "33e1dad6c4f163c0d69bd3f58ecf9058", "text": "Resistive random access memory (RRAM) has gained significant attention because of its excellent characteristics which are suitable for next-generation non-volatile memory applications. It is also very attractive to build neuromorphic computing chip based on RRAM cells due to non-volatile and analog properties. Neuromorphic computing hardware technologies using analog weight storage allow the scaling-up of the system size to complete cognitive tasks such as face classification much faster while consuming much lower energy. In this paper, RRAM technology development from material selection to device structure, from small array to full chip will be discussed in detail. Neuromorphic computing using RRAM devices is demonstrated, and speed & energy consumption are compared with Xeon Phi processor.", "title": "" }, { "docid": "acc960b2fd1066efce4655da837213f4", "text": "Plagiarism detection is of special interest to educational institutions, and with the proliferation of digital documents on the Web the use of computational systems for such a task has become important. While traditional methods for automatic detection of plagiarism compute the similarity measures on a document-to-document basis, this is not always possible since the potential source documents are not always available. We do text mining, exploring the use of words as a linguistic feature for analyzing a document by modeling the writing style present in it. The main goal is to discover deviations in the style, looking for segments of the document that could have been written by another person. This can be considered as a classification problem using self-based information where paragraphs with significant deviations in style are treated as outliers. This so-called intrinsic plagiarism detection approach does not need comparison against possible sources at all, and our model relies only on the use of words, so it is not language specific. We demonstrate that this feature shows promise in this area, achieving reasonable results compared to benchmark models.
", "title": "" }, { "docid": "1796e75d847bc06995e5e0861cd9ba9f", "text": "This paper presents a two-stage method to detect license plates in real world images. To do license plate detection (LPD), an initial set of possible license plate character regions are first obtained by the first stage classifier and then passed to the second stage classifier to reject non-character regions. 36 Adaboost classifiers (each trained with one alpha-numerical character, i.e. A..Z, 0..9) serve as the first stage classifier. In the second stage, a support vector machine (SVM) trained on scale-invariant feature transform (SIFT) descriptors obtained from training sub-windows were employed. A recall rate of 0.920792 and precision rate of 0.90185 was obtained.", "title": "" } ]
scidocsrr
66d01d6fdbcf073124f6d389b7cd724e
Quick Quiz: A Gamified Approach for Enhancing Learning
[ { "docid": "9b13beaf2e5aecc256117fdd8ccf8368", "text": "This paper examines the literature on computer games and serious games in regard to the potential positive impacts of gaming on users aged 14 years or above, especially with respect to learning, skill enhancement and engagement. Search terms identified 129 papers reporting empirical evidence about the impacts and outcomes of computer games and serious games with respect to learning and engagement and a multidimensional approach to categorizing games was developed. The findings revealed that playing computer games is linked to a range of perceptual, cognitive, behavioural, affective and motivational impacts and outcomes. The most frequently occurring outcomes and impacts were knowledge acquisition/content understanding and affective and motivational outcomes. The range of indicators and measures used in the included papers are discussed, together with methodological limitations and recommendations for further work in this area. 2012 Published by Elsevier Ltd.", "title": "" }, { "docid": "712d292b38a262a8c37679c9549a631d", "text": "Addresses for correspondence: Dr Sara de Freitas, London Knowledge Lab, Birkbeck College, University of London, 23–29 Emerald Street, London WC1N 3QS. UK. Tel: +44(0)20 7763 2117; fax: +44(0)20 7242 2754; email: [email protected]. Steve Jarvis, Vega Group PLC, 2 Falcon Way, Shire Park, Welwyn Garden City, Herts AL7 1TW, UK. Tel: +44 (0)1707 362602; Fax: +44 (0)1707 393909; email: [email protected]", "title": "" } ]
[ { "docid": "d87edfb603b5d69bcd0e0dc972d26991", "text": "The adult nervous system is not static, but instead can change, can be reshaped by experience. Such plasticity has been demonstrated from the most reductive to the most integrated levels, and understanding the bases of this plasticity is a major challenge. It is apparent that stress can alter plasticity in the nervous system, particularly in the limbic system. This paper reviews that subject, concentrating on: a) the ability of severe and/or prolonged stress to impair hippocampal-dependent explicit learning and the plasticity that underlies it; b) the ability of mild and transient stress to facilitate such plasticity; c) the ability of a range of stressors to enhance implicit fear conditioning, and to enhance the amygdaloid plasticity that underlies it.", "title": "" }, { "docid": "79c2623b0e1b51a216fffbc6bbecd9ec", "text": "Visual notations form an integral part of the language of software engineering (SE). Yet historically, SE researchers and notation designers have ignored or undervalued issues of visual representation. In evaluating and comparing notations, details of visual syntax are rarely discussed. In designing notations, the majority of effort is spent on semantics, with graphical conventions largely an afterthought. Typically, no design rationale, scientific or otherwise, is provided for visual representation choices. While SE has developed mature methods for evaluating and designing semantics, it lacks equivalent methods for visual syntax. This paper defines a set of principles for designing cognitively effective visual notations: ones that are optimized for human communication and problem solving. Together these form a design theory, called the Physics of Notations as it focuses on the physical (perceptual) properties of notations rather than their logical (semantic) properties. The principles were synthesized from theory and empirical evidence from a wide range of fields and rest on an explicit theory of how visual notations communicate. They can be used to evaluate, compare, and improve existing visual notations as well as to construct new ones. The paper identifies serious design flaws in some of the leading SE notations, together with practical suggestions for improving them. It also showcases some examples of visual notation design excellence from SE and other fields.", "title": "" }, { "docid": "4d34ba30b0ab330fcf6251490928120c", "text": "BACKGROUND\nDespite extensive data about physician burnout, to our knowledge, no national study has evaluated rates of burnout among US physicians, explored differences by specialty, or compared physicians with US workers in other fields.\n\n\nMETHODS\nWe conducted a national study of burnout in a large sample of US physicians from all specialty disciplines using the American Medical Association Physician Masterfile and surveyed a probability-based sample of the general US population for comparison. Burnout was measured using validated instruments. Satisfaction with work-life balance was explored.\n\n\nRESULTS\nOf 27 276 physicians who received an invitation to participate, 7288 (26.7%) completed surveys. When assessed using the Maslach Burnout Inventory, 45.8% of physicians reported at least 1 symptom of burnout. Substantial differences in burnout were observed by specialty, with the highest rates among physicians at the front line of care access (family medicine, general internal medicine, and emergency medicine). 
Compared with a probability-based sample of 3442 working US adults, physicians were more likely to have symptoms of burnout (37.9% vs 27.8%) and to be dissatisfied with work-life balance (40.2% vs 23.2%) (P < .001 for both). Highest level of education completed also related to burnout in a pooled multivariate analysis adjusted for age, sex, relationship status, and hours worked per week. Compared with high school graduates, individuals with an MD or DO degree were at increased risk for burnout (odds ratio [OR], 1.36; P < .001), whereas individuals with a bachelor's degree (OR, 0.80; P = .048), master's degree (OR, 0.71; P = .01), or professional or doctoral degree other than an MD or DO degree (OR, 0.64; P = .04) were at lower risk for burnout.\n\n\nCONCLUSIONS\nBurnout is more common among physicians than among other US workers. Physicians in specialties at the front line of care access seem to be at greatest risk.", "title": "" }, { "docid": "29bc53c2e50de52e073b7d0e304d0f5f", "text": "UNLABELLED\nA theory is presented that attempts to answer two questions. What visual contents can an observer consciously access at one moment?\n\n\nANSWER\nonly one feature value (e.g., green) per dimension, but those feature values can be associated (as a group) with multiple spatially precise locations (comprising a single labeled Boolean map). How can an observer voluntarily select what to access?\n\n\nANSWER\nin one of two ways: (a) by selecting one feature value in one dimension (e.g., selecting the color red) or (b) by iteratively combining the output of (a) with a preexisting Boolean map via the Boolean operations of intersection and union. Boolean map theory offers a unified interpretation of a wide variety of visual attention phenomena usually treated in separate literatures. In so doing, it also illuminates the neglected phenomena of attention to structure.", "title": "" }, { "docid": "55d10e35e0a54859b20e5c8e9c9d8ef4", "text": "Course allocation is one of the most complex issues facing any university, due to the sensitive nature of deciding which subset of students should be granted seats in highly-popular (market-scarce) courses. In recent years, researchers have proposed numerous solutions, using techniques in integer programming, combinatorial auction design, and matching theory. In this paper, we present a four-part AI-based course allocation algorithm that was conceived by an undergraduate student, and recently implemented at a small Canadian liberal arts university. This new allocation process, which builds upon the Harvard Business School Draft, has received overwhelming support from students and faculty for its transparency, impartiality, and effectiveness.", "title": "" }, { "docid": "43c4dd05f438adf91a62f42f1f7d5abc", "text": "We introduce a technique for augmenting neural text-to-speech (TTS) with lowdimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-ofthe-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a similar pipeline with Deep Voice 1, but constructed with higher performance building blocks and demonstrates a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. 
We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high audio quality synthesis and preserving the speaker identities almost perfectly.", "title": "" }, { "docid": "1cc962ab0d15a47725858ed5ff5872f6", "text": "Although spontaneous remyelination does occur in multiple sclerosis lesions, its extent within the global population with this disease is presently unknown. We have systematically analysed the incidence and distribution of completely remyelinated lesions (so-called shadow plaques) or partially remyelinated lesions (shadow plaque areas) in 51 autopsies of patients with different clinical courses and disease durations. The extent of remyelination was variable between cases. In 20% of the patients, the extent of remyelination was extensive with 60-96% of the global lesion area remyelinated. Extensive remyelination was found not only in patients with relapsing multiple sclerosis, but also in a subset of patients with progressive disease. Older age at death and longer disease duration were associated with significantly more remyelinated lesions or lesion areas. No correlation was found between the extent of remyelination and either gender or age at disease onset. These results suggest that the variable and patient-dependent extent of remyelination must be considered in the design of future clinical trials aimed at promoting CNS repair.", "title": "" }, { "docid": "c76fc0f9ce4422bee1d2cf3964f1024c", "text": "The subjective nature of gender inequality motivates the analysis and comparison of data from real and fictional human interaction. We present a computational extension of the Bechdel test: A popular tool to assess if a movie contains a male gender bias, by looking for two female characters who discuss about something besides a man. We provide the tools to quantify Bechdel scores for both genders, and we measure them in movie scripts and large datasets of dialogues between users of MySpace and Twitter. Comparing movies and users of social media, we find that movies and Twitter conversations have a consistent male bias, which does not appear when analyzing MySpace. Furthermore, the narrative of Twitter is closer to the movies that do not pass the Bechdel test than to", "title": "" }, { "docid": "fe1d0321b1182c9ecb92ccd95c83cd25", "text": "Cybercriminals have leveraged the popularity of a large user base available on Online Social Networks (OSNs) to spread spam campaigns by propagating phishing URLs, attaching malicious contents, etc. However, another kind of spam attacks using phone numbers has recently become prevalent on OSNs, where spammers advertise phone numbers to attract users’ attention and convince them to make a call to these phone numbers. The dynamics of phone number based spam is different from URL-based spam due to an inherent trust associated with a phone number. While previous work has proposed strategies to mitigate URL-based spam attacks, phone number based spam attacks have received less attention. In this paper, we aim to detect spammers that use phone numbers to promote campaigns on Twitter. To this end, we collected information (tweets, user meta-data, etc.) about 3, 370 campaigns spread by 670, 251 users. 
We model the Twitter dataset as a heterogeneous network by leveraging various interconnections between different types of nodes present in the dataset. In particular, we make the following contributions – (i) We propose a simple yet effective metric, called Hierarchical Meta-Path Score (HMPS) to measure the proximity of an unknown user to the other known pool of spammers. (ii) We design a feedback-based active learning strategy and show that it significantly outperforms three state-of-the-art baselines for the task of spam detection. Our method achieves 6.9% and 67.3% higher F1-score and AUC, respectively compared to the best baseline method. (iii) To overcome the problem of less training instances for supervised learning, we show that our proposed feedback strategy achieves 25.6% and 46% higher F1-score and AUC respectively than other oversampling strategies. Finally, we perform a case study to show how our method is capable of detecting those users as spammers who have not been suspended by Twitter (and other baselines) yet.", "title": "" }, { "docid": "f2205324dbf3a828e695854402ebbafe", "text": "Current research in law and neuroscience is promising to answer these questions with a \"yes.\" Some legal scholars working in this area claim that we are close to realizing the \"early criminologists' dream of identifying the biological roots of criminality.\" These hopes for a neuroscientific transformation of the criminal law, although based in the newest research, are part of a very old story. Criminal law and neuroscience have been engaged in an ill-fated and sometimes tragic affair for over two hundred years. Three issues have recurred that track those that bedeviled earlier efforts to ground criminal law in brain sciences. First is the claim that the brain is often the most relevant or fundamental level at which to understand criminal conduct. Second is that the various phenomena we call \"criminal violence\" arise causally from dysfunction within specific locations in the brain (\"localization\"). Third is the related claim that, because much violent criminality arises from brain dysfunction, people who commit such acts are biologically different from typical people (\"alterity\" or \"otherizing\").", "title": "" }, { "docid": "e640c691a45a5435dcdb7601fb581280", "text": "We study the problem of response selection for multi-turn conversation in retrieval-based chatbots. The task involves matching a response candidate with a conversation context, the challenges for which include how to recognize important parts of the context, and how to model the relationships among utterances in the context. Existing matching methods may lose important information in contexts as we can interpret them with a unified framework in which contexts are transformed to fixed-length vectors without any interaction with responses before matching. This motivates us to propose a new matching framework that can sufficiently carry important information in contexts to matching and model relationships among utterances at the same time. The new framework, which we call a sequential matching framework (SMF), lets each utterance in a context interact with a response candidate at the first step and transforms the pair to a matching vector. The matching vectors are then accumulated following the order of the utterances in the context with a recurrent neural network (RNN) that models relationships among utterances. Context-response matching is then calculated with the hidden states of the RNN. 
Under SMF, we propose a sequential convolutional network and sequential attention network and conduct experiments on two public data sets to test their performance. Experiment results show that both models can significantly outperform state-of-the-art matching methods. We also show that the models are interpretable with visualizations that provide us insights on how they capture and leverage important information in contexts for matching.", "title": "" }, { "docid": "a205d93fb0ce6dfc24a4367dd3461055", "text": "Smart devices are gaining popularity in our homes with the promise to make our lives easier and more comfortable. However, the increased deployment of such smart devices brings an increase in potential security risks. In this work, we propose an intrusion detection and mitigation framework, called IoT-IDM, to provide a network-level protection for smart devices deployed in home environments. IoT-IDM monitors the network activities of intended smart devices within the home and investigates whether there is any suspicious or malicious activity. Once an intrusion is detected, it is also capable of blocking the intruder in accessing the victim device on the fly. The modular design of IoT-IDM gives its users the flexibility to employ customized machine learning techniques for detection based on learned signature patterns of known attacks. Software-defined networking technology and its enabling communication protocol, OpenFlow, are used to realise this framework. Finally, a prototype of IoT-IDM is developed and the applicability and efficiency of proposed framework demonstrated through a real IoT device: a smart light bulb.", "title": "" }, { "docid": "d972e23eb49c15488d2159a9137efb07", "text": "One of the main challenges of the solid-state transformer (SST) lies in the implementation of the dc–dc stage. In this paper, a quadruple-active-bridge (QAB) dc–dc converter is investigated to be used as a basic module of a modular three-stage SST. Besides the feature of high power density and soft-switching operation (also found in others converters), the QAB converter provides a solution with reduced number of high-frequency transformers, since more bridges are connected to the same multiwinding transformer. To ensure soft switching for the entire operation range of the QAB converter, the triangular current-mode modulation strategy, previously adopted for the dual-active-bridge converter, is extended to the QAB converter. The theoretical analysis is developed considering balanced (equal power processed by the medium-voltage (MV) cells) and unbalanced (unequal power processed by the MV cells) conditions. In order to validate the theoretical analysis developed in the paper, a 2-kW prototype is built and experimented.", "title": "" }, { "docid": "d7ea5e0bdf811f427b7c283d4aae7371", "text": "This work investigates the development of students’ computational thinking (CT) skills in the context of educational robotics (ER) learning activity. The study employs an appropriate CT model for operationalising and exploring students’ CT skills development in two different age groups (15 and 18 years old) and across gender. 164 students of different education levels (Junior high: 89; High vocational: 75) engaged in ER learning activities (2 hours per week, 11 weeks totally) and their CT skills were evaluated at different phases during the activity, using different modality (written and oral) assessment tools. 
The results suggest that: (a) students reach eventually the same level of CT skills development independent of their age and gender, (b) CT skills in most cases need time to fully develop (students' scores improve significantly towards the end of the activity), (c) age and gender relevant differences appear when analysing students' score in the various specific dimensions of the CT skills model, (d) the modality of the skill assessment instrument may have an impact on students' performance, (e) girls appear in many situations to need more training time to reach the same skill level compared to boys.", "title": "" }, { "docid": "a961b8851761575ae9b54684c58aa30d", "text": "We propose an optical wireless indoor localization using light emitting diodes (LEDs) and demonstrate it via simulation. Unique frequency addresses are assigned to each LED lamp, and transmitted through the light radiated by the LED. Using the phase difference, time difference of arrival (TDOA) localization algorithm is employed. Because the proposed localization method used pre-installed LED ceiling lamps, no additional infrastructure for localization is required to install and therefore, inexpensive system can be realized. The performance of the proposed localization method is evaluated by computer simulation, and the indoor location accuracy is less than 1 cm in the space of 5 m x 5 m x 3 m.", "title": "" }, { "docid": "dc9d26442f454685d8bb92deb17b4a23", "text": "Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by University of Cambridge (Computer Vision and Robotics Group) and University of Catania (Image Processing Lab). Different topics are covered each year. A summary of the past Computer Vision Summer Schools can be found at: http://www.dmi.unict.it/icvss. This edited volume contains a selection of articles covering some of the talks and tutorials held during the last editions of the school. The chapters provide an in-depth overview of challenging areas with key references to the existing literature.", "title": "" }, { "docid": "fe1882df52ed6555a087f7683efe80d1", "text": "Enforcing security on various implementations of OAuth in Android apps should consider a wide range of issues comprehensively. OAuth implementations in Android apps differ from the recommended specification due to the provider and platform factors, and the varied implementations often become vulnerable. Current vulnerability assessments on these OAuth implementations are ad hoc and lack a systematic manner. As a result, insecure OAuth implementations are still widely used and the situation is far from optimistic in many mobile app ecosystems.\n To address this problem, we propose a systematic vulnerability assessment framework for OAuth implementations on Android platform.
Different from traditional OAuth security analyses that are experiential with a restrictive three-party model, our proposed framework utilizes an systematic security assessing methodology that adopts a five-party, three-stage model to detect typical vulnerabilities of popular OAuth implementations in Android apps. Based on this framework, a comprehensive investigation on vulnerable OAuth implementations is conducted at the level of an entire mobile app ecosystem. The investigation studies the Chinese mainland mobile app markets (e.g., Baidu App Store, Tencent, Anzhi) that covers 15 mainstream OAuth service providers. Top 100 relevant relying party apps (RP apps) are thoroughly assessed to detect vulnerable OAuth implementations, and we further perform an empirical study of over 4,000 apps to validate how frequently developers misuse the OAuth protocol. The results demonstrate that 86.2% of the apps incorporating OAuth services are vulnerable, and this ratio of Chinese mainland Android app market is much higher than that (58.7%) of Google Play.", "title": "" }, { "docid": "f7a8116cefaaf6ab82118885efac4c44", "text": "Entrepreneurs have created a number of new Internet-based platforms that enable owners to rent out their durable goods when not using them for personal consumption. We develop a model of these kinds of markets in order to analyze the determinants of ownership, rental rates, quantities, and the surplus generated in these markets. Our analysis considers both a short run, before consumers can revise their ownership decisions and a long run, in which they can. This allows us to explore how patterns of ownership and consumption might change as a result of these new markets. We also examine the impact of bringing-to-market costs, such as depreciation, labor costs and transaction costs and consider the platform’s pricing problem. An online survey of consumers broadly supports the modeling assumptions employed. For example, ownership is determined by individuals’ forward-looking assessments of planned usage. Factors enabling sharing markets to flourish are explored. JEL L1, D23, D47", "title": "" }, { "docid": "e769b09a593b68e7d47102046efc6d8d", "text": "BACKGROUND\nExisting research indicates sleep problems to be prevalent in youth with internalizing disorders. However, childhood sleep problems are common in the general population and few data are available examining unique relationships between sleep, specific types of anxiety and depressive symptoms among non-clinical samples of children and adolescents.\n\n\nMETHODS\nThe presence of sleep problems was examined among a community sample of children and adolescents (N=175) in association with anxiety and depressive symptoms, age, and gender. Based on emerging findings from the adult literature we also examined associations between cognitive biases and sleep problems.\n\n\nRESULTS\nOverall findings revealed significant associations between sleep problems and both anxiety and depressive symptoms, though results varied by age. Depressive symptoms showed a greater association with sleep problems among adolescents, while anxiety symptoms were generally associated with sleep problems in all youth. 
Cognitive factors (cognitive errors and control beliefs) linked with anxiety and depression also were associated with sleep problems among adolescents, though these correlations were no longer significant after controlling for internalizing symptoms.\n\n\nCONCLUSIONS\nResults are discussed in terms of their implications for research and treatment of sleep and internalizing disorders in youth.", "title": "" }, { "docid": "de5fd8ae40a2d078101d5bb1859f689b", "text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.", "title": "" } ]
scidocsrr
320602ca1316ead9c415a280e54f12ca
The roles of psychological climate, information management capabilities, and IT support on knowledge-sharing: an MOA perspective
[ { "docid": "cd0b28b896dd84ca70d42541b466d5ff", "text": "a r t i c l e i n f o a b s t r a c t The success of knowledge management initiatives depends on knowledge sharing. This paper reviews qualitative and quantitative studies of individual-level knowledge sharing. Based on the literature review we developed a framework for understanding knowledge sharing research. The framework identifies five areas of emphasis of knowledge sharing research: organizational context, interpersonal and team characteristics, cultural characteristics, individual characteristics, and motivational factors. For each emphasis area the paper discusses the theoretical frameworks used and summarizes the empirical research results. The paper concludes with a discussion of emerging issues, new research directions, and practical implications of knowledge sharing research. Knowledge is a critical organizational resource that provides a sustainable competitive advantage in a competitive and dynamic economy (e. To gain a competitive advantage it is necessary but insufficient for organizations to rely on staffing and training systems that focus on selecting employees who have specific knowledge, skills, abilities, or competencies or helping employees acquire them (e.g., Brown & Duguid, 1991). Organizations must also consider how to transfer expertise and knowledge from experts who have it to novices who need to know (Hinds, Patterson, & Pfeffer, 2001). That is, organizations need to emphasize and more effectively exploit knowledge-based resources that already exist within the organization As one knowledge-centered activity, knowledge sharing is the fundamental means through which employees can contribute to knowledge application, innovation, and ultimately the competitive advantage of the organization (Jackson, Chuang, Harden, Jiang, & Joseph, 2006). Knowledge sharing between employees and within and across teams allows organizations to exploit and capitalize on knowledge-based resources Research has shown that knowledge sharing and combination is positively related to reductions in production costs, faster completion of new product development projects, team performance, firm innovation capabilities, and firm performance including sales growth and revenue from new products and services (e. Because of the potential benefits that can be realized from knowledge sharing, many organizations have invested considerable time and money into knowledge management (KM) initiatives including the development of knowledge management systems (KMS) which use state-of-the-art technology to facilitate the collection, storage, and distribution of knowledge. However, despite these investments it has been estimated that at least $31.5 billion are lost per year by Fortune 500", "title": "" }, { "docid": "65dbd6cfc76d7a81eaa8a1dd49a838bb", "text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. 
The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.", "title": "" } ]
[ { "docid": "9a4fc12448d166f3a292bfdf6977745d", "text": "Enabled by the rapid development of virtual reality hardware and software, 360-degree video content has proliferated. From the network perspective, 360-degree video transmission imposes significant challenges because it consumes 4 6χ the bandwidth of a regular video with the same resolution. To address these challenges, in this paper, we propose a motion-prediction-based transmission mechanism that matches network video transmission to viewer needs. Ideally, if viewer motion is perfectly known in advance, we could reduce bandwidth consumption by 80%. Practically, however, to guarantee the quality of viewing experience, we have to address the random nature of viewer motion. Based on our experimental study of viewer motion (comprising 16 video clips and over 150 subjects), we found the viewer motion can be well predicted in 100∼500ms. We propose a machine learning mechanism that predicts not only viewer motion but also prediction deviation itself. The latter is important because it provides valuable input on the amount of redundancy to be transmitted. Based on such predictions, we propose a targeted transmission mechanism that minimizes overall bandwidth consumption while providing probabilistic performance guarantees. Real-data-based evaluations show that the proposed scheme significantly reduces bandwidth consumption while minimizing performance degradation, typically a 45% bandwidth reduction with less than 0.1% failure ratio.", "title": "" }, { "docid": "56525ce9536c3c8ea03ab6852b854e95", "text": "The Distributed Denial of Service (DDoS) attacks are a serious threat in today's Internet where packets from large number of compromised hosts block the path to the victim nodes and overload the victim servers. In the newly proposed future Internet Architecture, Named Data Networking (NDN), the architecture itself has prevention measures to reduce the overload to the servers. This on the other hand increases the work and security threats to the intermediate routers. Our project aims at identifying the DDoS attack in NDN which is known as Interest flooding attack, mitigate the consequence of it and provide service to the legitimate users. We have developed a game model for the DDoS attacks and provide possible countermeasures to stop the flooding of interests. Through this game theory model, we either forward or redirect or drop the incoming interest packets thereby reducing the PIT table consumption. This helps in identifying the nodes that send malicious interest packets and eradicate their actions of sending malicious interests further. The main highlight of this work is that we have implemented the Game Theory model in the NDN architecture. It was primarily imposed for the IP internet architecture.", "title": "" }, { "docid": "c0d203ef23df86f5a3e9f970dfb1d152", "text": "We propose a deep learning framework for few-shot image classification, which exploits information across label semantics and image domains, so that regions of interest can be properly attended for improved classification. The proposed semantics-guided attention module is able to focus on most relevant regions in an image, while the attended image samples allow data augmentation and alleviate possible overfitting during FSL training. Promising performances are presented in our experiments, in which we consider both closed and open-world settings. 
The former considers the test input belong to the categories of few shots only, while the latter requires recognition of all categories of interest.", "title": "" }, { "docid": "7f6e966f3f924e18cb3be0ae618309e6", "text": "designed shapes incorporating typedesign tradition, the rules related to visual appearance, and the design ideas of a skilled character designer. The typographic design process is structured and systematic: letterforms are visually related in weight, contrast, space, alignment, and style. To create a new typeface family, type designers generally start by designing a few key characters—such as o, h, p, and v— incorporating the most important structure elements such as vertical stems, round parts, diagonal bars, arches, and serifs (see Figure 1). They can then use the design features embedded into these structure elements (stem width, behavior of curved parts, contrast between thick and thin shape parts, and so on) to design the font’s remaining characters. Today’s industrial font description standards such as Adobe Type 1 or TrueType represent typographic characters by their shape outlines, because of the simplicity of digitizing the contours of well-designed, large-size master characters. However, outline characters only implicitly incorporate the designer’s intentions. Because their structure elements aren’t explicit, creating aesthetically appealing derived designs requiring coherent changes in character width, weight (boldness), and contrast is difficult. Outline characters aren’t suitable for optical scaling, which requires relatively fatter letter shapes at small sizes. Existing approaches for creating derived designs from outline fonts require either specifying constraints to maintain the coherence of structure elements across different characters or creating multiple master designs for the interpolation of derived designs. We present a new approach for describing and synthesizing typographic character shapes. Instead of describing characters by their outlines, we conceive each character as an assembly of structure elements (stems, bars, serifs, round parts, and arches) implemented by one or several shape components. We define the shape components by typeface-category-dependent global parameters such as the serif and junction types, by global font-dependent metrics such as the location of reference lines and the width of stems and curved parts, and by group and local parameters. (See the sidebar “Previous Work” for background information on the field of parameterizable fonts.)", "title": "" }, { "docid": "1cc4048067cc93c2f1e836c77c2e06dc", "text": "Recent advances in microscope automation provide new opportunities for high-throughput cell biology, such as image-based screening. High-complex image analysis tasks often make the implementation of static and predefined processing rules a cumbersome effort. Machine-learning methods, instead, seek to use intrinsic data structure, as well as the expert annotations of biologists to infer models that can be used to solve versatile data analysis tasks. Here, we explain how machine-learning methods work and what needs to be considered for their successful application in cell biology. We outline how microscopy images can be converted into a data representation suitable for machine learning, and then introduce various state-of-the-art machine-learning algorithms, highlighting recent applications in image-based screening. 
Our Commentary aims to provide the biologist with a guide to the application of machine learning to microscopy assays and we therefore include extensive discussion on how to optimize experimental workflow as well as the data analysis pipeline.", "title": "" }, { "docid": "200cc9aec15d866796ca0555ac730639", "text": "Influential users have great potential for accelerating information dissemination and acquisition on Twitter. How to measure the influence of Twitter users has attracted significant academic and industrial attention. Existing influence measurement techniques are vulnerable to sybil users that are thriving on Twitter. Although sybil defenses for online social networks have been extensively investigated, they commonly assume unique mappings from human-established trust relationships to online social associations and thus do not apply to Twitter where users can freely follow each other. This paper presents TrueTop, the first sybil-resilient system to measure the influence of Twitter users. TrueTop is rooted in two observations from real Twitter datasets. First, although non-sybil users may incautiously follow strangers, they tend to be more careful and selective in retweeting, replying to, and mentioning other users. Second, influential users usually get much more retweets, replies, and mentions than non-influential users. Detailed theoretical studies and synthetic simulations show that TrueTop can generate very accurate influence measurement results with strong resilience to sybil attacks.", "title": "" }, { "docid": "073cd7c54b038dcf69ae400f97a54337", "text": "Interventions to support children with autism often include the use of visual supports, which are cognitive tools to enable learning and the production of language. Although visual supports are effective in helping to diminish many of the challenges of autism, they are difficult and time-consuming to create, distribute, and use. In this paper, we present the results of a qualitative study focused on uncovering design guidelines for interactive visual supports that would address the many challenges inherent to current tools and practices. We present three prototype systems that address these design challenges with the use of large group displays, mobile personal devices, and personal recording technologies. We also describe the interventions associated with these prototypes along with the results from two focus group discussions around the interventions. We present further design guidance for visual supports and discuss tensions inherent to their design.", "title": "" }, { "docid": "d0ddc8f2efbdd7d7b6ffda32e2726d87", "text": "Violence in video games has come under increasing research attention over the past decade. Researchers in this area have suggested that violent video games may cause aggressive behavior among players. However, the state of the extant literature has not yet been examined for publication bias. The current meta-analysis is designed to correct for this oversight. Results indicated that publication bias does exist for experimental studies of aggressive behavior, as well as for non-experimental studies of aggressive behavior and aggressive thoughts. Research in other areas, including prosocial behavior and experimental studies of aggressive thoughts were less susceptible to publication bias. Moderator effects results also suggested that studies employing less standardized and reliable measures of aggression tended to produce larger effect sizes. Suggestions for future violent video game studies are provided. 
© 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6ce6edfa4f8a29ea1f776a3524224e55", "text": "In this paper we present a fuzzy self-tuning algorithm to select the Proportional, Integral and Derivative gains of a PID controller according to the actual state of a robotic manipulator. The stability via Lyapunov theory for the closed loop control system is also analyzed and shown that is locally asymptotically stable for a class of gain matrices depending on the manipulator states. This feature increases the potential of the PID control scheme to handle practical constraints in actual robots such as presence of actuators with limited torque capabilities. Experimental results on a two degrees of freedom robot arm shown the usefulness of the proposed approach.", "title": "" }, { "docid": "86aa784b175391554a0fcae1fb701b0b", "text": "Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.", "title": "" }, { "docid": "83e16c6a186d04b4de71ce8cec872b05", "text": "In this paper, we propose a unified framework to analyze the performance of dense small cell networks (SCNs) in terms of the coverage probability and the area spectral efficiency (ASE). In our analysis, we consider a practical path loss model that accounts for both non-line-of-sight (NLOS) and line-of-sight (LOS) transmissions. Furthermore, we adopt a generalized fading model, in which Rayleigh fading, Rician fading and Nakagami-m fading can be treated in a unified framework. The analytical results of the coverage probability and the ASE are derived, using a generalized stochastic geometry analysis. Different from existing work that does not differentiate NLOS and LOS transmissions, our results show that NLOS and LOS transmissions have a significant impact on the coverage probability and the ASE performance, particularly when the SCNs grow dense. Furthermore, our results establish for the first time that the performance of the SCNs can be divided into four regimes, according to the intensity (aka density) of BSs, where in each regime the performance is dominated by different factors.", "title": "" }, { "docid": "96bc6ffcc299e7b2221dbb8e2c4349dd", "text": "At millimeter wave (mmW) frequencies, beamforming and large antenna arrays are an essential requirement to combat the high path loss for mmW communication. 
Moreover, at these frequencies, very large bandwidths are available to fulfill the data rate requirements of future wireless networks. However, utilization of these large bandwidths and of large antenna arrays can result in a high power consumption which is an even bigger concern for mmW receiver design. In a mmW receiver, the analog-to-digital converter (ADC) is generally considered as the most power consuming block. In this paper, primarily focusing on the ADC power, we analyze and compare the total power consumption of the complete analog chain for Analog, Digital and Hybrid beamforming (ABF, DBF and HBF) based receiver design. We show how power consumption of these beamforming schemes varies with a change in the number of antennas, the number of ADC bits (b) and the bandwidth (B). Moreover, we compare low power (as in [1]) and high power (as in [2]) ADC models, and show that for a certain range of number of antennas, b and B, DBF may actually have a comparable and lower power consumption than ABF and HBF, respectively. In addition, we also show how the choice of an appropriate beamforming scheme depends on the signal-to-noise ratio regime.", "title": "" }, { "docid": "f06ef5251f1342bfdb35be30e6b49437", "text": "Accurate activity recognition enables the development of a variety of ubiquitous computing applications, such as context-aware systems, lifelogging, and personal health systems. Wearable sensing technologies can be used to gather data for activity recognition without requiring sensors to be installed in the infrastructure. However, the user may need to wear multiple sensors for accurate recognition of a larger number of different activities. We developed a wearable acoustic sensor, called BodyScope, to record the sounds produced in the user's throat area and classify them into user activities, such as eating, drinking, speaking, laughing, and coughing. The F-measure of the Support Vector Machine classification of 12 activities using only our BodyScope sensor was 79.5%. We also conducted a small-scale in-the-wild study, and found that BodyScope was able to identify four activities (eating, drinking, speaking, and laughing) at 71.5% accuracy.", "title": "" }, { "docid": "dcf231b887d7caeec341850507561197", "text": "Convolutional neural networks (CNNs) have attracted increasing attention in the remote sensing community. Most CNNs only take the last fully-connected layers as features for the classification of remotely sensed images, discarding the other convolutional layer features which may also be helpful for classification purposes. In this paper, we propose a new adaptive deep pyramid matching (ADPM) model that takes advantage of the features from all of the convolutional layers for remote sensing image classification. To this end, the optimal fusing weights for different convolutional layers are learned from the data itself. In remotely sensed scenes, the objects of interest exhibit different scales in distinct scenes, and even a single scene may contain objects with different sizes. To address this issue, we select the CNN with spatial pyramid pooling (SPP-net) as the basic deep network, and further construct a multi-scale ADPM model to learn complementary information from multi-scale images. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods.
Keywords—Convolutional neural network (CNN), adaptive deep pyramid matching (ADPM), convolutional features, multi-scale ensemble, remote-sensing scene classification.", "title": "" }, { "docid": "379361a68388bda81375f7fb543689cc", "text": "A novel circular aperture antenna is presented. The proposed nonprotruding antenna is composed of a cylindrical cavity embedding an inverted hat antenna whose profile is defined by three elliptical segments. This profile can be used to control and shape the return loss response over a wide frequency band (31% at -15 dB). The present design combines the benefits of a low-profile antenna and the broad bandwidth of the inverted hat monopole. A parametric analysis of the antenna is conducted. Omnidirectional monopole-like radiation pattern and vertical polarization are also verified with measurements and simulations. This antenna constitutes a low drag candidate for a distance measurement equipment (DME) aircraft navigation system.", "title": "" }, { "docid": "09cffaca68a254f591187776e911d36e", "text": "Signaling across cellular membranes, the 826 human G protein-coupled receptors (GPCRs) govern a wide range of vital physiological processes, making GPCRs prominent drug targets. X-ray crystallography provided GPCR molecular architectures, which also revealed the need for additional structural dynamics data to support drug development. Here, nuclear magnetic resonance (NMR) spectroscopy with the wild-type-like A2A adenosine receptor (A2AAR) in solution provides a comprehensive characterization of signaling-related structural dynamics. All six tryptophan indole and eight glycine backbone 15N-1H NMR signals in A2AAR were individually assigned. These NMR probes provided insight into the role of Asp52(2.50) as an allosteric link between the orthosteric drug binding site and the intracellular signaling surface, revealing strong interactions with the toggle switch Trp246(6.48), and delineated the structural response to variable efficacy of bound drugs across A2AAR. The present data support GPCR signaling based on dynamic interactions between two semi-independent subdomains connected by an allosteric switch at Asp52(2.50).", "title": "" }, { "docid": "7b104b14b4219ecc2d1d141fbf0e707b", "text": "Github facilitates the pull-request mechanism as an outstanding social coding paradigm by integrating with social media. The review process of pull-requests is a typical crowd sourcing job which needs to solicit opinions of the community. Recommending appropriate reviewers can reduce the time between the submission of a pull-request and the actual review of it. In this paper, we firstly extend the traditional Machine Learning (ML) based approach of bug triaging to reviewer recommendation.
Furthermore, we analyze social relations between contributors and reviewers, and propose a novel approach to recommend highly relevant reviewers by mining comment networks (CN) of given projects. Finally, we demonstrate the effectiveness of these two approaches with quantitative evaluations. The results show that CN-based approach achieves a significant improvement over the ML-based approach, and on average it reaches a precision of 78% and 67% for top-1 and top-2 recommendation respectively, and a recall of 77% for top-10 recommendation.", "title": "" }, { "docid": "082e747ab9f93771a71e2b6147d253b2", "text": "Social networks are often grounded in spatial locality where individuals form relationships with those they meet nearby. However, the location of individuals in online social networking platforms is often unknown. Prior approaches have tried to infer individuals’ locations from the content they produce online or their online relations, but often are limited by the available location-related data. We propose a new method for social networks that accurately infers locations for nearly all of individuals by spatially propagating location assignments through the social network, using only a small number of initial locations. In five experiments, we demonstrate the effectiveness in multiple social networking platforms, using both precise and noisy data to start the inference, and present heuristics for improving performance. In one experiment, we demonstrate the ability to infer the locations of a group of users who generate over 74% of the daily Twitter message volume with an estimated median location error of 10km. Our results open the possibility of gathering large quantities of location-annotated data from social media platforms.", "title": "" } ]
scidocsrr
4a19e25699a909235a4e1dbe84e4efd4
Anorexia on Tumblr: A Characterization Study
[ { "docid": "96a79bc015e34db18e32a31bfaaace36", "text": "We consider social media as a promising tool for public health, focusing on the use of Twitter posts to build predictive models about the forthcoming influence of childbirth on the behavior and mood of new mothers. Using Twitter posts, we quantify postpartum changes in 376 mothers along dimensions of social engagement, emotion, social network, and linguistic style. We then construct statistical models from a training set of observations of these measures before and after the reported childbirth, to forecast significant postpartum changes in mothers. The predictive models can classify mothers who will change significantly following childbirth with an accuracy of 71%, using observations about their prenatal behavior, and as accurately as 80-83% when additionally leveraging the initial 2-3 weeks of postnatal data. The study is motivated by the opportunity to use social media to identify mothers at risk of postpartum depression, an underreported health concern among large populations, and to inform the design of low-cost, privacy-sensitive early-warning systems and intervention programs aimed at promoting wellness postpartum.", "title": "" }, { "docid": "e9d987351816570b29d0144a6a7bd2ae", "text": "One’s state of mind will influence her perception of the world and people within it. In this paper, we explore attitudes and behaviors toward online social media based on whether one is depressed or not. We conducted semistructured face-to-face interviews with 14 active Twitter users, half of whom were depressed and the other half non-depressed. Our results highlight key differences between the two groups in terms of perception towards online social media and behaviors within such systems. Non-depressed individuals perceived Twitter as an information consuming and sharing tool, while depressed individuals perceived it as a tool for social awareness and emotional interaction. We discuss several design implications for future social networks that could better accommodate users with depression and provide insights towards helping depressed users meet their needs through online social media.", "title": "" } ]
[ { "docid": "71a9394d995cefb8027bed3c56ec176c", "text": "A broadband microstrip-fed printed antenna is proposed for phased antenna array systems. The antenna consists of two parallel-modified dipoles of different lengths. The regular dipole shape is modified to a quasi-rhombus shape by adding two triangular patches. Using two dipoles helps maintain stable radiation patterns close to their resonance frequencies. A modified array configuration is proposed to further enhance the antenna radiation characteristics and usable bandwidth. Scanning capabilities are studied for a four-element array. The proposed antenna provides endfire radiation patterns with high gain, high front-to-back (F-to-B) ratio, low cross-polarization level, wide beamwidth, and wide scanning angles in a wide bandwidth of 103%", "title": "" }, { "docid": "42fd4018cbfb098ef8e3957b1cee38f0", "text": "We propose an algorithm for combinatorial optimization where an explicit check for the repetition of configurations is added to the basic scheme of Tabu search. In our Tabu scheme the appropriate size of the list is learned in an automated way by reacting to the occurrence of cycles. In addition, if the search appears to be repeating an excessive number of solutions excessively often, then the search is diversified by making a number of random moves proportional to a moving average of the cycle length. The reactive scheme is compared to a ”strict” Tabu scheme, that forbids the repetition of configurations and to schemes with a fixed or randomly varying list size. From the implementation point of view we show that the Hashing or Digital Tree techniques can be used in order to search for repetitions in a time that is approximately constant. We present the results obtained for a series of computational tests on a benchmark function, on the 0-1 Knapsack Problem, and on the Quadratic Assignment Problem.", "title": "" }, { "docid": "889c8754c97db758b474a6f140b39911", "text": "Herbal toothpaste Salvadora with comprehensive effective materials for dental health ranging from antibacterial, detergent and whitening properties including benzyl isothiocyanate, alkaloids, and anions such as thiocyanate, sulfate, and nitrate with potential antibacterial feature against oral microbial flora, silica and chloride for oral disinfection and bleaching the tooth, fluoride to strengthen tooth enamel, and saponin with appropriate detergent, and resin which protects tooth enamel by placing on it and is aggregated in Salvadora has been formulated. The paste is also from other herbs extract including valerian and chamomile. Current toothpaste has antibacterial, anti-plaque, anti-tartar and whitening, and wood extract of the toothbrush strengthens the tooth and enamel, and prevents the cancellation of enamel.From the other side, resin present in toothbrush wood creates a proper covering on tooth enamel and protects it against decay and benzyl isothiocyanate and also alkaloids present in miswak wood gives Salvadora toothpaste considerable antibacterial and bactericidal effects. Anti-inflammatory effects of the toothpaste are for apigenin and alpha bisabolol available in chamomile extract and seskuiterpen components including valeric acid with sedating features give the paste sedating and calming effect to oral tissues.", "title": "" }, { "docid": "443a4fe9e7484a18aa53a4b142d93956", "text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. 
Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.", "title": "" }, { "docid": "a7cdfc27dbc704140ef5b3199469898f", "text": "This technical report updates the 2004 American Academy of Pediatrics technical report on the legalization of marijuana. Current epidemiology of marijuana use is presented, as are definitions and biology of marijuana compounds, side effects of marijuana use, and effects of use on adolescent brain development. Issues concerning medical marijuana specifically are also addressed. Concerning legalization of marijuana, 4 different approaches in the United States are discussed: legalization of marijuana solely for medical purposes, decriminalization of recreational use of marijuana, legalization of recreational use of marijuana, and criminal prosecution of recreational (and medical) use of marijuana. These approaches are compared, and the latest available data are presented to aid in forming public policy. The effects on youth of criminal penalties for marijuana use and possession are also addressed, as are the effects or potential effects of the other 3 policy approaches on adolescent marijuana use. Recommendations are included in the accompanying policy statement.", "title": "" }, { "docid": "bb6f5899c4f1c652e30945c49ce4a2d0", "text": "This paper reports the piezoelectric properties of ScAlN thin films. We evaluated the piezoelectric coefficients d<sub>33</sub> and d<sub>31</sub> of Sc<sub>x</sub>Al<sub>1-x</sub>N thin films directly deposited onto silicon wafers, as well the radio frequency (RF) electrical characteristics of Sc<sub>0.35</sub>Al<sub>0.65</sub>N bulk acoustic wave (BAW) resonators at around 2 GHz, and found a maximum value for d<sub>33</sub> of 28 pC/N and a maximum -d<sub>31</sub> of 13 pm/V at 40% scandium concentration. In BAW resonators that use Sc<sub>0.35</sub>Al<sub>0.65</sub>N as a piezoelectric film, the electromechanical coupling coefficient k<sup>2</sup> (=15.5%) was found to be 2.6 times that of resonators with AlN films. These experimental results are in very close agreement with first-principles calculations. 
The large electromechanical coupling coefficient and high sound velocity of these films should make them suitable for high frequency applications.", "title": "" }, { "docid": "acd5879d3d2746e4c6036691e4099f7a", "text": "Alkamides are fatty acid amides of wide distribution in plants, structurally related to N-acyl-L-homoserine lactones (AHLs) from Gram-negative bacteria and to N- acylethanolamines (NAEs) from plants and mammals. Global analysis of gene expression changes in Arabidopsis thaliana in response to N-isobutyl decanamide, the most highly active alkamide identified to date, revealed an overrepresentation of defense-responsive transcriptional networks. In particular, genes encoding enzymes for jasmonic acid (JA) biosynthesis increased their expression, which occurred in parallel with JA, nitric oxide (NO) and H₂O₂ accumulation. The activity of the alkamide to confer resistance against the necrotizing fungus Botrytis cinerea was tested by inoculating Arabidopsis detached leaves with conidiospores and evaluating disease symptoms and fungal proliferation. N-isobutyl decanamide application significantly reduced necrosis caused by the pathogen and inhibited fungal proliferation. Arabidopsis mutants jar1 and coi1 altered in JA signaling and a MAP kinase mutant (mpk6), unlike salicylic acid- (SA) related mutant eds16/sid2-1, were unable to defend from fungal attack even when N-isobutyl decanamide was supplied, indicating that alkamides could modulate some necrotrophic-associated defense responses through JA-dependent and MPK6-regulated signaling pathways. Our results suggest a role of alkamides in plant immunity induction.", "title": "" }, { "docid": "41a4c88cb1446603f43a4888b6c13f61", "text": "This paper gives an overview of the ArchWare European Project1. The broad scope of ArchWare is to respond to the ever-present demand for software systems that are capable of accommodating change over their lifetime, and therefore are evolvable. In order to achieve this goal, ArchWare develops an integrated set of architecture-centric languages and tools for the modeldriven engineering of evolvable software systems based on a persistent run-time framework. The ArchWare Integrated Development Environment comprises: (a) innovative formal architecture description, analysis, and refinement languages for describing the architecture of evolvable software systems, verifying their properties and expressing their refinements; (b) tools to support architecture description, analysis, and refinement as well as code generation; (c) enactable processes for supporting model-driven software engineering; (d) a persistent run-time framework including a virtual machine for process enactment. It has been developed using ArchWare itself and is available as Open Source Software.", "title": "" }, { "docid": "455e3f0c6f755d78ecafcdff14c46014", "text": "BACKGROUND\nIn neonatal and early childhood surgeries such as meningomyelocele repairs, closing deep wounds and oncological treatment, tensor fasciae lata (TFL) flaps are used. However, there are not enough data about structural properties of TFL in foetuses, which can be considered as the closest to neonates in terms of sampling. This study's main objective is to gather data about morphological structures of TFL in human foetuses to be used in newborn surgery.\n\n\nMATERIALS AND METHODS\nFifty formalin-fixed foetuses (24 male, 26 female) with gestational age ranging from 18 to 30 weeks (mean 22.94 ± 3.23 weeks) were included in the study. 
TFL samples were obtained by bilateral dissection and then surface area, width and length parameters were recorded. Digital callipers were used for length and width measurements whereas surface area was calculated using digital image analysis software.\n\n\nRESULTS\nNo statistically significant differences were found in terms of numerical value of parameters between sides and sexes (p > 0.05). Linear functions for TFL surface area, width, anterior and posterior margin lengths were calculated as y = -225.652 + 14.417 × age (weeks), y = -5.571 + 0.595 × age (weeks), y = -4.276 + 0.909 × age (weeks), and y = -4.468 + 0.779 × age (weeks), respectively.\n\n\nCONCLUSIONS\nLinear functions for TFL surface area, width and lengths can be used in designing TFL flap dimensions in newborn surgery. In addition, using those described linear functions can also be beneficial in prediction of TFL flap dimensions in autopsy studies.", "title": "" }, { "docid": "fdbae668610803991b359702fbd8d430", "text": "The progress of the social science disciplines depends on conducting relevant research. However, research methodology adopted and choices made during the course of the research project are underpinned by varying ontological, epistemological and axiological positions that may be known or unknown to the researcher. This paper sought to critically explore the philosophical underpinnings of the social science research. It was suggested that a “multiversal” ontological position, positivist-hermeneutic epistemological position and value-laden axiological position should be adopted for social science research by non-western scholars as alternative to the dominant naïve realist, positivist, and value-free orientation. Against the backdrop of producing context-relevant knowledge, non-western scholars are encouraged to re-examine their philosophical positions in the conduct of social science research.", "title": "" }, { "docid": "7b7cb898d6d7f4383489f390a3479b8a", "text": "Although the Evolved Packet System Authentication and Key Agreement (EPS-AKA) provides security and privacy enhancements in 3rd Generation Partnership Project (3GPP), the International Mobile Subscriber Identity (IMSI) is sent in clear text in order to obtain service. Various efforts to provide security mechanisms to protect this unique private identity have not resulted in methods implemented to protect the disclosure of the IMSI. The exposure of the IMSI brings risk to user privacy, and knowledge of it can lead to several passive and active attacks targeted at specific IMSI's and their respective users. Further, the Temporary Mobile Subscribers Identity (TMSI) generated by the Authentication Center (AuC) have been found to be prone to rainbow and brute force attacks, hence an attacker who gets hold of the TMSI can be able to perform social engineering in tracing the TMSI to the corresponding IMSI of a User Equipment (UE). This paper proposes a change to the EPS-AKA authentication process in 4G Long Term Evolution (LTE) Network by including the use of Public Key Infrastructure (PKI). The change would result in the IMSI never being released in the clear in an untrusted network.", "title": "" }, { "docid": "977a1d6be20dd790e78bd47c8d8d7422", "text": "Conformation, genetics, and behavioral drive are the major determinants of success in canine athletes, although controllable variables, such as training and nutrition, play an important role. 
The scope and breadth of canine athletic events has expanded dramatically in the past 30 years, but with limited research on performance nutrition. There are considerable data examining nutritional physiology in endurance dogs and in sprinting dogs; however, nutritional studies for agility, field trial, and detection are rare. This article highlights basic nutritional physiology and interventions for exercise, and reviews newer investigations regarding aging working and service dogs, and canine detection activities.", "title": "" }, { "docid": "42043ee6577d791874c1aa34baf81e64", "text": "Bagging, boosting and Random Forests are classical ensemble methods used to improve the performance of single classifiers. They obtain superior performance by increasing the accuracy and diversity of the single classifiers. Attempts have been made to reproduce these methods in the more challenging context of evolving data streams. In this paper, we propose a new variant of bagging, called leveraging bagging. This method combines the simplicity of bagging with adding more randomization to the input, and output of the classifiers. We test our method by performing an evaluation study on synthetic and real-world datasets comprising up to ten million examples.", "title": "" }, { "docid": "d4cd0dabcf4caa22ad92fab40844c786", "text": "NA", "title": "" }, { "docid": "e3b3e4e75580f3dad0f2fb2b9e28fff4", "text": "The present study introduced an integrated method for the production of biodiesel from microalgal oil. Heterotrophic growth of Chlorella protothecoides resulted in the accumulation of high lipid content (55%) in cells. Large amount of microalgal oil was efficiently extracted from these heterotrophic cells by using n-hexane. Biodiesel comparable to conventional diesel was obtained from heterotrophic microalgal oil by acidic transesterification. The best process combination was 100% catalyst quantity (based on oil weight) with 56:1 molar ratio of methanol to oil at temperature of 30 degrees C, which reduced product specific gravity from an initial value of 0.912 to a final value of 0.8637 in about 4h of reaction time. The results suggested that the new process, which combined bioengineering and transesterification, was a feasible and effective method for the production of high quality biodiesel from microalgal oil.", "title": "" }, { "docid": "8745e21073db143341e376bad1f0afd7", "text": "The Virtual Reality (VR) user interface style allows natural hand and body motions to manipulate virtual objects in 3D environments using one or more 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, such as scienti c visualization, architectural visualization, and remote manipulation. Currently, the programming e ort required to produce a VR application is too large, and many pitfalls must be avoided in the creation of successful VR programs. In this paper we describe the Decoupled Simulation Model for creating successful VR applications, and a software system that embodies this model. The MR Toolkit simpli es the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. 
The MR Toolkit encourages programmers to structure their applications to take advantage of the distributed computing capabilities of workstation networks improving the application's performance. In this paper, the motivations and the architecture of the toolkit are outlined, the programmer's view is described, and a simple application is briefly described.", "title": "" }, { "docid": "8a0c295e620b68c07005d6d96d4acbe9", "text": "One method of viral marketing involves seeding certain consumers within a population to encourage faster adoption of the product throughout the entire population. However, determining how many and which consumers within a particular social network should be seeded to maximize adoption is challenging. We define a strategy space for consumer seeding by weighting a combination of network characteristics such as average path length, clustering coefficient, and degree. We measure strategy effectiveness by simulating adoption on a Bass-like agent-based model, with five different social network structures: four classic theoretical models (random, lattice, small-world, and preferential attachment) and one empirical (extracted from Twitter friendship data). To discover good seeding strategies, we have developed a new tool, called BehaviorSearch, which uses genetic algorithms to search through the parameter-space of agent-based models. This evolutionary search also provides insight into the interaction between strategies and network structure. Our results show that one simple strategy (ranking by node degree) is near-optimal for the four theoretical networks, but that a more nuanced strategy performs significantly better on the empirical Twitter-based network. We also find a correlation between the optimal seeding budget for a network, and the inequality of the degree distribution.", "title": "" }, { "docid": "bb2de14849800861d99b40cb8bfba562", "text": "In this paper, the problem of time series prediction is studied. A Bayesian procedure based on Gaussian process models using a nonstationary covariance function is proposed. Experiments proved the approach effectiveness with an excellent prediction and a good tracking. The conceptual simplicity, and good performance of Gaussian process models should make them very attractive for a wide range of problems. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "1e3585a27b6373685544dc392140a4fb", "text": "When operating in partially-known environments, autonomous vehicles must constantly update their maps and plans based on new sensor information. Much focus has been placed on developing efficient incremental planning algorithms that are able to efficiently replan when the map and associated cost function changes. However, much less attention has been placed on efficiently updating the cost function used by these planners, which can represent a significant portion of the time spent replanning. In this paper, we present the Limited Incremental Distance Transform algorithm, which can be used to efficiently update the cost function used for planning when changes in the environment are observed. Using this algorithm it is possible to plan paths in a completely incremental way starting from a list of changed obstacle classifications. We present results comparing the algorithm to the Euclidean distance transform and a mask-based incremental distance transform algorithm. Computation time is reduced by an order of magnitude for a UAV application.
We also provide example results from an autonomous micro aerial vehicle with on-board sensing and computing.", "title": "" }, { "docid": "cf54c485a54d9b22d06710684061eac2", "text": "Many threads packages have been proposed for programming wireless sensor platforms. However, many sensor network operating systems still choose to provide an event-driven model, due to efficiency concerns. We present TOS-Threads, a threads package for TinyOS that combines the ease of a threaded programming model with the efficiency of an event-based kernel. TOSThreads is backwards compatible with existing TinyOS code, supports an evolvable, thread-safe kernel API, and enables flexible application development through dynamic linking and loading. In TOS-Threads, TinyOS code runs at a higher priority than application threads and all kernel operations are invoked only via message passing, never directly, ensuring thread-safety while enabling maximal concurrency. The TOSThreads package is non-invasive; it does not require any large-scale changes to existing TinyOS code.\n We demonstrate that TOSThreads context switches and system calls introduce an overhead of less than 0.92% and that dynamic linking and loading takes as little as 90 ms for a representative sensing application. We compare different programming models built using TOSThreads, including standard C with blocking system calls and a reimplementation of Tenet. Additionally, we demonstrate that TOSThreads is able to run computationally intensive tasks without adversely affecting the timing of critical OS services.", "title": "" } ]
scidocsrr
fa0af2feb6dd57a7698470f706bcbe74
Supply Networks as Complex Systems: A Network-Science-Based Characterization
[ { "docid": "bf5f08174c55ed69e454a87ff7fbe6e2", "text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "236896835b48994d7737b9152c0e435f", "text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.", "title": "" } ]
[ { "docid": "bda4bdc27e9ea401abb214c3fb7c9813", "text": "Lipedema is a common, but often underdiagnosed masquerading disease of obesity, which almost exclusively affects females. There are many debates regarding the diagnosis as well as the treatment strategies of the disease. The clinical diagnosis is relatively simple, however, knowledge regarding the pathomechanism is less than limited and curative therapy does not exist at all demanding an urgent need for extensive research. According to our hypothesis, lipedema is an estrogen-regulated polygenetic disease, which manifests in parallel with feminine hormonal changes and leads to vasculo- and lymphangiopathy. Inflammation of the peripheral nerves and sympathetic innervation abnormalities of the subcutaneous adipose tissue also involving estrogen may be responsible for neuropathy. Adipocyte hyperproliferation is likely to be a secondary phenomenon maintaining a vicious cycle. Herein, the relevant articles are reviewed from 1913 until now and discussed in context of the most likely mechanisms leading to the disease, which could serve as a starting point for further research.", "title": "" }, { "docid": "b68da205eb9bf4a6367250c6f04d2ad4", "text": "Trends change rapidly in today’s world, prompting this key question: What is the mechanism behind the emergence of new trends? By representing real-world dynamic systems as complex networks, the emergence of new trends can be symbolized by vertices that “shine.” That is, at a specific time interval in a network’s life, certain vertices become increasingly connected to other vertices. This process creates new high-degree vertices, i.e., network stars. Thus, to study trends, we must look at how networks evolve over time and determine how the stars behave. In our research, we constructed the largest publicly available network evolution dataset to date, which contains 38,000 real-world networks and 2.5 million graphs. Then, we performed the first precise wide-scale analysis of the evolution of networks with various scales. Three primary observations resulted: (a) links are most prevalent among vertices that join a network at a similar time; (b) the rate that new vertices join a network is a central factor in molding a network’s topology; and (c) the emergence of network stars (high-degree vertices) is correlated with fast-growing networks. We applied our learnings to develop a flexible network-generation model based on large-scale, real-world data. This model gives a better understanding of how stars rise and fall within networks, and is applicable to dynamic systems both in nature and society. Multimedia Links I Video I Interactive Data Visualization I Data I Code Tutorials", "title": "" }, { "docid": "7a08a183a3acec668d6405c3a9a01765", "text": "In this work, we will investigate the task of building a Question Answering system using deep neural networks augmented with a memory component. Our goal is to implement the MemNN and its extensions described in [10] and [8] and apply it on the bAbI QA tasks introduced in [9]. Unlike simulated datasets like bAbI, the vanilla MemNN system is not sufficient to achieve satisfactory performance on real-world QA datasets like Wiki QA [6] and MCTest [5]. We will explore extensions to the proposed MemNN systems to make it work on these complex datasets.", "title": "" }, { "docid": "1f5bcb6bc3fde7bc294240ce652ae4ab", "text": "Rock climbing has increased in popularity as both a recreational physical activity and a competitive sport. 
Climbing is physiologically unique in requiring sustained and intermittent isometric forearm muscle contractions for upward propulsion. The determinants of climbing performance are not clear but may be attributed to trainable variables rather than specific anthropometric characteristics.", "title": "" }, { "docid": "2c58791fd0f477fadf6d376ac4aaf16e", "text": "Networked digital media present new challenges for people to locate information that they can trust. At the same time, societal reliance on information that is available solely or primarily via the Internet is increasing. This article discusses how and why digitally networked communication environments alter traditional notions of trust, and presents research that examines how information consumers make judgments about the credibility and accuracy of information they encounter online. Based on this research, the article focuses on the use of cognitive heuristics in credibility evaluation. Findings from recent studies are used to illustrate the types of cognitive heuristics that information consumers employ when determining what sources and information to trust online. The article concludes with an agenda for future research that is needed to better understand the role and influence of cognitive heuristics in credibility evaluation in computer-mediated communication contexts. © 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "bd12f418cd731f9103a3d47ebac6951b", "text": "Smartphones and tablets provide access to the Web anywhere and anytime. Automatic Text Summarization techniques aim to extract the fundamental information in documents. Making automatic summarization work in portable devices is a challenge, in several aspects. This paper presents an automatic summarization application for Android devices. The proposed solution is a multi-feature language independent summarization application targeted at news articles. Several evaluation assessments were conducted and indicate that the proposed solution provides good results.", "title": "" }, { "docid": "1b638147b80419c6a4c472b02cd9916f", "text": "Herein, we report the development of highly water dispersible nanocomposite of conducting polyaniline and multiwalled carbon nanotubes (PANI-MWCNTs) via novel, `dynamic' or `stirred' liquid-liquid interfacial polymerization method using sulphonic acid as a dopant. MWCNTs were functionalized prior to their use and then dispersed in water. The nanocomposite was further subjected for physico-chemical characterization using spectroscopic (UV-Vis and FT-IR), FE-SEM analysis. The UV-VIS spectrum of the PANI-MWCNTs nanocomposite shows a free carrier tail with increasing absorption at higher wavelength. This confirms the presence of conducting emeraldine salt phase of the polyaniline and is further supported by FT-IR analysis. The FE-SEM images show that the thin layer of polyaniline is coated over the functionalized MWCNTs forming a `core-shell' like structure. The synthesized nanocomposite was found to be highly dispersible in water and shows beautiful colour change from dark green to blue with change in pH of the solution from 1 to 12 (i.e. from acidic to basic pH). 
The change in colour of the polyaniline-MWCNTs nanocomposite is mainly due to the pH dependent chemical transformation/change of the thin layer of polyaniline.", "title": "" }, { "docid": "54d08377abbe59ada133c907f8d49eb6", "text": "To avoid injury to the perineal branches of the pudendal nerve during urinary incontinence sling procedures, a thorough knowledge of the course of these nerve branches is essential. The dorsal nerve of the clitoris (DNC) may be at risk when performing the retropubic (tension-free vaginal tape) procedure as well as the inside-out and outside-in transobturator tape procedures. The purpose of this study was to identify the anatomical relationships of the DNC to the tapes placed during the procedures mentioned and to determine the influence of body variations. In this cadaveric study, the body mass index (cBMI) of unembalmed cadavers was determined. Suburethral tape procedures were performed by a registered urologist and gynecologist on a sample of 15 female cadavers; six retropubic, seven inside-out and nine outside-in transobturator tapes were inserted. After embalmment, dissections were performed and the distances between the DNC and the tapes measured. In general the trajectory of the outside-in tape was closer to the DNC than that of the other tapes. cBMI was weakly and nonsignificantly correlated with the distance between the trajectory of the tape and the DNC for the inside-out tape and the tension-free vaginal tape, but not for the outside-in tape. The findings suggest that the DNC is less likely to be injured during the inside-out tape procedure than during the outside-in procedure, regardless of BMI. Future studies on larger samples are desirable to confirm these findings.", "title": "" }, { "docid": "63c1080df773ff57e3af8468e8d31d35", "text": "This report refers to a body of investigations performed in support of experiments aboard the Space Shuttle, and designed to counteract the symptoms of Space Adaptation Syndrome, which resemble those of motion sickness on Earth. For these supporting studies we examined the autonomic manifestations of earth-based motion sickness. Heart rate, respiration rate, finger pulse volume and basal skin resistance were measured on 127 men and women before, during and after exposure to nauseogenic rotating chair tests. Significant changes in all autonomic responses were observed across the tests (p<.05). Significant differences in autonomic responses among groups divided according to motion sickness susceptibility were also observed (p<.05). Results suggest that the examination of autonomic responses as an objective indicator of motion sickness malaise is warranted and may contribute to the overall understanding of the syndrome on Earth and in Space. DESCRIPTORS: heart rate, respiration rate, finger pulse volume, skin resistance, biofeedback, motion sickness.", "title": "" }, { "docid": "34611e88dd890c13a3b46b21be499c7b", "text": "A low-power clocking solution is presented based on fractional-N highly digital LC-phase-locked loop (PLL) and sub-sampled ring PLL targeting multi-standard SerDes applications. The shared fractional-N digital LC-PLL covers 7–10 GHz frequency range consuming only 8-mW power and occupying 0.15 mm2 of silicon area with integrated jitter of 264 fs. Frequency resolution of the LC-PLL is 2 MHz. Per lane clock is generated using wide bandwidth ring PLL covering 800 MHz to 4 GHz to support the data rates between 1 and 14 Gb/s.
The ring PLL supports dither-less fractional resolution of 250 MHz, corrects I/Q error with split tuning, and achieves less than 400-fs integrated jitter. Transmitter works at 14 Gb/s with power efficiency of 0.80 pJ/bit.", "title": "" }, { "docid": "8ce15f6a0d6e5a49dcc2953530bceb19", "text": "In signal restoration by Bayesian inference, one typically uses a parametric model of the prior distribution of the signal. Here, we consider how the parameters of a prior model should be estimated from observations of uncorrupted signals. A lot of recent work has implicitly assumed that maximum likelihood estimation is the optimal estimation method. Our results imply that this is not the case. We first obtain an objective function that approximates the error occurred in signal restoration due to an imperfect prior model. Next, we show that in an important special case (small gaussian noise), the error is the same as the score-matching objective function, which was previously proposed as an alternative for likelihood based on purely computational considerations. Our analysis thus shows that score matching combines computational simplicity with statistical optimality in signal restoration, providing a viable alternative to maximum likelihood methods. We also show how the method leads to a new intuitive and geometric interpretation of structure inherent in probability distributions.", "title": "" }, { "docid": "dbc64c508b074f435b4175e6c8b967d5", "text": "Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction.", "title": "" }, { "docid": "5c88fae140f343ae3002685ab96fd848", "text": "Function recovery is a critical step in many binary analysis and instrumentation tasks. Existing approaches rely on commonly used function prologue patterns to recognize function starts, and possibly epilogues for the ends. However, this approach is not robust when dealing with different compilers, compiler versions, and compilation switches. Although machine learning techniques have been proposed, the possibility of errors still limits their adoption. In this work, we present a novel function recovery technique that is based on static analysis. Evaluations have shown that we can produce very accurate results that are applicable to a wider set of applications.", "title": "" }, { "docid": "1e06f7e6b7b0d3f9a21a814e50af6e3c", "text": "The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not been comprehensively studied to its potential. 
In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data for improvements. Experimental results show that bidirectional GRU networks trained on word-level features, with Latent Topic Clustering modules, is the most accurate model scoring 0.805 F1.", "title": "" }, { "docid": "88d82f9a96ce40ed2d93d6cb9651f6be", "text": "The way developers edit day-to-day code tend to be repetitive and often use existing code elements. Many researchers tried to automate this tedious task of code changes by learning from specific change templates and applied to limited scope. The advancement of Neural Machine Translation (NMT) and the availability of the vast open source software evolutionary data open up a new possibility of automatically learning those templates from the wild. However, unlike natural languages, for which NMT techniques were originally designed, source code and the changes have certain properties. For instance, compared to natural language source code vocabulary can be virtually infinite. Further, any good change in code should not break its syntactic structure. Thus, deploying state-of-the-art NMT models without domain adaptation may poorly serve the purpose. To this end, in this work, we propose a novel Tree2Tree Neural Machine Translation system to model source code changes and learn code change patterns from the wild. We realize our model with a change suggestion engine: CODIT. We train the model with more than 30k real-world changes and evaluate it with 6k patches. Our evaluation shows the effectiveness of CODIT in learning and suggesting abstract change templates. CODIT also shows promise in suggesting concrete patches and generating bug fixes.", "title": "" }, { "docid": "f83017ad2454c465d19f70f8ba995e95", "text": "The origins of life on Earth required the establishment of self-replicating chemical systems capable of maintaining and evolving biological information. In an RNA world, single self-replicating RNAs would have faced the extreme challenge of possessing a mutation rate low enough both to sustain their own information and to compete successfully against molecular parasites with limited evolvability. Thus theoretical analyses suggest that networks of interacting molecules were more likely to develop and sustain life-like behaviour. Here we show that mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. We find that a specific three-membered network has highly cooperative growth dynamics. When such cooperative networks are competed directly against selfish autocatalytic cycles, the former grow faster, indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation. We can observe the evolvability of networks through in vitro selection. Our experiments highlight the advantages of cooperative behaviour even at the molecular stages of nascent life.", "title": "" }, { "docid": "41b712d0d485c65a8dff32725c215f97", "text": "In this article, we present a novel, multi-user, virtual reality environment for the interactive, collaborative 3D analysis of large 3D scans and the technical advancements that were necessary to build it: a multi-view rendering system for large 3D point clouds, a suitable display infrastructure, and a suite of collaborative 3D interaction techniques. 
The cultural heritage site of Valcamonica in Italy with its large collection of prehistoric rock-art served as an exemplary use case for evaluation. The results show that our output-sensitive level-of-detail rendering system is capable of visualizing a 3D dataset with an aggregate size of more than 14 billion points at interactive frame rates. The system design in this exemplar application results from close exchange with a small group of potential users: archaeologists with expertise in rockart. The system allows them to explore the prehistoric art and its spatial context with highly realistic appearance. A set of dedicated interaction techniques was developed to facilitate collaborative visual analysis. A multi-display workspace supports the immediate comparison of geographically distributed artifacts. An expert review of the final demonstrator confirmed the potential for added value in rock-art research and the usability of our collaborative interaction techniques.", "title": "" }, { "docid": "da5b920aa576589bc6041fa41250307f", "text": "We investigate the problem of fine-grained sketch-based image retrieval (SBIR), where free-hand human sketches are used as queries to perform instance-level retrieval of images. This is an extremely challenging task because (i) visual comparisons not only need to be fine-grained but also executed cross-domain, (ii) free-hand (finger) sketches are highly abstract, making fine-grained matching harder, and most importantly (iii) annotated cross-domain sketch-photo datasets required for training are scarce, challenging many state-of-the-art machine learning techniques. In this paper, for the first time, we address all these challenges, providing a step towards the capabilities that would underpin a commercial sketch-based image retrieval application. We introduce a new database of 1,432 sketchphoto pairs from two categories with 32,000 fine-grained triplet ranking annotations. We then develop a deep tripletranking model for instance-level SBIR with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data. Extensive experiments are carried out to contribute a variety of insights into the challenges of data sufficiency and over-fitting avoidance when training deep networks for finegrained cross-domain ranking tasks.", "title": "" }, { "docid": "199541aa317b2ebb4d40906d974ce5f2", "text": "Experimental evidence has accumulated to suggest that biologically efficacious informational effects can be derived mimicking active compounds solely through electromagnetic distribution upon aqueous systems affecting biological systems. Empirically rigorous demonstrations of antimicrobial agent associated electromagnetic informational inhibition of MRSA, Entamoeba histolytica, Trichomonas vaginalis, Candida albicans and a host of other important and various reported effects have been evidenced, such as the electro-informational transfer of retinoic acid influencing human neuroblastoma cells and stem teratocarcinoma cells. Cell proliferation and differentiation effects from informationally affected fields interactive with aqueous systems are measured via microscopy, statistical analysis, reverse transcription polymerase chain reaction and other techniques. Information associated with chemical compounds affects biological aqueous systems, sans direct systemic exposure to the source molecule. 
This is a quantum effect, based on the interactivity between electromagnetic fields and aqueous ordered coherence domains. The encoding of aqueous systems and tissue by photonic transfer and instantiation of information rather than via direct exposure to potentially toxic drugs and physical substances holds clear promise of creating inexpensive non-toxic medical treatments.", "title": "" }, { "docid": "c5e401fe1b2a65677b93ae3e8aa60e18", "text": "In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.", "title": "" } ]
scidocsrr
043e9a62e6874e6f0e3a92f1b5d5cd25
Gamifying education: what is known, what is believed and what remains uncertain: a critical review
[ { "docid": "bda419b065c53853f86f7fdbf0e330f2", "text": "In current e-learning studies, one of the main challenges is to keep learners motivated in performing desirable learning behaviours and achieving learning goals. Towards tackling this challenge, social e-learning contributes favourably, but it requires solutions that can reduce side effects, such as abusing social interaction tools for ‘chitchat’, and further enhance learner motivation. In this paper, we propose a set of contextual gamification strategies, which apply flow and self-determination theory for increasing intrinsic motivation in social e-learning environments. This paper also presents a social e-learning environment that applies these strategies, followed by a user case study, which indicates increased learners’ perceived intrinsic motivation.", "title": "" } ]
[ { "docid": "c4f0e371ea3950e601f76f8d34b736e3", "text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.", "title": "" }, { "docid": "9f6429ac22b736bd988a4d6347d8475f", "text": "The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the \"modelling view\" of knowledge acquisition proposed by Clancey, the modeling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behavior (i.e. the problem-solving expertize) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning rather than in the nature of the real world. Recently, however, the potential value of task-independent knowlege bases (or \"ontologies\") suitable to large scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual level discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. 
We then discuss some formal ontological distinctions which may play an important role for such purpose.", "title": "" }, { "docid": "5967c7705173ee346b4d47eb7422df20", "text": "A novel learnable dictionary encoding layer is proposed in this paper for end-to-end language identification. It is inline with the conventional GMM i-vector approach both theoretically and practically. We imitate the mechanism of traditional GMM training and Supervector encoding procedure on the top of CNN. The proposed layer can accumulate high-order statistics from variable-length input sequence and generate an utterance level fixed-dimensional vector representation. Unlike the conventional methods, our new approach provides an end-to-end learning framework, where the inherent dictionary are learned directly from the loss function. The dictionaries and the encoding representation for the classifier are learned jointly. The representation is orderless and therefore appropriate for language identification. We conducted a preliminary experiment on NIST LRE07 closed-set task, and the results reveal that our proposed dictionary encoding layer achieves significant error reduction comparing with the simple average pooling.", "title": "" }, { "docid": "41a0b9797c556368f84e2a05b80645f3", "text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.", "title": "" }, { "docid": "5006770c9f7a6fb171a060ad3d444095", "text": "We developed a 56-GHz-bandwidth 2.0-Vppd linear MZM driver in 65-nm CMOS. It consumes only 180 mW for driving a 50-Ω impedance. We demonstrated the feasibility of drivers with less than 1 W for dual-polarization IQ modulation in 400-Gb/s systems.", "title": "" }, { "docid": "881a495a8329c71a0202c3510e21b15d", "text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.", "title": "" }, { "docid": "57ab94ce902f4a8b0082cc4f42cd3b3f", "text": "In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors’ capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to a new state of the art performance on the SimLex-999 dataset. 
We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.", "title": "" }, { "docid": "0425ba0d95b98409d684b9b07b59b73a", "text": "With a shift towards usage-based billing, the questions of how data costs affect mobile Internet use and how users manage mobile data arise. In this paper, we describe a mixed-methods study of mobile phone users' data usage practices in South Africa, a country where usage-based billing is prevalent and where data costs are high, to answer these questions. We do so using a large scale survey, in-depth interviews, and logs of actual data usage over time. Our findings suggest that unlike in more developed settings, when data is limited or expensive, mobile Internet users are extremely cost-conscious, and employ various strategies to optimize mobile data usage such as actively disconnecting from the mobile Internet to save data. Based on these findings, we suggest how the Ubicomp and related research communities can better support users that need to carefully manage their data to optimize costs.", "title": "" }, { "docid": "1ac76924d3fae2bbcb7f7b84f1c2ea5e", "text": "This chapter studies ontology matching : the problem of finding the semantic mappings between two given ontologies. This problem lies at the heart of numerous information processing applications. Virtually any application that involves multiple ontologies must establish semantic mappings among them, to ensure interoperability. Examples of such applications arise in myriad domains, including e-commerce, knowledge management, e-learning, information extraction, bio-informatics, web services, and tourism (see Part D of this book on ontology applications). Despite its pervasiveness, today ontology matching is still largely conducted by hand, in a labor-intensive and error-prone process. The manual matching has now become a key bottleneck in building large-scale information management systems. The advent of technologies such as the WWW, XML, and the emerging Semantic Web will further fuel information sharing applications and exacerbate the problem. Hence, the development of tools to assist in the ontology matching process has become crucial for the success of a wide variety of information management applications. In response to the above challenge, we have developed GLUE, a system that employs learning techniques to semi-automatically create semantic mappings between ontologies. We shall begin the chapter by describing a motivating example: ontology matching on the Semantic Web. Then we present our GLUE solution. Finally, we describe a set of experiments on several real-world domains, and show that GLUE proposes highly accurate semantic mappings.", "title": "" }, { "docid": "b33c7e26d3a0a8fc7fc0fb73b72840d4", "text": "As the number of Android malicious applications has explosively increased, effectively vetting Android applications (apps) has become an emerging issue. Traditional static analysis is ineffective for vetting apps whose code have been obfuscated or encrypted. Dynamic analysis is suitable to deal with the obfuscation and encryption of codes. However, existing dynamic analysis methods cannot effectively vet the applications, as a limited number of dynamic features have been explored from apps that have become increasingly sophisticated. 
In this work, we propose an effective dynamic analysis method called DroidWard in the aim to extract most relevant and effective features to characterize malicious behavior and to improve the detection accuracy of malicious apps. In addition to using the existing 9 features, DroidWard extracts 6 novel types of effective features from apps through dynamic analysis. DroidWard runs apps, extracts features and identifies benign and malicious apps with Support Vector Machine (SVM), Decision Tree (DTree) and Random Forest. 666 Android apps are used in the experiments and the evaluation results show that DroidWard correctly classifies 98.54% of malicious apps with 1.55% of false positives. Compared to existing work, DroidWard improves the TPR with 16.07% and suppresses the FPR with 1.31% with SVM, indicating that it is more effective than existing methods.", "title": "" }, { "docid": "de0482515de1d6134b8ff907be49d4dc", "text": "In this paper, we describe the Adaptive Place Advi sor, a conversational recommendation system designed to he lp users decide on a destination. We view the selection of destinations a an interactive, conversational process, with the advisory system in quiring about desired item characteristics and the human responding. The user model, which contains preferences regarding items, attributes, values and v lue combinations, is also acquired during the conversation. The system enhanc es the user’s requirements with the user model and retrieves suitable items fr om a case-base. If the number of items found by the system is unsuitable (too hig h, too low) the next attribute to be constrained or relaxed is selected based on t he information gain associated with the attributes. We also describe the current s tatu of the system and future work.", "title": "" }, { "docid": "c629dfdd363f1599d397ccde1f7be360", "text": "We propose a classification taxonomy over a large crawl of HTML tables on the Web, focusing primarily on those tables that express structured knowledge. The taxonomy separates tables into two top-level classes: a) those used for layout purposes, including navigational and formatting; and b) those containing relational knowledge, including listings, attribute/value, matrix, enumeration, and form. We then propose a classification algorithm for automatically detecting a subset of the classes in our taxonomy, namely layout tables and attribute/value tables. We report on the performance of our system over a large sample of manually annotated HTML tables on the Web.", "title": "" }, { "docid": "95fa1dac07ce26c1ccd64a9c86c96a22", "text": "Eyelid bags are the result of relaxation of lid structures like the skin, the orbicularis muscle, and mainly the septum, with subsequent protrusion or pseudo herniation of intraorbital fat contents. The logical treatment of baggy upper and lower eyelids should therefore include repositioning the herniated fat into the orbit and strengthening the attenuated septum in the form of a septorhaphy as a hernia repair. The preservation of orbital fat results in a more youthful appearance. The operative technique of the orbital septorhaphy is demonstrated for the upper and lower eyelid. A prospective series of 60 patients (50 upper and 90 lower blepharoplasties) with a maximum follow-up of 17 months were analyzed. Pleasing results were achieved in 56 patients. A partial recurrence was noted in 3 patients and widening of the palpebral fissure in 1 patient. 
Orbital septorhaphy for baggy eyelids is a rational, reliable procedure to correct the herniation of orbital fat in the upper and lower eyelids. Tightening of the orbicularis muscle and skin may be added as usual. The procedure is technically simple and without trauma to the orbital contents. The morbidity is minimal, the rate of complications is low, and the results are pleasing and reliable.", "title": "" }, { "docid": "a0e9e04a3b04c1974951821d44499fa7", "text": "PURPOSE\nTo examine factors related to turnover of new graduate nurses in their first job.\n\n\nDESIGN\nData were obtained from a 3-year panel survey (2006-2008) of the Graduates Occupational Mobility Survey that followed-up college graduates in South Korea. The sample consisted of 351 new graduates whose first job was as a full-time registered nurse in a hospital.\n\n\nMETHODS\nSurvival analysis was conducted to estimate survival curves and related factors, including individual and family, nursing education, hospital, and job dissatisfaction (overall and 10 specific job aspects).\n\n\nFINDINGS\nThe estimated probabilities of staying in their first job for 1, 2, and 3 years were 0.823, 0.666, and 0.537, respectively. Nurses reporting overall job dissatisfaction had significantly lower survival probabilities than those who reported themselves to be either neutral or satisfied. Nurses were more likely to leave if they were married or worked in small (vs. large), nonmetropolitan, and nonunionized hospitals. Dissatisfaction with interpersonal relationships, work content, and physical work environment was associated with a significant increase in the hazards of leaving the first job.\n\n\nCONCLUSIONS\nHospital characteristics as well as job satisfaction were significantly associated with new graduates' turnover.\n\n\nCLINICAL RELEVANCE\nThe high turnover of new graduates could be reduced by improving their job satisfaction, especially with interpersonal relationships, work content, and the physical work environment.", "title": "" }, { "docid": "7e251f86e41d01778a143c231304aa92", "text": "Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.", "title": "" }, { "docid": "b11592d07491ef9e0f67e257bfba6d84", "text": "Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focused on the convolution unit itself. 
The existing convolution unit has a fixed shape and is limited to observing restricted receptive fields. In earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn by itself. In this paper, we provide a detailed analysis of the previously proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we extend an ACU to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution is degraded by increasing the number of groups; however, the proposed unit retains the accuracy even though the number of parameters decreases. Based on this result, we suggest a depthwise ACU, and various experiments have shown that our unit is efficient and can replace the existing convolutions.", "title": "" }, { "docid": "3d5eb503f837adffb4468548b3f76560", "text": "Purpose This study investigates the impact of such contingency factors as top management support, business vision, and external expertise, on the one hand, and ERP system success, on the other. Design/methodology/approach A conceptual model was developed and relevant hypotheses formulated. Surveys were conducted in two Northern European countries and a structural equation modeling technique was used to analyze the data. Originality/value It is argued that ERP systems are different from other IT implementations; as such, there is a need to provide insights as to how the aforementioned factors play out in the context of ERP system success evaluations for adopting organizations. As was predicted, the results showed that the three contingency factors positively influence ERP system success. More importantly, the relative importance of quality external expertise over the other two factors for ERP initiatives was underscored. The implications of the findings for both practitioners and researchers are discussed.", "title": "" }, { "docid": "a2bd543446fb86da6030ce7f46db9f75", "text": "This paper presents a risk assessment algorithm for automatic lane change maneuvers on highways. It is capable of reliably assessing a given highway situation in terms of the possibility of collisions and robustly giving a recommendation for lane changes. The algorithm infers potential collision risks of observed vehicles based on Bayesian networks considering uncertainties of its input data. It utilizes two complementary risk metrics (time-to-collision and minimal safety margin) in temporal and spatial aspects to cover all risky situations that can occur for lane changes. In addition, it provides a robust recommendation for lane changes by filtering out uncertain noise data pertaining to vehicle tracking. The validity of the algorithm is tested and evaluated on public highways in real traffic as well as a closed high-speed test track in simulated traffic through in-vehicle testing based on overtaking and overtaken scenarios in order to demonstrate the feasibility of the risk assessment for automatic lane change maneuvers on highways.", "title": "" }, { "docid": "75591d4da0b01f1890022b320cdab705", "text": "Many lakes in boreal and arctic regions have high concentrations of CDOM (coloured dissolved organic matter). Remote sensing of such lakes is complicated due to very low water leaving signals. There are extreme (black) lakes where the water reflectance values are negligible in almost entire visible part of spectrum (400–700 nm) due to the absorption by CDOM. 
In these lakes, the only water-leaving signal detectable by remote sensing sensors occurs as two peaks—near 710 nm and 810 nm. The first peak has been widely used in remote sensing of eutrophic waters for more than two decades. We show on the example of field radiometry data collected in Estonian and Swedish lakes that the height of the 810 nm peak can also be used in retrieving water constituents from remote sensing data. This is important especially in black lakes where the height of the 710 nm peak is still affected by CDOM. We have shown that the 810 nm peak can also be used in remote sensing of a wide variety of lakes. The 810 nm peak is caused by the combined effect of a slight decrease in absorption by water molecules and backscattering from particulate material in the water. Phytoplankton was the dominant particulate material in most of the studied lakes. Therefore, the height of the 810 nm peak was in good correlation with all proxies of phytoplankton biomass—chlorophyll-a (R2 = 0.77), total suspended matter (R2 = 0.70), and suspended particulate organic matter (R2 = 0.68). There was no correlation between the peak height and the suspended particulate inorganic matter. Satellite sensors with sufficient spatial and radiometric resolution for mapping lake water quality (Landsat 8 OLI and Sentinel-2 MSI) were launched recently. In order to test whether these satellites can capture the 810 nm peak, we simulated the spectral performance of these two satellites from field radiometry data. Actual satellite imagery from a black lake was also used to study whether these sensors can detect the peak despite their band configuration. Sentinel 2 MSI has a nearly perfectly positioned band at 705 nm to characterize the 700–720 nm peak. We found that the MSI 783 nm band can be used to detect the 810 nm peak even though the location of this band is not perfect for capturing the peak.", "title": "" }, { "docid": "64fbd2207a383bc4b04c66e8ee867922", "text": "Ultra compact, short pulse, high voltage, high current pulsers are needed for a variety of non-linear electrical and optical applications. With a fast risetime and short pulse width, these drivers are capable of producing sub-nanosecond electrical and thus optical pulses by gain switching semiconductor laser diodes. Gain-switching of laser diodes requires a sub-nanosecond pulser capable of driving a low output impedance (5 Ω or less). Optical pulses obtained had risetimes as fast as 20 ps. The designed pulsers also could be used for triggering photo-conductive semiconductor switches (PCSS), gating high speed optical imaging systems, and providing electrical and optical sources for fast transient sensor applications. Building on concepts from Lawrence Livermore National Laboratory, the development of pulsers based on solid state avalanche transistors was adapted to drive low impedances. As each successive stage is avalanched in the circuit, the amount of overvoltage increases, increasing the switching speed and improving the turn on time of the output pulse at the final stage. The output of the pulser is coupled into the load using a Blumlein configuration.", "title": "" } ]
scidocsrr
2e0f343d907ea3312234a79373dbad3f
Distributing learning over time: the spacing effect in children's acquisition and generalization of science concepts.
[ { "docid": "fedfacfc850aeec1313043051a66e35b", "text": "BACKGROUND\nKnowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning.\n\n\nAIMS\nThe purpose of the current study was to evaluate the instructional benefits of an iterative lesson sequence compared to a concepts-before-procedures sequence for students learning decimal place-value concepts and arithmetic procedures.\n\n\nSAMPLES\nIn two classroom experiments, sixth-grade students from two schools participated (N=77 and 26).\n\n\nMETHOD\nStudents completed six decimal lessons on an intelligent-tutoring systems. In the iterative condition, lessons cycled between concept and procedure lessons. In the concepts-first condition, all concept lessons were presented before introducing the procedure lessons.\n\n\nRESULTS\nIn both experiments, students in the iterative condition gained more knowledge of arithmetic procedures, including ability to transfer the procedures to problems with novel features. Knowledge of concepts was fairly comparable across conditions. Finally, pre-test knowledge of one type predicted gains in knowledge of the other type across experiments.\n\n\nCONCLUSIONS\nAn iterative sequencing of lessons seems to facilitate learning and transfer, particularly of mathematical procedures. The findings support an iterative perspective for the development of knowledge of concepts and procedures.", "title": "" }, { "docid": "277bdeccc25baa31ba222ff80a341ef2", "text": "Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problemsolving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.", "title": "" } ]
[ { "docid": "d763198d3bfb1d30b153e13245c90c08", "text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.", "title": "" }, { "docid": "007a42bdf781074a2d00d792d32df312", "text": "This paper presents a new approach for road lane classification using an onboard camera. Initially, lane boundaries are detected using a linear-parabolic lane model, and an automatic on-the-fly camera calibration procedure is applied. Then, an adaptive smoothing scheme is applied to reduce noise while keeping close edges separated, and pairs of local maxima-minima of the gradient are used as cues to identify lane markings. Finally, a Bayesian classifier based on mixtures of Gaussians is applied to classify the lane markings present at each frame of a video sequence as dashed, solid, dashed solid, solid dashed, or double solid. Experimental results indicate an overall accuracy of over 96% using a variety of video sequences acquired with different devices and resolutions.", "title": "" }, { "docid": "c7d9353fe149c95ae0b3f1c7fa38def9", "text": "BACKGROUND\nCutaneous melanoma is often characterized by its pigmented appearance; however, up to 8.1% of such lesions contain little or no pigmentation. Amelanotic melanomas, lesions devoid of visible pigment, present a diagnostic quandary because they can masquerade as many other skin pathologies. Recognizing amelanotic melanoma is even more clinically challenging when it is first detected as a metastasis to the secondary tissue.\n\n\nMETHODS\nWe report a rare case of metastasis of an amelanotic melanoma to the parotid gland.\n\n\nRESULTS\nA 75-year-old man presented with an 8-month history of a painless, mobile, hardened mass in the right parotid region. Histopathological analysis of a fine-needle aspiration biopsy of the parotid mass indicated that the mass was melanoma. Careful clinical and radiological examination revealed an 8 mm erythematous papule in the right temporal scalp, initially diagnosed by visual examination as basal cell carcinoma. 
After right superficial parotidectomy, neck dissection, and excision of the temporal scalp lesion, histological examination revealed the scalp lesion to be amelanotic melanoma.\n\n\nCONCLUSION\nAlthough metastatic amelanotic melanoma to the parotid gland is a rare diagnosis, the clinician should be familiar with this presentation to increase the likelihood of making the correct diagnosis and delivering prompt treatment.", "title": "" }, { "docid": "f172ad1f92b81f5d8b19fc4687ce2853", "text": "Research conclusions in the social sciences are increasingly based on meta-analysis, making questions of the accuracy of meta-analysis critical to the integrity of the base of cumulative knowledge. Both fixed effects (FE) and random effects (RE) meta-analysis models have been used widely in published meta-analyses. This article shows that FE models typically manifest a substantial Type I bias in significance tests for mean effect sizes and for moderator variables (interactions), while RE models do not. Likewise, FE models, but not RE models, yield confidence intervals for mean effect sizes that are narrower than their nominal width, thereby overstating the degree of precision in meta-analysis findings. This article demonstrates analytically that these biases in FE procedures are large enough to create serious distortions in conclusions about cumulative knowledge in the research literature. We therefore recommend that RE methods routinely be employed in meta-analysis in preference to FE methods.", "title": "" }, { "docid": "2107e4efdf7de92a850fc0142bf8c8c3", "text": "Throughout the wide range of aerial robot related applications, selecting a particular airframe is often a trade-off. Fixed-wing small-scale unmanned aerial vehicles (UAVs) typically have difficulty surveying at low altitudes while quadrotor UAVs, having more maneuverability, suffer from limited flight time. Recent prior work [1] proposes a solar-powered small-scale aerial vehicle designed to transform between fixed-wing and quad-rotor configurations. Surplus energy collected and stored while in a fixed-wing configuration is utilized while in a quad-rotor configuration. This paper presents an improvement to the robot's design in [1] by pursuing a modular airframe, an optimization of the hybrid propulsion system, and solar power electronics. Two prototypes of the robot have been fabricated for independent testing of the airframe in fixed-wing and quad-rotor states. Validation of the solar power electronics and hybrid propulsion system designs were demonstrated through a combination of simulation and empirical data from prototype hardware.", "title": "" }, { "docid": "8e3ced84f384192cfe742294dcee74bc", "text": "The construction of software cost estimation models remains an active topic of research. The basic premise of cost modelling is that a historical database of software project cost data can be used to develop a quantitative model to predict the cost of future projects. One of the difficulties faced by workers in this area is that many of these historical databases contain substantial amounts of missing data. Thus far, the common practice has been to ignore observations with missing data. In principle, such a practice can lead to gross biases, and may be detrimental to the accuracy of cost estimation models. In this paper we describe an extensive simulation where we evaluate different techniques for dealing with missing data in the context of software cost modelling. 
Three techniques are evaluated: listwise deletion, mean imputation and eight different types of hot-deck imputation. Our results indicate that all the missing data techniques perform well, with small biases and high precision. This suggests that the simplest technique, listwise deletion, is a reasonable choice. However, this will not necessarily provide the best performance. Consistent best performance (minimal bias and highest precision) can be obtained by using hot-deck imputation with Euclidean distance and a z-score standardisation.", "title": "" }, { "docid": "a393f05d29b6d8ff011ee079154e7e58", "text": "This report provides a short survey of the field of virtual reality, highlighting application domains, technological requirements, and currently available solutions. The report is organized as follows: section 1 presents the background and motivation of virtual environment research and identifies typical application domain, section 2 discusses the characteristics a virtual reality system must have in order to exploit the perceptual and spatial skills of users, section 3 surveys current input/output devices for virtual reality, section 4 surveys current software approaches to support the creation of virtual reality systems, and section 5 summarizes the report.", "title": "" }, { "docid": "1eba8eccf88ddb44a88bfa4a937f648f", "text": "We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixelwise class labels with a measure of model uncertainty using Bayesian deep learning. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3% across a number of datasets and architectures such as SegNet, FCN, Dilation Network and DenseNet.", "title": "" }, { "docid": "5f45659c16ca98f991a31d62fd70cdab", "text": "Iris recognition has legendary resistance to false matches, and the tools of information theory can help to explain why. The concept of entropy is fundamental to understanding biometric collision avoidance. This paper analyses the bit sequences of IrisCodes computed both from real iris images and from synthetic white noise iris images, whose pixel values are random and uncorrelated. The capacity of the IrisCode as a channel is found to be 0.566 bits per bit encoded, of which 0.469 bits of entropy per bit is encoded from natural iris images. The difference between these two rates reflects the existence of anatomical correlations within a natural iris, and the remaining gap from one full bit of entropy per bit encoded reflects the correlations in both phase and amplitude introduced by the Gabor wavelets underlying the IrisCode. 
A simple two-state hidden Markov model is shown to emulate exactly the statistics of bit sequences generated both from natural and white noise iris images, including their imposter distributions, and may be useful for generating large synthetic IrisCode databases.", "title": "" }, { "docid": "10b3c67f99ea41185f262ddd8ba50ed4", "text": "OBJECTIVE\nTo test the feasibility of short message service (SMS) usage between the clinic visits and to evaluate its effect on glycemic control in uncontrolled type 2 Diabetes Mellitus (DM) subjects.\n\n\nRESEARCH DESIGN AND METHODS\n34 cases with type 2 Diabetes were followed after fulfilling the inclusion criteria. The interventional group (n=12) had the same conventional approach as the control group but had two mobile numbers (physician and diabetic educator) provided for the SMS support until their next visit in 3 months. Both groups were comparable in age, BMI and pre-study A1c.\n\n\nRESULTS\nBoth groups had a significant reduction in their A1c compared to their baseline visit. However, the interventional group had significantly greater reduction in A1c (p=0.001), 1.16% lower than controls. The service was highly satisfactory to the group.\n\n\nCONCLUSION\nThe results indicate effectiveness in lowering A1c and acceptance by the patients. Further research and large-scale studies are needed.", "title": "" }, { "docid": "84f2072f32d2a29d372eef0f4622ddce", "text": "This paper presents a new methodology for synthesis of broadband equivalent circuits for multi-port high speed interconnect systems from numerically obtained and/or measured frequency-domain and time-domain response data. The equivalent circuit synthesis is based on the rational function fitting of the admittance matrix, which combines the frequency-domain vector fitting process, VECTFIT, with its time-domain analog, TDVF, to yield a robust and versatile fitting algorithm. The generated rational fit is directly converted into a SPICE-compatible circuit after passivity enforcement. The accuracy of the resulting algorithm is demonstrated through its application to the fitting of the admittance matrix of a power/ground plane structure.", "title": "" }, { "docid": "f355ed837561186cff4e7492470d6ae7", "text": "Notions of Bayesian analysis are reviewed, with emphasis on Bayesian modeling and Bayesian calculation. A general hierarchical model for time series analysis is then presented and discussed. Both discrete time and continuous time formulations are discussed. A brief overview of generalizations of the fundamental hierarchical time series model concludes the article. Much of the Bayesian viewpoint can be argued (as by Jeffreys and Jaynes, for examples) as direct application of the theory of probability. In this article the suggested approach for the construction of Bayesian time series models relies on probability theory to provide decompositions of complex joint probability distributions. Specifically, I refer to the familiar factorization of a joint density into an appropriate product of conditionals. Let x and y represent two random variables. I will not differentiate between random variables and their realizations. Also, I will use an increasingly popular generic notation for probability densities: [x] represents the density of x, [x|y] is the conditional density of x given y, and [x, y] denotes the joint density of x and y. 
In this notation we can write \"Bayes's Theorem\" as [y|x] = [x|y][y]/[x]: (1) y", "title": "" }, { "docid": "09ecaf2cb56296c8097525b2c1ffb7dc", "text": "Fruit and vegetables classification and recognition are still challenging in daily production and life. In this paper, we propose an efficient fruit and vegetables classification system using image saliency to draw the object regions and a convolutional neural network (CNN) model to extract image features and implement classification. Image saliency is utilized to select the main saliency regions according to the saliency map. A VGG model is chosen to train for fruit and vegetables classification. Another contribution in this paper is that we establish a fruit and vegetables image database spanning 26 categories, which covers the major types in real life. Experiments are conducted on our own database, and the results show that our classification system achieves an excellent accuracy rate of 95.6%.", "title": "" }, { "docid": "1404323d435b1b7999feda249f817f36", "text": "The Process of Encryption and Decryption is performed by using Symmetric key cryptography and public key cryptography for Secure Communication. In this paper, we study how the process of Encryption and Decryption is performed in the case of Symmetric key and public key cryptography using the AES and DES algorithms and a modified RSA algorithm.", "title": "" }, { "docid": "0c61bfbb7106c5592ecb9677e617f83f", "text": "BACKGROUND\nAcute exacerbations of chronic obstructive pulmonary disease (COPD) are associated with accelerated decline in lung function, diminished quality of life, and higher mortality. Proactively monitoring patients for early signs of an exacerbation and treating them early could prevent these outcomes. The emergence of affordable wearable technology allows for nearly continuous monitoring of heart rate and physical activity as well as recording of audio which can detect features such as coughing. These signals may be able to be used with predictive analytics to detect early exacerbations. Prior to full development, however, it is important to determine the feasibility of using wearable devices such as smartwatches to intensively monitor patients with COPD.\n\n\nOBJECTIVE\nWe conducted a feasibility study to determine if patients with COPD would wear and maintain a smartwatch consistently and whether they would reliably collect and transmit sensor data.\n\n\nMETHODS\nPatients with COPD were recruited from 3 hospitals and were provided with a smartwatch that recorded audio, heart rate, and accelerations. They were asked to wear and charge it daily for 90 days. They were also asked to complete a daily symptom diary. At the end of the study period, participants were asked what would motivate them to regularly use a wearable for monitoring of their COPD.\n\n\nRESULTS\nOf 28 patients enrolled, 16 participants completed the full 90 days. The average age of participants was 68.5 years, and 36% (10/28) were women. Survey, heart rate, and activity data were available for an average of 64.5, 65.1, and 60.2 days respectively. Technical issues caused heart rate and activity data to be unavailable for approximately 13 and 17 days, respectively. 
Feedback provided by participants indicated that they wanted to actively engage with the smartwatch and receive feedback about their activity, heart rate, and how to better manage their COPD.\n\n\nCONCLUSIONS\nSome patients with COPD will wear and maintain smartwatches that passively monitor audio, heart rate, and physical activity, and wearables were able to reliably capture near-continuous patient data. Further work is necessary to increase acceptability and improve the patient experience.", "title": "" }, { "docid": "137eb8a6a90f628353b854995f88a46c", "text": "How should we gather information to make effective decisions? We address Bayesian active learning and experimental design problems, where we sequentially select tests to reduce uncertainty about a set of hypotheses. Instead of minimizing uncertainty per se, we consider a set of overlapping decision regions of these hypotheses. Our goal is to drive uncertainty into a single decision region as quickly as possible. We identify necessary and sufficient conditions for correctly identifying a decision region that contains all hypotheses consistent with observations. We develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove that it is competitive with the intractable optimal policy. Our efficient implementation of the algorithm relies on computing subsets of the complete homogeneous symmetric polynomials. Finally, we demonstrate its effectiveness on two practical applications: approximate comparison-based learning and active localization using a robot manipulator.", "title": "" }, { "docid": "eccbc87e4b5ce2fe28308fd9f2a7baf3", "text": "3", "title": "" }, { "docid": "d6976dd4280c0534049c33ff9efb2058", "text": "Bitcoin, as well as many of its successors, requires the whole transaction record to be reliably acquired by all nodes to prevent double-spending. Recently, many blockchains have been proposed to achieve scale-out throughput by letting nodes only acquire a fraction of the whole transaction set. However, these schemes, e.g., sharding and off-chain techniques, suffer from a degradation in decentralization or the capacity of fault tolerance. In this paper, we show that the complete set of transactions is not a necessity for the prevention of double-spending if the properties of value transfers are fully explored. In other words, we show that a value-transfer ledger like Bitcoin has the potential to scale out by its nature without sacrificing security or decentralization. Firstly, we give a formal definition for the value-transfer ledger and its distinct features from a generic database. Then, we introduce the blockchain structure with a shared main chain for consensus and an individual chain for each node for recording transactions. A locally executable validation scheme is proposed with uncompromising validity and consistency. A beneficial consequence of our design is that nodes will spontaneously try to reduce their transmission cost by only providing the transactions needed to show that their transactions are not double-spent. As a result, the network is sharded as each node only acquires part of the transaction record and a scale-out throughput could be achieved, which we call \"spontaneous sharding\".", "title": "" }, { "docid": "435fdb671cc12959d2d971b847f851a4", "text": "In volume data visualization, the classification step is used to determine voxel visibility and is usually carried out through the interactive editing of a transfer function that defines a mapping between voxel value and color/opacity. 
This approach is limited by the difficulties in working effectively in the transfer function space beyond two dimensions. We present a new approach to the volume classification problem which couples machine learning and a painting metaphor to allow more sophisticated classification in an intuitive manner. The user works in the volume data space by directly painting on sample slices of the volume and the painted voxels are used in an iterative training process. The trained system can then classify the entire volume. Both classification and rendering can be hardware accelerated, providing immediate visual feedback as painting progresses. Such an intelligent system approach enables the user to perform classification in a much higher dimensional space without explicitly specifying the mapping for every dimension used. Furthermore, the trained system for one data set may be reused to classify other data sets with similar characteristics.", "title": "" }, { "docid": "a09cfa27c7e5492c6d09b3dff7171588", "text": "This paper aims to provide a basis for the improvement of software-estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set. A Web-based library of these cost estimation papers is provided to ease the identification of relevant estimation research results. The review results combined with other knowledge provide support for recommendations for future software cost estimation research, including: 1) increase the breadth of the search for relevant studies, 2) search manually for relevant papers within a carefully selected set of journals when completeness is essential, 3) conduct more studies on estimation methods commonly used by the software industry, and 4) increase the awareness of how properties of the data sets impact the results when evaluating estimation methods", "title": "" } ]
scidocsrr
01b07f7424400a18d41568dc61d6f9a6
Requirements Prioritization Challenges in Practice
[ { "docid": "42045752f292585bf20ad960f2b30469", "text": "eveloping software systems that meet stakeholders' needs and expectations is the ultimate goal of any software provider seeking a competitive edge. To achieve this, you must effectively and accurately manage your stakeholders' system requirements: the features, functions , and attributes they need in their software system. 1 Once you agree on these requirements, you can use them as a focal point for the development process and produce a software system that meets the expectations of both customers and users. However, in real-world software development, there are usually more requirements than you can implement given stakeholders' time and resource constraints. Thus, project managers face a dilemma: How do you select a subset of the customers' requirements and still produce a system that meets their needs? Deciding which requirements really matter is a difficult task and one increasingly demanded because of time and budget constraints. The authors developed a cost–value approach for prioritizing requirements and applied it to two commercial projects.", "title": "" } ]
[ { "docid": "cb11841b664bbb83c3ff5ff25f62f71e", "text": "*Iheanacho Stanley Chidi and Nworu Shedrack Department of Fisheries and Aquaculture, Federal University Ndufu Alike Ikwo, Ebonyi State, Nigeria 2 Department of Fisheries and Aquaculture, Ebonyi State University Abakaliki, Ebonyi State, Nigeria Corresponding author: [email protected], +2348063279905 Abstract This experiment was carried out to investigate the effect of Siam Weed (chloromelena odorata) on the heamatology of Clarias gariepinus juvenile. A total of one hundred and fifty (150) juvenile of Clarias gariepinus were randomly assigned to different concentrations of C. odorata leave aqueous extract in a completely randomize design (CRD). The concentrations were 50mg/l, 100mg/l, 150mg/l, 200mg/l. Distilled water (0.00 mg/l) was used as the control. The fish exhibited stressful behavior which was higher as the concentration of Chromolaena odorata leave extract increased. There was a gradual decrease with time until a state of calmness, which was subsequently followed by death. The effect on 96hr exposed period was recorded and blood samples collected at 24hr and 96hr interval. Result on hematological parameters revealed significant difference (P<0.05) among treatments with increase in exposure time for all the blood parameters. C. odorata at increased concentrations affected the behavior and hematology of C. gariepinus.", "title": "" }, { "docid": "4863316487ead3c1b459dc43d9ed17d5", "text": "The advent of social networks poses severe threats on user privacy as adversaries can de-anonymize users' identities by mapping them to correlated cross-domain networks. Without ground-truth mapping, prior literature proposes various cost functions in hope of measuring the quality of mappings. However, there is generally a lacking of rationale behind the cost functions, whose minimizer also remains algorithmically unknown. We jointly tackle above concerns under a more practical social network model parameterized by overlapping communities, which, neglected by prior art, can serve as side information for de-anonymization. Regarding the unavailability of ground-truth mapping to adversaries, by virtue of the Minimum Mean Square Error (MMSE), our first contribution is a well-justified cost function minimizing the expected number of mismatched users over all possible true mappings. While proving the NP-hardness of minimizing MMSE, we validly transform it into the weighted-edge matching problem (WEMP), which, as disclosed theoretically, resolves the tension between optimality and complexity: (i) WEMP asymptotically returns a negligible mapping error in large network size under mild conditions facilitated by higher overlapping strength; (ii) WEMP can be algorithmically characterized via the convex-concave based de-anonymization algorithm (CBDA), finding the optimum of WEMP. Extensive experiments further confirm the effectiveness of CBDA under overlapping communities, in terms of averagely 90% re-identified users in the rare true cross-domain co-author networks when communities overlap densely, and roughly 70% enhanced reidentification ratio compared to non-overlapping cases.", "title": "" }, { "docid": "bc4b1b48794f9db934c705ef3821cdcf", "text": "Expanding access to financial services holds the promise to help reduce poverty and spur economic development. But, as a practical matter, commercial banks have faced challenges expanding access to poor and low-income households in developing economies, and nonprofits have had limited reach. 
We review recent innovations that are improving the quantity and quality of financial access. They are taking possibilities well beyond early models centered on providing “microcredit” for small business investment. We focus on new credit mechanisms and devices that help households manage cash flows, save, and cope with risk. Our eye is on contract designs, product innovations, regulatory policy, and ultimately economic and social impacts. We relate the innovations and empirical evidence to theoretical ideas, drawing links in particular to new work in behavioral economics and to randomized evaluation methods.", "title": "" }, { "docid": "8686ffed021b68574b4c3547d361eac8", "text": "* To whom all correspondence should be addressed. Abstract Face detection is an important prerequisite step for successful face recognition. The performance of previous face detection methods reported in the literature is far from perfect and deteriorates ungracefully where lighting conditions cannot be controlled. We propose a method that outperforms state-of-the-art face detection methods in environments with stable lighting. In addition, our method can potentially perform well in environments with variable lighting conditions. The approach capitalizes upon our near-IR skin detection method reported elsewhere [13][14]. It ascertains the existence of a face within the skin region by finding the eyes and eyebrows. The eyeeyebrow pairs are determined by extracting appropriate features from multiple near-IR bands. Very successful feature extraction is achieved by simple algorithmic means like integral projections and template matching. This is because processing is constrained in the skin region and aided by the near-IR phenomenology. The effectiveness of our method is substantiated by comparative experimental results with the Identix face detector [5].", "title": "" }, { "docid": "12489fe406fa53c6c815ed99a4805f72", "text": "This paper presents the systems submitted by the Abu-MaTran project to the Englishto-Finnish language pair at the WMT 2016 news translation task. We applied morphological segmentation and deep learning in order to address (i) the data scarcity problem caused by the lack of in-domain parallel data in the constrained task and (ii) the complex morphology of Finnish. We submitted a neural machine translation system, a statistical machine translation system reranked with a neural language model and the combination of their outputs tuned on character sequences. The combination and the neural system were ranked first and second respectively according to automatic evaluation metrics and tied for the first place in the human evaluation.", "title": "" }, { "docid": "21d9828d0851b4ded34e13f8552f3e24", "text": "Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. 
The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.", "title": "" }, { "docid": "906aa7f6b047cbf5fac9bd3a88422119", "text": "Many computer vision approaches take for granted positive answers to questions such as “Are semantic categories visually separable?” and “Is visual similarity correlated to semantic similarity?”. In this paper, we study experimentally whether these assumptions hold and show parallels to questions investigated in cognitive science about the human visual system. The insights gained from our analysis enable building a novel distance function between images assessing whether they are from the same basic-level category. This function goes beyond direct visual distance as it also exploits semantic similarity measured through ImageNet. We demonstrate experimentally that it outperforms purely visual distances.", "title": "" }, { "docid": "88c0789e82c86b0e730480f44712012d", "text": "In spite of their having sufficient immunogenicity, tumor vaccines remain largely ineffective. The mechanisms underlying this lack of efficacy are still unclear. Here we report a previously undescribed mechanism by which the tumor endothelium prevents T cell homing and hinders tumor immunotherapy. Transcriptional profiling of microdissected tumor endothelial cells from human ovarian cancers revealed genes associated with the absence or presence of tumor-infiltrating lymphocytes (TILs). Overexpression of the endothelin B receptor (ETBR) was associated with the absence of TILs and short patient survival time. The ETBR inhibitor BQ-788 increased T cell adhesion to human endothelium in vitro, an effect countered by intercellular adhesion molecule-1 (ICAM-1) blockade or treatment with NO donors. In mice, ETBR neutralization by BQ-788 increased T cell homing to tumors; this homing required ICAM-1 and enabled tumor response to otherwise ineffective immunotherapy in vivo without changes in systemic antitumor immune response. These findings highlight a molecular mechanism with the potential to be pharmacologically manipulated to enhance the efficacy of tumor immunotherapy in humans.", "title": "" }, { "docid": "b4a5ebf335cc97db3790c9e2208e319d", "text": "We examine whether conservative white males are more likely than are other adults in the U.S. general public to endorse climate change denial. We draw theoretical and analytical guidance from the identityprotective cognition thesis explaining the white male effect and from recent political psychology scholarship documenting the heightened system-justification tendencies of political conservatives. We utilize public opinion data from ten Gallup surveys from 2001 to 2010, focusing specifically on five indicators of climate change denial. We find that conservative white males are significantly more likely than are other Americans to endorse denialist views on all five items, and that these differences are even greater for those conservative white males who self-report understanding global warming very well. Furthermore, the results of our multivariate logistic regression models reveal that the conservative white male effect remains significant when controlling for the direct effects of political ideology, race, and gender as well as the effects of nine control variables. We thus conclude that the unique views of conservative white males contribute significantly to the high level of climate change denial in the United States. 2011 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "ee65be73600a223c6d4b1a7a2773228c", "text": "In this paper a highly configurable, real-time analysis system to automatically record, analyze and visualize high level aggregated information of user interventions in Twitter is described. The system is designed to provide public entities with a powerful tool to rapidly and easily understand what the citizen behavior trends are, what their opinion about city services, events, etc. is, and also may used as a primary alert system that may improve the efficiency of emergency systems. The citizen is here observed as a proactive city sensor capable of generating huge amounts of very rich, high-level and valuable data through social media platforms, which, after properly processed, summarized and annotated, allows city administrators to better understand citizen necessities. The architecture and component blocks are described and some key details of the design, implementation and scenarios of application are discussed.", "title": "" }, { "docid": "a5e01cfeb798d091dd3f2af1a738885b", "text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.", "title": "" }, { "docid": "4507c71798a856be64381d7098f30bf4", "text": "Adversarial examples are intentionally crafted data with the purpose of deceiving neural networks into misclassification. When we talk about strategies to create such examples, we usually refer to perturbation-based methods that fabricate adversarial examples by applying invisible perturbations onto normal data. The resulting data reserve their visual appearance to human observers, yet can be totally unrecognizable to DNN models, which in turn leads to completely misleading predictions. In this paper, however, we consider crafting adversarial examples from existing data as a limitation to example diversity. 
We propose a non-perturbation-based framework that generates native adversarial examples from class-conditional generative adversarial networks. As such, the generated data will not resemble any existing data and thus expand example diversity, raising the difficulty in adversarial defense. We then extend this framework to pre-trained conditional GANs, in which we turn an existing generator into an \"adversarial-example generator\". We conduct experiments on our approach for MNIST and CIFAR10 datasets and have satisfactory results, showing that this approach can be a potential alternative to previous attack strategies.", "title": "" }, { "docid": "aabef3695f38fdf565700e5e374098fd", "text": "There are two broad categories of risk affecting supply chain design and management: (1) risks arising from the problems of coordinating supply and demand, and (2) risks arising from disruptions to normal activities. This paper is concerned with the second category of risks, which may arise from natural disasters, from strikes and economic disruptions, and from acts of purposeful agents, including terrorists. The paper provides a conceptual framework that reflects the joint activities of risk assessment and risk mitigation that are fundamental to disruption risk management in supply chains. We then consider empirical results from a rich data set covering the period 1995–2000 on accidents in the U.S. Chemical Industry. Based on these results and other literature, we discuss the implications for the design of management systems intended to cope with supply chain disruption risks.", "title": "" }, { "docid": "3f69a0665c1c076ae6bc1219d03c8b66", "text": "A compact, dielectric-loaded, and wideband log-periodic dipole array antenna operating between 200 and 803 MHz for VSWR less than 2 is presented in this letter. The antenna consists of sinusoidal dipoles and parallel stripline printed on a substrate partially loading an air layer, as well as two stepped dielectric materials with $\\varepsilon_{r} = 10$. These two stepped dielectric materials are placed on both sides of the substrate. It is found that a remarkable lower frequency performance is obtained by using these two stepped dielectric materials, and the antenna's impedance matching performance is improved through the air layer partially loaded in the substrate. Good agreement is achieved between simulated and measured results. The antenna is also shown to work normally through the experimental results of VSWR under the 100 W continuous-wave power excitation for 30 min.", "title": "" }, { "docid": "771834bc4bfe8231fe0158ec43948bae", "text": "Semantic image segmentation has recently witnessed considerable progress by training deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aides and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method in solving the linear objective energy function in a continuous domain to delineate visual objects and solve these problems. We present a novel embedded single stream optimization method based on the MR model to avoid approximations without sacrificing expressive power. 
In addition, we propose a novel network, which we refer to as dual multi-scale manifold ranking (DMSMR) network, that combines the dilated, multi-scale strategies with the single stream MR optimization method in the deep learning architecture to further improve the performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aides in an end-to-end manner.", "title": "" }, { "docid": "b249fe89bcfc985fcb4f9128d12c28b3", "text": "Prevalent matrix completion methods capture only the low-rank property which gives merely a constraint that the data points lie on some low-dimensional subspace, but generally ignore the extra structures (beyond low-rank) that specify in more detail how the data points lie on the subspace. Whenever the data points are not uniformly distributed on the low-dimensional subspace, the row-coherence of the target matrix to recover could be considerably high and, accordingly, prevalent methods might fail even if the target matrix is fairly low-rank. To relieve this challenge, we suggest to consider a model termed low-rank factor decomposition (LRFD), which imposes an additional restriction that the data points must be represented as linear, compressive combinations of the bases in a given dictionary. We show that LRFD can effectively mitigate the challenges of high row-coherence, provided that its dictionary is configured properly. Namely, it is mathematically proven that if the dictionary is well-conditioned and low-rank, then LRFD can weaken the dependence on the row-coherence. In particular, if the dictionary itself is low-rank, then the dependence on the row-coherence can be entirely removed. Subsequently, we devise two practical algorithms to obtain proper dictionaries in unsupervised environments: one uses the existing matrix completion methods to construct the dictionary in LRFD, and the other tries to learn a proper dictionary from the data given. Experiments on randomly generated matrices and motion datasets show superior performance of our proposed algorithms.", "title": "" }, { "docid": "7853e5d07f303c4802c0f7c73fe98edf", "text": "BACKGROUND\nCryopreservation of semen should be offered to adults before gonadotoxic treatment. However, the experience with semen collection in adolescents is still limited. The objective of this study was to evaluate potential correlates of successful semen sampling in adolescents.\n\n\nMETHODS\nA total of 86 boys (aged 12.2-17.9 years), referred for cryopreservation of semen prior to gonadotoxic treatment were included. Age, testicular volume, diagnosis and reproductive hormones were evaluated as correlates of successful semen collection.\n\n\nRESULTS\nMedian sperm concentration was 9.6 (range 0-284) million/ml. Of the 86 included boys, 76 (88.4%) had spermatozoa in their ejaculate. Of the 76 patients for whom a semen sample was obtained, 71 (93.4%) had motile spermatozoa eligible for cryopreservation. Of the 86 boys, 74 produced a semen sample by masturbation, whereas semen samples were obtained from 12 patients by penile vibration or electroejaculation. The youngest patient with an ejaculate containing motile spermatozoa was 12.2 years old, and the smallest testicular volumes in boys associated with motile spermatozoa in the ejaculate were 6-7 ml. Testicular volume correlated with sperm concentration (R = 0.283, P = 0.046), and the percentage of motile spermatozoa (R = 0.410, P = 0.003). 
Chronological age, but not reproductive hormones, also correlated with sperm concentration (R = 0.25, P = 0.049).\n\n\nCONCLUSIONS\nSemen was successfully collected and cryopreserved in 71 out of 86 boys and adolescents. Testicular volume, but not age or reproductive hormone levels, was indicative of successful semen collection. Regardless of their age, adolescent boys with testicular volumes of more than 5 ml should be offered semen banking prior to gonadotoxic treatment or other procedures that could potentially damage future fertility.", "title": "" }, { "docid": "c6abeae6e9287f04b472595a47e974ad", "text": "Data curation is the act of discovering a data source(s) of interest, cleaning and transforming the new data, semantically integrating it with other local data sources, and deduplicating the resulting composite. There has been much research on the various components of curation (especially data integration and deduplication). However, there has been little work on collecting all of the curation components into an integrated end-to-end system. In addition, most of the previous work will not scale to the sizes of problems that we are finding in the field. For example, one web aggregator requires the curation of 80,000 URLs and a second biotech company has the problem of curating 8000 spreadsheets. At this scale, data curation cannot be a manual (human) effort, but must entail machine learning approaches with a human assist only when necessary. This paper describes Data Tamer, an end-to-end curation system we have built at M.I.T., Brandeis, and the Qatar Computing Research Institute (QCRI). It expects as input a sequence of data sources to add to a composite being constructed over time. A new source is subjected to machine learning algorithms to perform attribute identification, grouping of attributes into tables, transformation of incoming data and deduplication. When necessary, a human can be asked for guidance. Also, Data Tamer includes a data visualization component so a human can examine a data source at will and specify manual transformations. We have run Data Tamer on three real world enterprise curation problems, and it has been shown to lower curation cost by about 90%, relative to the currently deployed production software.", "title": "" } ]
scidocsrr
f6d8b57317b9b054453e22c65e37e879
5G cellular: key enabling technologies and research challenges
[ { "docid": "f84c399ff746a8721640e115fd20745e", "text": "Self-interference cancellation invalidates a long-held fundamental assumption in wireless network design that radios can only operate in half duplex mode on the same channel. Beyond enabling true in-band full duplex, which effectively doubles spectral efficiency, self-interference cancellation tremendously simplifies spectrum management. Not only does it render entire ecosystems like TD-LTE obsolete, it enables future networks to leverage fragmented spectrum, a pressing global issue that will continue to worsen in 5G networks. Self-interference cancellation offers the potential to complement and sustain the evolution of 5G technologies toward denser heterogeneous networks and can be utilized in wireless communication systems in multiple ways, including increased link capacity, spectrum virtualization, any-division duplexing (ADD), novel relay solutions, and enhanced interference coordination. By virtue of its fundamental nature, self-interference cancellation will have a tremendous impact on 5G networks and beyond.", "title": "" } ]
[ { "docid": "bfd19a8b2c11c9c3083b358f72314fc5", "text": "Changes in temperature, precipitation, and other climatic drivers and sea-level rise will affect populations of existing native and non-native aquatic species and the vulnerability of aquatic environments to new invasions. Monitoring surveys provide the foundation for assessing the combined effects of climate change and invasions by providing baseline biotic and environmental conditions, although the utility of a survey depends on whether the results are quantitative or qualitative, and other design considerations. The results from a variety of monitoring programs in the United States are available in integrated biological information systems, although many include only non-native species, not native species. Besides including natives, we suggest these systems could be improved through the development of standardized methods that capture habitat and physiological requirements and link regional and national biological databases into distributed Web portals that allow drawing information from multiple sources. Combining the outputs from these biological information systems with environmental data would allow the development of ecological-niche models that predict the potential distribution or abundance of native and non-native species on the basis of current environmental conditions. Environmental projections from climate models can be used in these niche models to project changes in species distributions or abundances under altered climatic conditions and to identify potential high-risk invaders. There are, however, a number of challenges, such as uncertainties associated with projections from climate and niche models and difficulty in integrating data with different temporal and spatial granularity. Even with these uncertainties, integration of biological and environmental information systems, niche models, and climate projections would improve management of aquatic ecosystems under the dual threats of biotic invasions and climate change.", "title": "" }, { "docid": "f20c08bd1194f8589d6e56e66951a7f8", "text": "The computational complexity grows exponentially for multi-level thresholding (MT) with the increase of the number of thresholds. Taking Kapur’s entropy as the optimized objective function, the paper puts forward the modified quick artificial bee colony algorithm (MQABC), which employs a new distance strategy for neighborhood searches. The experimental results show that MQABC can search out the optimal thresholds efficiently, precisely, and speedily, and the thresholds are very close to the results examined by exhaustive searches. In comparison to the EMO (Electro-Magnetism optimization), which is based on Kapur’s entropy, the classical ABC algorithm, and MDGWO (modified discrete grey wolf optimizer) respectively, the experimental results demonstrate that MQABC has exciting advantages over the latter three in terms of the running time in image thesholding, while maintaining the efficient segmentation quality.", "title": "" }, { "docid": "80114263a722c25125803c7c8ecebb91", "text": "features suggest that this patient is an atypical presentation of chemotherapy-induced acral erythema, sparing the classic palmar location. The suggestion for an overlapping spectrum of chemotherapyinduced toxic injury of the skin helps resolve the clinicopathological challenge of this case. 
Toxic erythema of chemotherapy describes a particular category of toxin-associated diseases, some of which are specific, eg, chemotherapy-associated neutrophilic hidradenitis, and others, such as the eruption presented, defy further classification. Although dermatologists will likely preserve some of their preferred appellations, the field of dermatology will benefit from including toxic erythema of chemotherapy within the conceptual framework of chemotherapy-associated dermatoses.", "title": "" }, { "docid": "5168f7f952d937460d250c44b43f43c0", "text": "This letter presents the design of a coplanar waveguide (CPW) circularly polarized antenna for the central frequency 900 MHz; it comes in handy for radio frequency identification (RFID) short-range reading applications within the band of 902-928 MHz, where the axial ratio of the proposed antenna model is less than 3 dB. The proposed design has an axial-ratio bandwidth of 36 MHz (4%) and impedance bandwidth of 256 MHz (28.5%).", "title": "" }, { "docid": "b0e316e2efe4b408985216a33492897b", "text": "Human activity detection within smart homes is one of the bases of unobtrusive wellness monitoring of a rapidly aging population in developed countries. Most works in this area use the concept of \"activity\" as the building block with which to construct applications such as healthcare monitoring or ambient assisted living. The process of identifying a specific activity encompasses the selection of the appropriate set of sensors, the correct preprocessing of their provided raw data and the learning/reasoning using this information. If the selection of the sensors and the data processing methods are wrongly performed, the whole activity detection process may fail, leading to the consequent failure of the whole application. Related to this, the main contributions of this review are the following: first, we propose a classification of the main activities considered in smart home scenarios which are targeted to older people's independent living, as well as their characterization and formalized context representation; second, we perform a classification of sensors and data processing methods that are suitable for the detection of the aforementioned activities. Our aim is to help researchers and developers in these lower-level technical aspects that are nevertheless fundamental for the success of the complete application.", "title": "" }, { "docid": "2b30506690acbae9240ef867e961bc6c", "text": "Background Breast milk can turn pink with Serratia marcescens colonization; this bacterium has been associated with several diseases and even death. It is seen most commonly in intensive care settings. Discoloration of the breast milk can lead to premature termination of nursing. We describe two cases of pink-colored breast milk in which S. marcescens was isolated from both the expressed breast milk. Antimicrobial treatment was administered to the mothers. Return to breastfeeding was successful in both cases. Conclusions Pink breast milk is caused by S. marcescens colonization. In such cases, early recognition and treatment before the development of infection is recommended to return to breastfeeding.", "title": "" }, { "docid": "b169a813dcaa659555f082911bcc843f", "text": "Pharmacogenomics studies the impact of genetic variation of patients on drug responses and searches for correlations between gene expression or Single Nucleotide Polymorphisms (SNPs) of patient's genome and the toxicity or efficacy of a drug. 
SNPs data, produced by microarray platforms, need to be preprocessed and analyzed in order to find correlation between the presence/absence of SNPs and the toxicity or efficacy of a drug. Due to the large number of samples and the high resolution of instruments, the data to be analyzed can be huge, requiring high performance computing. The paper presents the design and experimentation of Cloud4SNP, a novel Cloud-based bioinformatics tool for the parallel preprocessing and statistical analysis of pharmacogenomics SNP microarray data. Experimental evaluation shows good speed-up and scalability. Moreover, availability on the Cloud platform makes it possible to meet, in an elastic way, the requirements of small as well as very large pharmacogenomics studies.", "title": "" }, { "docid": "fd0cfef7be75a9aa98229c25ffaea864", "text": "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.", "title": "" }, { "docid": "945b8c26961fb3a2329b6356b853b358", "text": "This paper presents a synteny visualization and analysis tool developed in connection with IMAS - the Interactive Multigenomic Analysis System. This visual analysis tool enables biologists to analyze the relationships among genomes of closely related organisms in terms of the locations of genes and clusters of genes. A biologist starts IMAS with the DNA sequence, uses BLAST to find similar genes in related sequences, and uses these similarity linkages to create an enhanced node-link diagram of syntenic sequences. We refer to this as Spring Synteny visualization, which is aimed at helping a biologist discover similar gene ordering relationships across species. The paper describes the techniques that are used to support synteny visualization, in terms of computation, visual design, and interaction design.", "title": "" }, { "docid": "0ce05b9c26df484fc59366762d31465a", "text": "This paper presents an algorithm that extracts the tempo of a musical excerpt. The proposed system assumes a constant tempo and deals directly with the audio signal. A sliding window is applied to the signal and two feature classes are extracted. The first class is the log-energy of each band of a mel-scale triangular filterbank, a common feature vector used in various MIR applications. For the second class, a novel feature for the tempo induction task is presented; the strengths of the twelve western musical tones at all octaves are calculated for each audio frame, in a similar fashion to Pitch Class Profile. The time-evolving feature vectors are convolved with a bank of resonators, each resonator corresponding to a target tempo. 
Then the results of each feature class are combined to give the final output. The algorithm was evaluated on the popular ISMIR 2004 Tempo Induction Evaluation Exchange Dataset. Results demonstrate that the superposition of the different types of features enhance the performance of the algorithm, which is in the current state-of-the-art algorithms of the tempo induction task.", "title": "" }, { "docid": "712be4d6aabf8e76b050c30e6241ad0f", "text": "The United States, like many nations, continues to experience rapid growth in its racial minority population and is projected to attain so-called majority-minority status by 2050. Along with these demographic changes, staggering racial disparities persist in health, wealth, and overall well-being. In this article, we review the social psychological literature on race and race relations, beginning with the seemingly simple question: What is race? Drawing on research from different fields, we forward a model of race as dynamic, malleable, and socially constructed, shifting across time, place, perceiver, and target. We then use classic theoretical perspectives on intergroup relations to frame and then consider new questions regarding contemporary racial dynamics. We next consider research on racial diversity, focusing on its effects during interpersonal encounters and for groups. We close by highlighting emerging topics that should top the research agenda for the social psychology of race and race relations in the twenty-first century.", "title": "" }, { "docid": "8a538c63adfd618d8967f736d8c59761", "text": "Skyline queries ask for a set of interesting points from a potentially large set of data points. If we are traveling, for instance, a restaurant might be interesting if there is no other restaurant which is nearer, cheaper, and has better food. Skyline queries retrieve all such interesting restaurants so that the user can choose the most promising one. In this paper, we present a new online algorithm that computes the Skyline. Unlike most existing algorithms that compute the Skyline in a batch, this algorithm returns the first results immediately, produces more and more results continuously, and allows the user to give preferences during the running time of the algorithm so that the user can control what kind of results are produced next (e.g., rather cheap or rather near restaurants).", "title": "" }, { "docid": "0ce5f897c55f40451878e37a4da1c91c", "text": "The analysis of drainage morphometry is usually a prerequisite to the assessment of hydrological characteristics of surface water basin. In this study, the western region of the Arabian Peninsula was selected for detailed morphometric analysis. In this region, there are a large number of drainage systems that are originated from the mountain chains of the Arabian Shield to the east and outlet into the Red Sea. As a typical type of these drainage systems, the morphometry of Wadi Aurnah was analyzed. The study performed manual and computerized delineation and drainage sampling, which enables applying detailed morphological measures. Topographic maps in combination with remotely sensed data, (i.e. different types of satellite images) were utilized to delineate the existing drainage system, thus to identify precisely water divides. This was achieved using Geographic Information System (GIS) to provide computerized data that can be manipulated for different calculations and hydrological measures. 
The morphometric analysis obtained in this study tackled: 1) stream behavior, 2) morphometric setting of streams within the drainage system, and 3) interrelation between connected streams. The study introduces an empirical approach to morphometric analysis that can be utilized in different hydrological assessments (e.g., surface water harvesting, flood mitigation, etc.). In addition, the applied analysis using remote sensing and GIS can be followed in the remaining drainage systems of the Western Arabian Peninsula.", "title": "" }, { "docid": "0251f38f48c470e2e04fb14fc7ba34b2", "text": "The fast development of Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand for smart devices which are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the pressure of time to market of device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and promote the development of low-cost IoT security methods, in this paper, we use both commercial and industrial IoT devices as examples from which the security of hardware, software, and networks are analyzed and backdoors are identified. A detailed security analysis procedure will be elaborated on a home automation system and a smart meter, proving that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods will also be discussed to help IoT manufacturers secure their products.", "title": "" }, { "docid": "91e4994a20bb3b48ef3d70c3affa5c0c", "text": "In this paper, we address the challenging task of simultaneous recognition of overlapping sound events from single channel audio. Conventional frame-based methods aren’t well suited to the problem, as each time frame contains a mixture of information from multiple sources. Missing feature masks are able to improve the recognition in such cases, but are limited by the accuracy of the mask, which is a non-trivial problem. In this paper, we propose an approach based on Local Spectrogram Features (LSFs) which represent local spectral information that is extracted from the two-dimensional region surrounding “keypoints” detected in the spectrogram. The keypoints are designed to locate the sparse, discriminative peaks in the spectrogram, such that we can model sound events through a set of representative LSF clusters and their occurrences in the spectrogram. To recognise overlapping sound events, we use a Generalised Hough Transform (GHT) voting system, which sums the information over many independent keypoints to produce onset hypotheses that can detect any arbitrary combination of sound events in the spectrogram. Each hypothesis is then scored against the class distribution models to recognise the existence of the sound in the spectrogram. Experiments on a set of five overlapping sound events, in the presence of non-stationary background noise, demonstrate the potential of our approach.", "title": "" }, { "docid": "1dc0d5c7dbc0ae85a424b17e463bd7a4", "text": "Plasma protein binding (PPB) strongly affects drug distribution and pharmacokinetic behavior with consequences in overall pharmacological action. 
Extended plasma protein binding may be associated with drug safety issues and several adverse effects, such as low clearance, low brain penetration, drug-drug interactions, and loss of efficacy, while influencing the fate of enantiomers and diastereoisomers by stereoselective binding within the body. Therefore, in holistic drug design approaches, where ADME(T) properties are considered in parallel with target affinity, considerable efforts are focused on early estimation of PPB, mainly in regard to human serum albumin (HSA), which is the most abundant and most important plasma protein. The second critical serum protein, α1-acid glycoprotein (AGP), although often overlooked, also plays an important and complicated role in clinical therapy, and thus it too has been studied thoroughly in recent years. In the present review, after an overview of the principles of HSA and AGP binding as well as the structure topology of the proteins, the current trends and perspectives in the field of PPB predictions are presented and discussed considering both HSA and AGP binding. However, since systematic studies on the latter protein have started only in recent years, the review focuses mainly on HSA. One part of the review highlights the challenge of developing rapid techniques for HSA and AGP binding simulation and their performance in the assessment of PPB. The second part focuses on in silico approaches to predict HSA and AGP binding, analyzing and evaluating structure-based and ligand-based methods, as well as combinations of both methods that aim to exploit the different information and overcome the limitations of each individual approach. Ligand-based methods use the Quantitative Structure-Activity Relationships (QSAR) methodology to establish quantitative models for the prediction of binding constants from molecular descriptors, while they provide only indirect information on the binding mechanism. Efforts for the establishment of global models, automated workflows and web-based platforms for PPB predictions are presented and discussed. Structure-based methods relying on the crystal structures of drug-protein complexes provide detailed information on the underlying mechanism but are usually restricted to specific compounds. They are useful for identifying the specific binding site, and they may be important in investigating drug-drug interactions related to PPB. Moreover, chemometrics or structure-based modeling supported by experimental data may be a promising integrated alternative strategy for ADME(T) property optimization. In the case of PPB, molecular modeling combined with bioanalytical techniques is frequently used for the investigation of AGP binding.", "title": "" }, { "docid": "604b46c973be0a277faa96a407dc845f", "text": "A nonlinear dynamic model for a quadrotor unmanned aerial vehicle is presented with a new vision of state parameter control based on Euler angles and an open-loop position state observer. This method emphasizes the control of roll, pitch, and yaw angles rather than the translational motions of the UAV. For this reason, the system has been separated into two cascaded parts: the first relates to the rotational motion, whose control law is applied in closed-loop form, and the other reflects the translational motion. A dynamic feedback controller is developed to transform the closed-loop part of the system into a linear, controllable, and decoupled subsystem. Wind parameter estimation for the quadrotor is used to avoid additional sensors. 
Hence an estimator of resulting aerodynamic moments via Lyapunov function is developed. Performance and robustness of the proposed controller are tested in simulation.", "title": "" }, { "docid": "221970fad528f2538930556dde7a0062", "text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (B-CNN), which has shown dramatic performance gains on certain fine-grained recognition problems [15]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [12]. It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computerized face detection system, it does not have the bias inherent in such a database. We demonstrate the performance of the B-CNN model beginning from an AlexNet-style network pre-trained on ImageNet. We then show results for fine-tuning using a moderate-sized and public external database, FaceScrub [17]. We also present results with additional fine-tuning on the limited training data provided by the protocol. In each case, the fine-tuned bilinear model shows substantial improvements over the standard CNN. Finally, we demonstrate how a standard CNN pre-trained on a large face database, the recently released VGG-Face model [20], can be converted into a B-CNN without any additional feature training. This B-CNN improves upon the CNN performance on the IJB-A benchmark, achieving 89.5% rank-1 recall.", "title": "" }, { "docid": "c3ae2b20405aa932bb5ada3874cdd29c", "text": "In this letter, a novel compact quadrature hybrid using low-pass and high-pass lumped elements is proposed. This proposed topology enables significant circuit size reduction in comparison with former approaches applying microstrip branch line or Lange couplers. In addition, it provides wider bandwidth in terms of operational frequency, and provides more convenience to the monolithic microwave integrated circuit layout since it does not have any bulky via holes as compared to those with lumped elements that have been published. In addition, the simulation and measurement of the fabricated hybrid implemented using PHEMT processes are evidently good. With the operational bandwidth ranging from 25 to 30 GHz, the measured results of the return loss are better than 17.6 dB, and the insertion losses of coupled and direct ports are approximately 3.4plusmn0.7 dB, while the relative phase difference is approximately 92.3plusmn1.4deg. The core dimension of the circuit is 0.4 mm times 0.15 mm.", "title": "" }, { "docid": "be1b9731df45408571e75d1add5dfe9c", "text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. 
In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.", "title": "" } ]
scidocsrr
cf64cb7e4f868d151334e45a7aae2382
Investigating the Relationship between Organizational Innovation Capability and Firm Performance with Irish SMEs
[ { "docid": "4a89f20c4b892203be71e3534b32449c", "text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.", "title": "" }, { "docid": "b269bb721ca2a75fd6291295493b7af8", "text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.", "title": "" } ]
[ { "docid": "23244a7be797c01dcdca24491a574ab4", "text": "Health care professionals are now faced with a growing number of patients from different ethnic groups, and from different socioeconomical backgrounds. In the field of neuropsychology there is an increasing need of reliable and culturally fair assessment measures. Spanish is the official language in more than 20 countries and the second most spoken language in the world. The purpose of this research was to develop and standardize the neuropsychological battery NEUROPSI ATTENTION AND MEMORY, designed to assess orientation, attention and concentration, executive functions, working memory and immediate and delayed verbal and visual memory. The developmental sequences of attention and memory as well as the educational effects were analyzed in a sample of 521 monolingual Spanish Speaking subjects, aged 6 to 85 years. Educational level ranged from 0 to 22 years of education. The consideration of the developmental sequence, and the effects of education, can improve the sensitivity and specificity of neuropsychological measures.", "title": "" }, { "docid": "2476c8b7f6fe148ab20c29e7f59f5b23", "text": "A high temperature, wire-bondless power electronics module with a double-sided cooling capability is proposed and successfully fabricated. In this module, a low-temperature co-fired ceramic (LTCC) substrate was used as the dielectric and chip carrier. Conducting vias were created on the LTCC carrier to realize the interconnection. The absent of a base plate reduced the overall thermal resistance and also improved the fatigue life by eliminating a large-area solder layer. Nano silver paste was used to attach power devices to the DBC substrate as well as to pattern the gate connection. Finite element simulations were used to compare the thermal performance to several reported double-sided power modules. Electrical measurements of a SiC MOSFET and SiC diode switching position demonstrated the functionality of the module.", "title": "" }, { "docid": "8bed049baa03a11867b0205e16402d0e", "text": "The paper investigates potential bias in awards of player disciplinary sanctions, in the form of cautions (yellow cards) and dismissals (red cards) by referees in the English Premier League and the German Bundesliga. Previous studies of behaviour of soccer referees have not adequately incorporated within-game information.Descriptive statistics from our samples clearly show that home teams receive fewer yellow and red cards than away teams. These differences may be wrongly interpreted as evidence of bias where the modeller has failed to include withingame events such as goals scored and recent cards issued.What appears as referee favouritism may actually be excessive and illegal aggressive behaviour by players in teams that are behind in score. We deal with these issues by using a minute-by-minute bivariate probit analysis of yellow and red cards issued in games over six seasons in the two leagues. The significance of a variable to denote the difference in score at the time of sanction suggests that foul play that is induced by a losing position is an important influence on the award of yellow and red cards. Controlling for various pre-game and within-game variables, we find evidence that is indicative of home team favouritism induced by crowd pressure: in Germany home teams with running tracks in their stadia attract more yellow and red cards than teams playing in stadia with less distance between the crowd and the pitch. 
Separating the competing teams in matches by favourite and underdog status, as perceived by the betting market, yields further evidence, this time for both leagues, that the source of home teams receiving fewer cards is not just that they are disproportionately often the favoured team and disproportionately ahead in score.Thus there is evidence that is consistent with pure referee bias in relative treatments of home and away teams.", "title": "" }, { "docid": "41fa9841bcda62c2df3893dde53f874e", "text": "In clustering analysis, data attributes may have different contributions to the detection of various clusters. To solve this problem, the subspace clustering technique has been developed, which aims at grouping the data objects into clusters based on the subsets of attributes rather than the entire data space. However, the most existing subspace clustering methods are only applicable to either numerical or categorical data, but not both. This paper, therefore, studies the soft subspace clustering of data with both of the numerical and categorical attributes (also simply called mixed data for short). Specifically, an attribute-weighted clustering model based on the definition of object-cluster similarity is presented. Accordingly, a unified weighting scheme for the numerical and categorical attributes is proposed, which quantifies the attribute-to-cluster contribution by taking into account both of intercluster difference and intracluster similarity. Moreover, a rival penalized competitive learning mechanism is further introduced into the proposed soft subspace clustering algorithm so that the subspace cluster structure as well as the most appropriate number of clusters can be learned simultaneously in a single learning paradigm. In addition, an initialization-oriented method is also presented, which can effectively improve the stability and accuracy of $k$ -means-type clustering methods on numerical, categorical, and mixed data. The experimental results on different benchmark data sets show the efficacy of the proposed approach.", "title": "" }, { "docid": "b3450073ad3d6f2271d6a56fccdc110a", "text": "OBJECTIVE\nMindfulness-based therapies (MBTs) have been shown to be efficacious in treating internally focused psychological disorders (e.g., depression); however, it is still unclear whether MBTs provide improved functioning and symptom relief for individuals with externalizing disorders, including ADHD. To clarify the literature on the effectiveness of MBTs in treating ADHD and to guide future research, an effect-size analysis was conducted.\n\n\nMETHOD\nA systematic review of studies published in PsycINFO, PubMed, and Google Scholar was completed from the earliest available date until December 2014.\n\n\nRESULTS\nA total of 10 studies were included in the analysis of inattention and the overall effect size was d = -.66. A total of nine studies were included in the analysis of hyperactivity/impulsivity and the overall effect was calculated at d = -.53.\n\n\nCONCLUSION\nResults of this study highlight the possible benefits of MBTs in reducing symptoms of ADHD.", "title": "" }, { "docid": "71a65ff432ae4b53085ca5c923c29a95", "text": "Data provenance is essential for debugging query results, auditing data in cloud environments, and explaining outputs of Big Data analytics. A well-established technique is to represent provenance as annotations on data and to instrument queries to propagate these annotations to produce results annotated with provenance. 
However, even sophisticated optimizers are often incapable of producing efficient execution plans for instrumented queries, because of their inherent complexity and unusual structure. Thus, while instrumentation enables provenance support for databases without requiring any modification to the DBMS, the performance of this approach is far from optimal. In this work, we develop provenancespecific optimizations to address this problem. Specifically, we introduce algebraic equivalences targeted at instrumented queries and discuss alternative, equivalent ways of instrumenting a query for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization (CBO) framework that governs the application of these optimizations and implement this framework in our GProM provenance system. Our CBO is agnostic to the plan space shape, uses a DBMS for cost estimation, and enables retrofitting of optimization choices into existing code by adding a few LOC. Our experiments confirm that these optimizations are highly effective, often improving performance by several orders of magnitude for diverse provenance tasks.", "title": "" }, { "docid": "60b6c29a06e523701b3237ba4f854d75", "text": "Biosorption has been defined as the property of certain biomolecules (or types of biomass) to bind and concentrate selected ions or other molecules from aqueous solutions. As opposed to a much more complex phenomenon of bioaccumulation based on active metabolic transport, biosorption by dead biomass (or by some molecules and/or their active groups) is passive and based mainly on the \"affinity\" between the (bio-)sorbent and sorbate. A personal overview of the field and its origins is given here, focusing on R&D reasoning and know-how that is not normally published in the scientific literature. While biosorption of heavy metals has become a popular environmentally driven research topic, it represents only one particular type of a concentration-removal aspect of the sorption process. The methodology of studying biosorption is based on an interdisciplinary approach to it, whereby the phenomenon can be studied, examined and analyzed from different angles and perspectives-by chemists, (micro-)biologists as well as (process) engineers. A pragmatic science approach directs us towards the ultimate application of the phenomenon when reasonably well understood. Considering the variety of parameters affecting the biosorption performance, we have to avoid the endless empirical and, indeed, alchemistic approach to elucidating and optimizing the phenomenon-and this is where the power of computers becomes most useful. This is all still in the domain of science-or \"directed curiosity\". When the knowledge of biosorption is adequate, it is time to use it-applications of certain types of biosorption are on the horizon, inviting the \"new technology\" enterprise ventures and presenting new and quite different challenges.", "title": "" }, { "docid": "1b2515c8d20593d7b4446d695e28389f", "text": "Based on microwave C-sections, rat-race coupler is designed to have a dual-band characteristic and a miniaturized area. The C-section together with two transmission line sections attached to both of its ends is synthesized to realize a phase change of 90° at the first frequency, and 270° at the second passband. The equivalence is established by the transmission line theory, and transcendental equations are derived to determine its structure parameters. 
Two circuits are realized in this presentation; one is designed at 2.45/5.2 GHz and the other at 2.45/5.8 GHz. The latter circuit occupies only 31% of the area of a conventional hybrid ring at the first band. It is believed that this circuit has the best size reduction for microstrip dual-band rat-race couplers in open literature. The measured results show good agreement with simulation responses.", "title": "" }, { "docid": "1c7131fcb031497b2c1487f9b25d8d4e", "text": "Biases in information processing undoubtedly play an important role in the maintenance of emotion and emotional disorders. In an attentional cueing paradigm, threat words and angry faces had no advantage over positive or neutral words (or faces) in attracting attention to their own location, even for people who were highly state-anxious. In contrast, the presence of threatening cues (words and faces) had a strong impact on the disengagement of attention. When a threat cue was presented and a target subsequently presented in another location, high state-anxious individuals took longer to detect the target relative to when either a positive or a neutral cue was presented. It is concluded that threat-related stimuli affect attentional dwell time and the disengage component of attention, leaving the question of whether threat stimuli affect the shift component of attention open to debate.", "title": "" }, { "docid": "ee397703a8d5a751c7fd7c76f92ebd73", "text": "Autografting of dopamine-producing adrenal medullary tissue to the striatal region of the brain is now being attempted in patients with Parkinson's disease. Since the success of this neurosurgical approach to dopamine-replacement therapy may depend on the selection of the most appropriate subregion of the striatum for implantation, we examined the pattern and degree of dopamine loss in striatum obtained at autopsy from eight patients with idiopathic Parkinson's disease. We found that in the putamen there was a nearly complete depletion of dopamine in all subdivisions, with the greatest reduction in the caudal portions (less than 1 percent of the dopamine remaining). In the caudate nucleus, the only subdivision with severe dopamine reduction was the most dorsal rostral part (4 percent of the dopamine remaining); the other subdivisions still had substantial levels of dopamine (up to approximately 40 percent of control levels). We propose that the motor deficits that are a constant and characteristic feature of idiopathic Parkinson's disease are for the most part a consequence of dopamine loss in the putamen, and that the dopamine-related caudate deficits (in \"higher\" cognitive functions) are, if present, less marked or restricted to discrete functions only. We conclude that the putamen--particularly its caudal portions--may be the most appropriate site for intrastriatal application of dopamine-producing autografts in patients with idiopathic Parkinson's disease.", "title": "" }, { "docid": "e584e7e0c96bc78bc2b2166d1af272a6", "text": "In this paper we investigate the problem of inducing a distribution over three-dimensional structures given two-dimensional views of multiple objects taken from unknown viewpoints. Our approach called \"projective generative adversarial networks\" (PrGANs) trains a deep generative model of 3D shapes whose projections match the distributions of the input 2D views. The addition of a projection module allows us to infer the underlying 3D shape distribution without using any 3D, viewpoint information, or annotation during the learning phase. 
We show that our approach produces 3D shapes of comparable quality to GANs trained on 3D data for a number of shape categories including chairs, airplanes, and cars. Experiments also show that the disentangled representation of 2D shapes into geometry and viewpoint leads to a good generative model of 2D shapes. The key advantage is that our model allows us to predict 3D, viewpoint, and generate novel views from an input image in a completely unsupervised manner.", "title": "" }, { "docid": "01457bfad1b14fdf3702a0ff798faf9e", "text": "Jubjitt P, Tingsabhat J, Chaiwatcharaporn C. New PositionSpecific Movement Ability Test (PoSMAT) Protocol Suite and Norms for Talent Identification, Selection, and Personalized Training for Soccer Players. JEPonline 2017;20(1):59-82. The purpose of this study was to develop a soccer position-specific movement ability test (PoSMAT) Protocol Suite and establish their norms. Subjects consisted of six different position soccer players per team from six Thai Premier League 2013 teams. The first step was to identify position-specific high speed running/sprint speed with corresponding distances covered by TRAK PERFORMANCE software. The second step was to develop the PoSMAT Protocol Suite by incorporating position-specific movement patterns and speed-distance analyses from the first step into three test protocols for ATTK Attacker, CMCD Central Midfielder and Central Defender, and WMFB Wide Midfielder and Full Back with respect to the soccer players’ abilities in speed, agility, and cardiovascular endurance. The findings indicate that the PoSMAT Protocol Suite was statistically valid, objective, reliable, and discriminating. Also, PoSMAT norms from 360 Thai elite soccer players were established. Thus, the PoSMAT Protocol Suite and norms can be used for position-specific talent identification, selection for proper playing position placement, and individualized training to enhance the players’ soccer career.", "title": "" }, { "docid": "49cda71b86a3a6b374616a9013816b38", "text": "Discriminative localization is essential for fine-grained image classification task, which devotes to recognizing hundreds of subcategories in the same basic-level category. Reflecting on discriminative regions of objects, key differences among different subcategories are subtle and local. Existing methods generally adopt a two-stage learning framework: The first stage is to localize the discriminative regions of objects, and the second is to encode the discriminative features for training classifiers. However, these methods generally have two limitations: (1) Separation of the two-stage learning is time-consuming. (2) Dependence on object and parts annotations for discriminative localization learning leads to heavily labor-consuming labeling. It is highly challenging to address these two important limitations simultaneously. Existing methods only focus on one of them. Therefore, this paper proposes the discriminative localization approach via saliency-guided Faster R-CNN to address the above two limitations at the same time, and our main novelties and advantages are: (1) End-to-end network based on Faster R-CNN is designed to simultaneously localize discriminative regions and encode discriminative features, which accelerates classification speed. (2) Saliency-guided localization learning is proposed to localize the discriminative region automatically, avoiding labor-consuming labeling. 
Both are jointly employed to simultaneously accelerate classification speed and eliminate dependence on object and parts annotations. Compared with the state-of-the-art methods on the widely-used CUB-200-2011 dataset, our approach achieves both the best classification accuracy and efficiency.", "title": "" }, { "docid": "77502699d31b0bb13f6070756054fc2d", "text": "This thesis evaluates the integrated information theory (IIT) by looking at how it may answer some central problems of consciousness that the author thinks any theory of consciousness should be able to explain. The problems concerned are the mind-body problem, the hard problem, the explanatory gap, the binding problem, and the problem of objectively detecting consciousness. The IIT is a computational theory of consciousness thought to explain the rise of consciousness. First, the mongrel term consciousness is defined to give a clear idea of what is meant by consciousness in this thesis; this is followed by a presentation of the IIT, its origin, main ideas, and some implications of the theory. Thereafter the problems of consciousness are presented, and the explanation the IIT gives for each is investigated. In the discussion, some issues regarding the theory that were not previously discussed in the thesis are raised. The author finds the IIT to hold explanations for each of the problems discussed. Whether the explanations are satisfying is questionable. Keywords: integrated information theory, phenomenal consciousness, subjective experience, mind-body, the hard problem, binding, testing
 AN EVALUATION OF THE IIT !4 Table of Content Introduction 5 Defining Consciousness 6 Introduction to the Integrated Information Theory 8 Historical Background 8 The Approach 9 The Core of the IIT 9 Axioms 11 Postulates 13 The Conscious Mechanism of the IIT 15 Some Key Terms of the IIT 17 The Central Identity of the IIT 19 Some Implications of the IIT 20 Introduction to the Problems of Consciousness 25 The Mind-Body Problem 25 The Hard Problem 27 The Explanatory Gap 28 The Problem With the Problems Above 28 The Binding Problem 30 The Problem of Objectively Detecting Consciousness 31 Evaluation of the IIT Against the Problems of Consciousness 37 The Mind-Body Problem vs. the IIT 38 The Hard Problem vs. the IIT 40 The Explanatory Gap vs. the IIT 42 The Binding Problem vs. the IIT 43 The Problem of Objectively Detecting Consciousness 45 Discussion 50 Conclusion 53 References 54 AN EVALUATION OF THE IIT !5 Introduction Intuitively we like to believe that things which act and behave similarly to ourselves are conscious, things that interact with us on our terms, mimic our facial and bodily expressions, and those that we feel empathy for. But what about things that are superficially different from us, such as other animals and insects, bacteria, groups of people, humanoid robots, the Internet, self-driving cars, smartphones, or grey boxes which show no signs of interaction with their environment? Is it possible that intuition and theory of mind (ToM) may be misleading; that one wrongly associate consciousness with intelligence, human-like behaviour, and ability to react on stimuli? Perhaps we attribute consciousness to things that are not conscious, and that we miss to attribute it to things that really have vivid experiences. To address this question, many theories have been proposed that aim at explaining the emergence of consciousness and to give us tools to identify wherever consciousness may occur. The integrated information theory (IIT) (Tononi, 2004), is one of them. It originates in the dynamic core theory (Tononi & Edelman, 1998) and claims that consciousness is the same as integrated information. While some theories of consciousness only attempt to explain consciousness in neurobiological systems, the IIT is assumed to apply to non-biological systems. Parthemore and Whitby (2014) raise the concern that one may be tempted to reduce consciousness to some quantity X, where X might be e.g. integrated information, neural oscillations (the 40 Hz theory, Crick & Koch, 1990), etc. A system that models one of those theories may prematurely be believed to be conscious argue Parthemore and Whitby (2014). This tendency has been noted among researchers of machine consciousness, of some who have claimed their systems to have achieved at least minimal consciousness (Gamez, 2008a). The aim of this thesis is to take a closer look at the IIT and see how it responds to some of the major problems of consciousness. The focus will be on the mechanisms which AN EVALUATION OF THE IIT !6 the IIT hypothesises gives rise to conscious experience (Oizumi, Albantakis, & Tononi, 2014a), and how it corresponds to those identified by cognitive neurosciences. This thesis begins by offering a working definition of consciousness; that gives a starting point for what we are dealing with. Then it continues with an introduction to the IIT, which is the main focus of this thesis. I have tried to describe the theory in my own words, where some of more complex details not necessary for my argument are left out. 
I have taken some liberties in adapting the terminology to fit better with what I find elsewhere in cognitive neurosciences and consciousness science avoiding distorting the theory. Thereafter follows the problems of consciousness, which a theory of consciousness, such as IIT, should be able to explain. The problems explored in this thesis are the mind-body problem, the hard problem, the explanatory gap, the binding problem and the problem of objectively detecting consciousness. Each problem is used to evaluate the theory by looking at what explanations the theory is providing. Defining Consciousness What is this thing that is called consciousness and what does it mean to be conscious? Science doesn’t seem to provide with one clear definition of consciousness (Cotterill, 2003; Gardelle & Kouider, 2009; Revonsuo, 2010). When lay people talk about consciousness and being conscious they commonly refer to being attentive and aware and having intentions (Malle, 2009). Both John Searle (1990) and Giulio Tononi (Tononi, 2008, 2012a; Oizumi et al., 2014a) refer to consciousness as the thing that disappears when falling into dreamless sleep, or otherwise become unconscious, and reappears when we wake up or begin to dream. The problem with defining the term consciousness is that it seems to point to many different kinds of phenomena (Block, 1995). In an attempt to point it out and pin it down, the AN EVALUATION OF THE IIT !7 usage of the term needs to be narrowed down to fit the intended purpose. Cognition and neuroscientists alike commonly use terms such as non-conscious, unconscious, awake state, lucid dreaming, etc. which all refer to the subjective experience, but of different degrees, levels, and states (Revonsuo, 2009). Commonly used in discussions regarding consciousness are also terms such as reflective consciousness, self-consciousness, access consciousness, and functional consciousness. Those terms have little to do with the subjective experience per se, at best they describe some of the content of an experience, but mostly refer to observed behaviour (Block, 1995). It seems that researchers of artificial machine consciousness often steer away from the subjective experience. Instead, they focus on the use, the functions, and the expressions of consciousness, as it may be perceived by a third person (Gamez, 2008a). In this thesis, the term consciousness is used for the phenomenon of subjective experience, per se. It is what e.g. differs the awake state from dreamless sleep. It is what differs one’s own conscious thought processes from a regular computer’s nonconscious information processing, or one’s mindful thought from unconscious sensory-motoric control and automatic responses. It is what is lost during anaesthesia and epileptic seizures. Without consciousness, there wouldn’t be “something it is like to be” (Nagel, 1974, p. 436) and there would be no one there to experience the world (Tononi, 2008). Without it we would not experience anything. We would not even regard ourselves to be alive. It is the felt raw experience, even before it is attended to, considered and possible to report, i.e. what Block (1995) refers to as phenomenal consciousness. This is also often the starting point of cognitive and neurological theories of consciousness, which try to explain how experience emerge within the brain by exploring the differences between conscious and nonconscious states and processes. 
AN EVALUATION OF THE IIT !8 Introduction to the Integrated Information Theory Integrated information measures how much can be distinguished by the whole above and beyond its parts, and Φ is its symbol. A complex is where Φ reaches its maximum, and therein lives one consciousness—a single entity of experience. (Tononi, 2012b, p. 172) Historical Background The integrated information theory originates in the collected ideas of Tononi, Sporns, and Edelman (1992, 1994). In their early collaborative work, they developed a reentering model of visual binding which considered cortico-cortical connections as the basis for integration (Tononi et al., 1992). Two years later they presented a measure hypothesised to describe the neural complexity of functional integration in the brain (Tononi et al., 1994). The ideas of the reentering model and neural complexity measure developed into the more known dynamic core hypothesis (DCH) of the neural substrate of consciousness (Tononi & Edelman, 1998). The thalamocortical pathways played the foundation of sensory modality integration. In the DCH, a measure of integration based on entropy was introduced, which later became Φ, the measurement of integrated information (Tononi & Sporns, 2003). This laid the foundation for the information integration theory of consciousness (Tononi, 2004). The IIT is under constant development and has since it first was presented undergone three major revisions. The latest, at the time of writing, is referred to as version 3.0 (Oizumi et al., 2014a), which this thesis mostly relies on. The basic philosophical and theoretical assumptions have been preserved throughout the development of the theory. Some of the terminology and mathematics have changed between the versions (Oizumi, Amari, Yanagawa, Fujii, & Tsuchiya, 2015). Axioms and p", "title": "" }, { "docid": "a3b3380940613a5fb704727e41e9907a", "text": "Stackelberg Security Games (SSG) have been widely applied for solving real-world security problems - with a significant research emphasis on modeling attackers' behaviors to handle their bounded rationality. However, access to real-world data (used for learning an accurate behavioral model) is often limited, leading to uncertainty in attacker's behaviors while modeling. This paper therefore focuses on addressing behavioral uncertainty in SSG with the following main contributions: 1) we present a new uncertainty game model that integrates uncertainty intervals into a behavioral model to capture behavioral uncertainty, and 2) based on this game model, we propose a novel robust algorithm that approximately computes the defender's optimal strategy in the worst-case scenario of uncertainty. We show that our algorithm guarantees an additive bound on its solution quality.", "title": "" }, { "docid": "edf744b475ec90a123685b4f178506c0", "text": "Web servers are ubiquitous, remotely accessible, and often misconfigured. In addition, custom web-based applications may introduce vulnerabilities that are overlooked even by the most security-conscious server administrators. Consequently, web servers are a popular target for hackers. To mitigate the security exposure associated with web servers, intrusion detection systems are deployed to analyze and screen incoming requests. The goal is to perform early detection of malicious activity and possibly prevent more serious damage to the protected site. 
Even though intrusion detection is critical for the security of web servers, the intrusion detection systems available today only perform very simple analyses and are often vulnerable to simple evasion techniques. In addition, most systems do not provide sophisticated attack languages that allow a system administrator to specify custom, complex attack scenarios to be detected. This paper presents WebSTAT, an intrusion detection system that analyzes web requests looking for evidence of malicious behavior. The system is novel in several ways. First of all, it provides a sophisticated language to describe multistep attacks in terms of states and transitions. In addition, the modular nature of the system supports the integrated analysis of network traffic sent to the server host, operating system-level audit data produced by the server host, and the access logs produced by the web server. By correlating different streams of events, it is possible to achieve more effective detection of web-based attacks.", "title": "" }, { "docid": "1272563e64ca327aba1be96f2e045c30", "text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.", "title": "" }, { "docid": "d6a40f99a86b55584c52326240fc4170", "text": "In order to avoid wheel slippage or mechanical damage during the mobile robot navigation, it is necessary to smoothly change driving velocity or direction of the mobile robot. This means that dynamic constraints of the mobile robot should be considered in the design of path tracking algorithm. In the study, a path tracking problem is formulated as following a virtual target vehicle which is assumed to move exactly along the path with specified velocity. The driving velocity control law is designed basing on bang-bang control considering the acceleration bounds of driving wheels. The steering control law is designed by combining the bang-bang control with an intermediate path called the landing curve which guides the robot to smoothly land on the virtual target’s tangential line. The curvature and convergence analyses provide sufficient stability conditions for the proposed path tracking controller. A series of path tracking simulations and experiments conducted for a two-wheel driven mobile robot show the validity of the proposed algorithm.", "title": "" }, { "docid": "fdd63e1c0027f21af7dea9db9e084b26", "text": "To bring down the number of traffic accidents and increase people’s mobility companies, such as Robot Engineering Systems (RES) try to put automated vehicles on the road. RES is developing the WEpod, a shuttle capable of autonomously navigating through mixed traffic. 
This research has been done in cooperation with RES to improve the localization capabilities of the WEpod. The WEpod currently localizes using its GPS and lidar sensors. These have proven to be not accurate and reliable enough to safely navigate through traffic. Therefore, other methods of localization and mapping have been investigated. The primary method investigated in this research is monocular Simultaneous Localization and Mapping (SLAM). Based on literature and practical studies, ORB-SLAM has been chosen as the implementation of SLAM. Unfortunately, ORB-SLAM is unable to initialize the setup when applied on WEpod images. Literature has shown that this problem can be solved by adding depth information to the inputs of ORB-SLAM. Obtaining depth information for the WEpod images is not an arbitrary task. The sensors on the WEpod are not capable of creating the required dense depth-maps. A Convolutional Neural Network (CNN) could be used to create the depth-maps. This research investigates whether adding a depth-estimating CNN solves this initialization problem and increases the tracking accuracy of monocular ORB-SLAM. A well performing CNN is chosen and combined with ORB-SLAM. Images pass through the depth estimating CNN to obtain depth-maps. These depth-maps together with the original images are used in ORB-SLAM, keeping the whole setup monocular. ORB-SLAM with the CNN is first tested on the Kitti dataset. The Kitti dataset is used since monocular ORBSLAM initializes on Kitti images and ground-truth depth-maps can be obtained for Kitti images. Monocular ORB-SLAM’s tracking accuracy has been compared to ORB-SLAM with ground-truth depth-maps and to ORB-SLAM with estimated depth-maps. This comparison shows that adding estimated depth-maps increases the tracking accuracy of ORB-SLAM, but not as much as the ground-truth depth images. The same setup is tested on WEpod images. The CNN is fine-tuned on 7481 Kitti images as well as on 642 WEpod images. The performance on WEpod images of both CNN versions are compared, and used in combination with ORB-SLAM. The CNN fine-tuned on the WEpod images does not perform well, missing details in the estimated depth-maps. However, this is enough to solve the initialization problem of ORB-SLAM. The combination of ORB-SLAM and the Kitti fine-tuned CNN has a better tracking accuracy than ORB-SLAM with the WEpod fine-tuned CNN. It has been shown that the initialization problem on WEpod images is solved as well as the tracking accuracy is increased. These results show that the initialization problem of monocular ORB-SLAM on WEpod images is solved by adding the CNN. This makes it applicable to improve the current localization methods on the WEpod. Using only this setup for localization on the WEpod is not possible yet, more research is necessary. Adding this setup to the current localization methods of the WEpod could increase the localization of the WEpod. This would make it safer for the WEpod to navigate through traffic. This research sets the next step into creating a fully autonomous vehicle which reduces traffic accidents and increases the mobility of people.", "title": "" }, { "docid": "1fe0bfec531eac34bd81a11b3d5cf1ab", "text": "We demonstrate an advanced ReRAM based analog artificial synapse for neuromorphic systems. Nitrogen doped TiN/PCMO based artificial synapse is proposed to improve the performance and reliability of the neuromorphic systems by using simple identical spikes. 
For the first time, we develop fully unsupervised learning with the proposed analog synapses, which is illustrated with the help of auditory and electroencephalography (EEG) applications.", "title": "" } ]
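As a rough, hypothetical aside on the ReRAM synapse result above: unsupervised learning driven by simple identical spikes is commonly simulated with a timing-dependent conductance update. The sketch below is a generic assumed model (the class name, constants, and exponential update rule are illustrative choices, not details taken from that work):

```python
import numpy as np

# Generic, illustrative model of an analog (ReRAM-like) synapse driven by
# identical pre/post spikes; constants and the update rule are assumptions.
class AnalogSynapse:
    def __init__(self, g_min=1e-6, g_max=1e-4, g_init=5e-5):
        self.g_min, self.g_max = g_min, g_max
        self.g = g_init  # device conductance plays the role of the synaptic weight

    def update(self, t_pre, t_post, a_plus=0.05, a_minus=0.03, tau=20e-3):
        """Timing-dependent update: pre-before-post potentiates, otherwise depress."""
        dt = t_post - t_pre
        if dt >= 0:
            dg = a_plus * np.exp(-dt / tau) * (self.g_max - self.g)
        else:
            dg = -a_minus * np.exp(dt / tau) * (self.g - self.g_min)
        self.g = float(np.clip(self.g + dg, self.g_min, self.g_max))
        return self.g

syn = AnalogSynapse()
print(syn.update(t_pre=0.000, t_post=0.005))  # small potentiation step
```

Repeatedly presenting correlated spike pairs then strengthens synapses that consistently see pre-before-post timing, which is the basic ingredient such synaptic arrays exploit for unsupervised feature learning.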
scidocsrr
99ca4f824ee164596049ffe436fb6baf
A next generation knowledge management system architecture
[ { "docid": "d22390e43aa4525d810e0de7da075bbf", "text": "information, including knowledge management and e-business applications. Next-generation knowledge management systems will likely rely on conceptual models in the form of ontologies to precisely define the meaning of various symbols. For example, FRODO (a Framework for Distributed Organizational Memories) uses ontologies for knowledge description in organizational memories,1 CoMMA (Corporate Memory Management through Agents) investigates agent technologies for maintaining ontology-based knowledge management systems,2 and Steffen Staab and his colleagues have discussed the methodologies and processes for building ontology-based systems.3 Here we present an integrated enterprise-knowledge management architecture for implementing an ontology-based knowledge management system (OKMS). We focus on two critical issues related to working with ontologies in real-world enterprise applications. First, we realize that imposing a single ontology on the enterprise is difficult if not impossible. Because organizations must devise multiple ontologies and thus require integration mechanisms, we consider means for combining distributed and heterogeneous ontologies using mappings. Additionally, a system’s ontology often must reflect changes in system requirements and focus, so we developed guidelines and an approach for managing the difficult and complex ontology-evolution process.", "title": "" } ]
[ { "docid": "75dfe6e25e7c542d13d1712112da4712", "text": "Obtaining data in the real world is subject to imperfections and the appearance of noise is a common consequence of such flaws. In classification, class noise will deteriorate the performance of a classifier, as it may severely mislead the model building. Among the strategies emerged to deal with class noise, the most popular is that of filtering. However, instance filtering can be harmful as it may eliminate more examples than necessary or produce loss of information. An ideal option would be relabeling the noisy instances, avoiding losing data, but instance correcting is harder to achieve and may lead to wrong information being introduced in the dataset. For this reason, we advance a new proposal based on an ensemble of noise filters with the goal not only of accurately filtering the mislabeled instances, but also correcting them when possible. A noise score is also applied to support the filtering and relabeling process. The proposal, named CNC-NOS (Class Noise Cleaner with Noise Scoring), is compared against state-of-the-art noise filters and correctors, showing that it is able to deliver a quality training instance set that overcomes the limitations of such techniques, both in terms of classification accuracy and properly treated instances.", "title": "" }, { "docid": "cefa0a3c3a80fa0a170538abdb3f7e46", "text": "This tutorial introduces the basics of emerging nonvolatile memory (NVM) technologies including spin-transfer-torque magnetic random access memory (STTMRAM), phase-change random access memory (PCRAM), and resistive random access memory (RRAM). Emerging NVM cell characteristics are summarized, and device-level engineering trends are discussed. Emerging NVM array architectures are introduced, including the one-transistor-one-resistor (1T1R) array and the cross-point array with selectors. Design challenges such as scaling the write current and minimizing the sneak path current in cross-point array are analyzed. Recent progress on megabit-to gigabit-level prototype chip demonstrations is summarized. Finally, the prospective applications of emerging NVM are discussed, ranging from the last-level cache to the storage-class memory in the memory hierarchy. Topics of three-dimensional (3D) integration and radiation-hard NVM are discussed. Novel applications beyond the conventional memory applications are also surveyed, including physical unclonable function for hardware security, reconfigurable routing switch for field-programmable gate array (FPGA), logic-in-memory and nonvolatile cache/register/flip-flop for nonvolatile processor, and synaptic device for neuro-inspired computing.", "title": "" }, { "docid": "3608939d057889c2731b12194ef28ea6", "text": "Permanent magnets with rare earth materials are widely used in interior permanent magnet synchronous motors (IPMSMs) in Hybrid Electric Vehicles (HEVs). The recent price rise of rare earth materials has become a serious concern. A Switched Reluctance Motor (SRM) is one of the candidates for HEV rare-earth-free-motors. An SRM has been developed with dimensions, maximum torque, operating area, and maximum efficiency that all compete with the IPMSM. The efficiency map of the SRM is different from that of the IPMSM; thus, direct comparison has been rather difficult. In this paper, a comparison of energy consumption between the SRM and the IPMSM using four standard driving schedules is carried out. 
In HWFET and NEDC driving schedules, the SRM is found to have better efficiency because its efficiency is high at the high-rotational-speed region.", "title": "" }, { "docid": "836818987ad40fd67d43fbc26f4bdc0f", "text": "Although psilocybin has been used for centuries for religious purposes, little is known scientifically about its acute and persisting effects. This double-blind study evaluated the acute and longer-term psychological effects of a high dose of psilocybin relative to a comparison compound administered under comfortable, supportive conditions. The participants were hallucinogen-naïve adults reporting regular participation in religious or spiritual activities. Two or three sessions were conducted at 2-month intervals. Thirty volunteers received orally administered psilocybin (30 mg/70 kg) and methylphenidate hydrochloride (40 mg/70 kg) in counterbalanced order. To obscure the study design, six additional volunteers received methylphenidate in the first two sessions and unblinded psilocybin in a third session. The 8-h sessions were conducted individually. Volunteers were encouraged to close their eyes and direct their attention inward. Study monitors rated volunteers’ behavior during sessions. Volunteers completed questionnaires assessing drug effects and mystical experience immediately after and 2 months after sessions. Community observers rated changes in the volunteer’s attitudes and behavior. Psilocybin produced a range of acute perceptual changes, subjective experiences, and labile moods including anxiety. Psilocybin also increased measures of mystical experience. At 2 months, the volunteers rated the psilocybin experience as having substantial personal meaning and spiritual significance and attributed to the experience sustained positive changes in attitudes and behavior consistent with changes rated by community observers. When administered under supportive conditions, psilocybin occasioned experiences similar to spontaneously occurring mystical experiences. The ability to occasion such experiences prospectively will allow rigorous scientific investigations of their causes and consequences.", "title": "" }, { "docid": "5fc02317117c3068d1409a42b025b018", "text": "Explaining the causes of infeasibility of Boolean formulas has practical applications in numerous fields, such as artificial intelligence (repairing inconsistent knowledge bases), formal verification (abstraction refinement and unbounded model checking), and electronic design (diagnosing and correcting infeasibility). Minimal unsatisfiable subformulas (MUSes) provide useful insights into the causes of infeasibility. An unsatisfiable formula often has many MUSes. Based on the application domain, however, MUSes with specific properties might be of interest. In this paper, we tackle the problem of finding a smallest-cardinality MUS (SMUS) of a given formula. An SMUS provides a succinct explanation of infeasibility and is valuable for applications that are heavily affected by the size of the explanation. We present (1) a baseline algorithm for finding an SMUS, founded on earlier work for finding all MUSes, and (2) a new branch-and-bound algorithm called Digger that computes a strong lower bound on the size of an SMUS and splits the problem into more tractable subformulas in a recursive search tree. Using two benchmark suites, we experimentally compare Digger to the baseline algorithm and to an existing incomplete genetic algorithm approach. Digger is shown to be faster in nearly all cases. 
It is also able to solve far more instances within a given runtime limit than either of the other approaches.", "title": "" }, { "docid": "80c745ee8535d9d53819ced4ad8f996d", "text": "Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR).", "title": "" }, { "docid": "809d03fd69aebc7573463756a535de18", "text": "We describe Venture, an interactive virtual machine for probabilistic programming that aims to be sufficiently expressive, extensible, and efficient for general-purpose use. Like Church, probabilistic models and inference problems in Venture are specified via a Turing-complete, higher-order probabilistic language descended from Lisp. Unlike Church, Venture also provides a compositional language for custom inference strategies, assembled from scalable implementations of several exact and approximate techniques. Venture is thus applicable to problems involving widely varying model families, dataset sizes and runtime/accuracy constraints. We also describe four key aspects of Venture’s implementation that build on ideas from probabilistic graphical models. First, we describe the stochastic procedure interface (SPI) that specifies and encapsulates primitive random variables, analogously to conditional probability tables in a Bayesian network. The SPI supports custom control flow, higher-order probabilistic procedures, partially exchangeable sequences and “likelihood-free” stochastic simulators, all with custom proposals. It also supports the integration of external models that dynamically create, destroy and perform inference over latent variables hidden from Venture. Second, we describe probabilistic execution traces (PETs), which represent execution histories of Venture programs. Like Bayesian networks, PETs capture conditional dependencies, but PETs also represent existential dependencies and exchangeable coupling. Third, we describe partitions of execution histories called scaffolds that can be efficiently constructed from PETs and that factor global inference problems into coherent sub-problems. Finally, we describe a family of stochastic regeneration algorithms for efficiently modifying PET fragments contained within scaffolds without visiting conditionally independent random choices. 
Stochastic regeneration insulates inference algorithms from the complexities introduced by changes in execution structure, with runtime that scales linearly in cases where previous approaches often scaled quadratically and were therefore impractical. We show how to use stochastic regeneration and the SPI to implement general-purpose inference strategies such as Metropolis-Hastings, Gibbs sampling, and blocked proposals based on hybrids with both particle Markov chain Monte Carlo and mean-field variational inference techniques.", "title": "" }, { "docid": "c62742c65b105a83fa756af9b1a45a37", "text": "This article treats numerical methods for tracking an implicitly defined path. The numerical precision required to successfully track such a path is difficult to predict a priori, and indeed, it may change dramatically through the course of the path. In current practice, one must either choose a conservatively large numerical precision at the outset or re-run paths multiple times in successively higher precision until success is achieved. To avoid unnecessary computational cost, it would be preferable to adaptively adjust the precision as the tracking proceeds in response to the local conditioning of the path. We present an algorithm that can be set to either reactively adjust precision in response to step failure or proactively set the precision using error estimates. We then test the relative merits of reactive and proactive adaptation on several examples arising as homotopies for solving systems of polynomial equations.", "title": "" }, { "docid": "3cd2bfe8257f2212513ecd614f6b9fef", "text": "Carbon aerogels demonstrate wide applications for their ultralow density, rich porosity, and multifunctionalities. Their compressive elasticity has been achieved by different carbons. However, reversibly high stretchability of neat carbon aerogels is still a great challenge owing to their extremely dilute brittle interconnections and poorly ductile cells. Here we report highly stretchable neat carbon aerogels with a retractable 200% elongation through hierarchical synergistic assembly. The hierarchical buckled structures and synergistic reinforcement between graphene and carbon nanotubes enable a temperature-invariable, recoverable stretching elasticity with small energy dissipation (~0.1, 100% strain) and high fatigue resistance more than 106 cycles. The ultralight carbon aerogels with both stretchability and compressibility were designed as strain sensors for logic identification of sophisticated shape conversions. Our methodology paves the way to highly stretchable carbon and neat inorganic materials with extensive applications in aerospace, smart robots, and wearable devices. Improved compressive elasticity was lately demonstrated for carbon aerogels but the problem of reversible stretchability remained a challenge. Here the authors use a hierarchical structure design and synergistic effects between carbon nanotubes and graphene to achieve high stretchability in carbon aerogels.", "title": "" }, { "docid": "0d28ddef1fa86942da679aec23dff890", "text": "Electronic patient records remain a rather unexplored, but potentially rich data source for discovering correlations between diseases. We describe a general approach for gathering phenotypic descriptions of patients from medical records in a systematic and non-cohort dependent manner. 
By extracting phenotype information from the free-text in such records we demonstrate that we can extend the information contained in the structured record data, and use it for producing fine-grained patient stratification and disease co-occurrence statistics. The approach uses a dictionary based on the International Classification of Disease ontology and is therefore in principle language independent. As a use case we show how records from a Danish psychiatric hospital lead to the identification of disease correlations, which subsequently can be mapped to systems biology frameworks.", "title": "" }, { "docid": "dd9e89b7e0c70fcc542a185d6bd98763", "text": "This study describes metaphorical conceptualizations of the foreign exchange market held by market participants and examines how these metaphors socially construct the financial market. Findings are based on 55 semi-structured interviews with senior foreign exchange experts at banks and at financial news providers in Europe. We analysed interview transcripts by metaphor analysis, a method based on cognitive linguistics. Results indicate that market participants' understanding of financial markets revolves around seven metaphors, namely the market as a bazaar, as a machine, as gambling, as sports, as war, as a living being and as an ocean. Each of these metaphors highlights and conceals certain aspects of the foreign exchange market and entails a different set of implications on crucial market dimensions, such as the role of other market participants and market predictability. A correspondence analysis supports our assumption that metaphorical thinking corresponds with implicit assumptions about market predictability. A comparison of deliberately generated and implicitly used metaphors reveals notable differences. In particular, implicit metaphors are predominantly organic rather than mechanical. In contrast to academic models, interactive and organic metaphors, and not the machine metaphor, dominate the market accounts of participants.", "title": "" }, { "docid": "3ad25dabe3b740a91b939a344143ea9e", "text": "Recently, much attention in research and practice has been devoted to the topic of IT consumerization, referring to the adoption of private consumer IT in the workplace. However, research lacks an analysis of possible antecedents of the trend on an individual level. To close this gap, we derive a theoretical model for IT consumerization behavior based on the theory of planned behavior and perform a quantitative analysis. Our investigation shows that it is foremost determined by normative pressures, specifically the behavior of friends, co-workers and direct supervisors. In addition, behavioral beliefs and control beliefs were found to affect the intention to use non-corporate IT. With respect to the former, we found expected performance improvements and an increase in ease of use to be two of the key determinants. As for the latter, especially monetary costs and installation knowledge were correlated with IT consumerization intention.", "title": "" }, { "docid": "0d0d11c1e340e67939cfba0cde4783ed", "text": "Recent research effort in poem composition has focused on the use of automatic language generation to produce a polished poem. A less explored question is how effectively a computer can serve as an interactive assistant to a poet. For this purpose, we built a web application that combines rich linguistic knowledge from classical Chinese philology with statistical natural language processing techniques. 
The application assists users in composing a ‘couplet’—a pair of lines in a traditional Chinese poem—by making suggestions for the next and corresponding characters. A couplet must meet a complicated set of requirements on phonology, syntax, and parallelism, which are challenging for an amateur poet to master. The application checks conformance to these requirements and makes suggestions for characters based on lexical, syntactic, and semantic properties. A distinguishing feature of the application is its extensive use of linguistic knowledge, enabling it to inform users of specific phonological principles in detail, and to explicitly model semantic parallelism, an essential characteristic of Chinese poetry. We evaluate the quality of poems composed solely with characters suggested by the application, and the coverage of its character suggestions. .................................................................................................................................................................................", "title": "" }, { "docid": "edba5ee93ead361ac4398c0f06d3ba06", "text": "We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT, the Web inventory that repurposes the original content of the TED website in a way which is more convenient for MT researchers. The benchmark consists of about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately aligned and rearranged in sentences, for a total of about 3.5M tokens per language. Talks have been partitioned in train, development and test sets similarly in all respects to the MT tasks of the IWSLT 2016 evaluation campaign. In addition to describing the benchmark, we list the problems encountered in preparing it and the novel methods designed to solve them. Baseline MT results and some measures on sentence length are provided as an extrinsic evaluation of the quality of the benchmark.", "title": "" }, { "docid": "22bbeceff175ee2e9a462b753ce24103", "text": "BACKGROUND\nEUS-guided FNA can help diagnose and differentiate between various pancreatic and other lesions.The aim of this study was to compare approaches among involved/relevant physicians to the controversies surrounding the use of FNA in EUS.\n\n\nMETHODS\nA five-case survey was developed, piloted, and validated. It was collected from a total of 101 physicians, who were all either gastroenterologists (GIs), surgeons or oncologists. The survey compared the management strategies chosen by members of these relevant disciplines regarding EUS-guided FNA.\n\n\nRESULTS\nFor CT operable T2NOM0 pancreatic tumors the research demonstrated variance as to whether to undertake EUS-guided FNA, at p < 0.05. For inoperable pancreatic tumors 66.7% of oncologists, 62.2% of surgeons and 79.1% of GIs opted for FNA (p < 0.05). For cystic pancreatic lesions, oncologists were more likely to send patients to surgery without FNA. For stable simple pancreatic cysts (23 mm), most physicians (66.67%) did not recommend FNA. For a submucosal gastric 19 mm lesion, 63.2% of surgeons recommended FNA, vs. 90.0% of oncologists (p < 0.05).\n\n\nCONCLUSIONS\nControversies as to ideal application of EUS-FNA persist. Optimal guidelines should reflect the needs and concerns of the multidisciplinary team who treat patients who need EUS-FNA. 
Multi-specialty meetings assembled to manage patients with these disorders may be enlightening and may help develop consensus.", "title": "" }, { "docid": "ab97caed9c596430c3d76ebda55d5e6e", "text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.", "title": "" }, { "docid": "57bebb90000790a1d76a400f69d5736d", "text": "In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projec-tion(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.", "title": "" }, { "docid": "844a39889bd671a8b9abe085b2e0a982", "text": "1 One may wonder, ...] how complex organisms evolve at all. They seem to have so many genes, so many multiple or pleiotropic eeects of any one gene, so many possibilities for lethal mutations in early development, and all sorts of problems due to their long development. Abstract: The problem of complex adaptations is studied in two largely disconnected research traditions: evolutionary biology and evolutionary computer science. This paper summarizes the results from both areas and compares their implications. In evolutionary computer science it was found that the Darwinian process of mutation, recombination and selection is not universally eeective in improving complex systems like computer programs or chip designs. For adaptation to occur, these systems must possess \"evolvability\", i.e. the ability of random variations to sometimes produce improvement. It was found that evolvability critically depends on the way genetic variation maps onto phenotypic variation, an issue known as the representation problem. The genotype-phenotype map determines the variability of characters, which is the propensity to vary. 
Variability needs to be distinguished from variation, which are the actually realized differences between individuals. The genotype-phenotype map is the common theme underlying such varied biological phenomena as genetic canalization, developmental constraints, biological versatility, developmental dissociability, morphological integration, and many more. For evolutionary biology the representation problem has important implications: how is it that extant species acquired a genotype-phenotype map which allows improvement by mutation and selection? Is the genotype-phenotype map able to change in evolution? What are the selective forces, if any, that shape the genotype-phenotype map? We propose that the genotype-phenotype map can evolve by two main routes: epistatic mutations, or the creation of new genes. A common result for organismic design is modularity. By modularity we mean a genotype-phenotype map in which there are few pleiotropic effects among characters serving different functions, with pleiotropic effects falling mainly among characters that are part of a single functional complex. Such a design is expected to improve evolvability by limiting the interference between the adaptation of different functions. Several population genetic models are reviewed that are intended to explain the evolutionary origin of a modular design. While our current knowledge is insufficient to assess the plausibility of these models, they form the beginning of a framework for understanding the evolution of the genotype-phenotype map.", "title": "" }, { "docid": "67e008db2a218b4e307003c919a32a8a", "text": "Relay deployment in Orthogonal Frequency Division Multiple Access (OFDMA) based cellular networks helps in coverage extension and/or capacity improvement. To quantify capacity improvement, the blocking probability of voice traffic is typically calculated using the Erlang B formula (a numerical sketch of this formula follows this passage list). This calculation is based on the assumption that all users require the same amount of resources to satisfy their rate requirement. However, in an OFDMA system, each user requires a different number of subcarriers to meet its rate requirement. This resource requirement depends on the Signal to Interference Ratio (SIR) experienced by a user. Therefore, the Erlang B formula cannot be employed to compute blocking probability in an OFDMA network. In this paper, we determine an analytical expression to compute the blocking probability of a relay-based cellular OFDMA network. We determine an expression for the probability distribution of the user's resource requirement based on its experienced SIR. Then, we classify the users into various classes depending upon their subcarrier requirement. We consider the system to be a multi-dimensional system with different classes and evaluate the blocking probability of the system using the multi-dimensional Erlang loss formulas. This model is useful in the performance evaluation, design, planning of resources and call admission control of relay-based cellular OFDMA networks like LTE.", "title": "" }, { "docid": "7c8f318224a5ca8ffd12ea32c2a560cf", "text": "BACKGROUND\nDaily bathing with chlorhexidine gluconate (CHG) is increasingly used in intensive care units to prevent hospital-associated infections, but limited evidence exists for noncritical care settings.\n\n\nMETHODS\nA prospective crossover study was conducted on 4 medical inpatient units in an urban, academic Canadian hospital from May 1, 2014-August 10, 2015.
Intervention units used CHG over a 7-month period, including a 1-month wash-in phase, while control units used nonmedicated soap and water bathing. Rates of hospital-associated methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococcus (VRE) colonization or infection were the primary end point. Hospital-associated S. aureus were investigated for CHG resistance with a qacA/B and smr polymerase chain reaction (PCR) and agar dilution.\n\n\nRESULTS\nCompliance with daily CHG bathing was 58%. Hospital-associated MRSA and VRE was decreased by 55% (5.1 vs 11.4 cases per 10,000 inpatient days, P = .04) and 36% (23.2 vs 36.0 cases per 10,000 inpatient days, P = .03), respectively, compared with control cohorts. There was no significant difference in rates of hospital-associated Clostridium difficile. Chlorhexidine resistance testing identified 1 isolate with an elevated minimum inhibitory concentration (8 µg/mL), but it was PCR negative.\n\n\nCONCLUSIONS\nThis prospective pragmatic study to assess daily bathing for CHG on inpatient medical units was effective in reducing hospital-associated MRSA and VRE. A critical component of CHG bathing on medical units is sustained and appropriate application, which can be a challenge to accurately assess and needs to be considered before systematic implementation.", "title": "" } ]
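The relay-OFDMA passage in the list above starts from the classical Erlang B formula before generalizing to a multi-dimensional loss model over subcarrier classes. Only the baseline single-class formula is illustrated here; the multi-class extension from that paper is not reproduced. A standard, numerically stable recurrence for Erlang B is:

```python
def erlang_b(offered_load_erlangs: float, channels: int) -> float:
    """Blocking probability from the Erlang B recurrence:
    B(0) = 1,  B(k) = A * B(k-1) / (k + A * B(k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = offered_load_erlangs * b / (k + offered_load_erlangs * b)
    return b

print(round(erlang_b(20.0, 30), 4))  # 20 Erlangs offered to 30 channels
```

For example, 20 Erlangs offered to 30 channels gives a little under 1% blocking; the paper's point is that this single-class calculation breaks down once users need different numbers of subcarriers depending on their SIR.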
scidocsrr
2e3f4dbfecdf6b4835e0c068b916cca7
What Motivates Consumers to Write Online Travel Reviews?
[ { "docid": "1993b540ff91922d381128e9c8592163", "text": "The use of the WWW as a venue for voicing opinions, complaints and recommendations on products and firms has been widely reported in the popular media. However little is known how consumers use these reviews and if they subsequently have any influence on evaluations and purchase intentions of products and retailers. This study examines the effect of negative reviews on retailer evaluation and patronage intention given that the consumer has already made a product/brand decision. Our results indicate that the extent of WOM search depends on the consumer’s reasons for choosing an online retailer. Further the influence of negative WOM information on perceived reliability and purchase intentions is determined largely by familiarity with the retailer and differs based on whether the retailer is a pure-Internet or clicks-and-mortar firm. Managerial implications for positioning strategies to minimize the effect of negative word-ofmouth have been discussed.", "title": "" }, { "docid": "c57cbe432fdab3f415d2c923bea905ff", "text": "Through Web-based consumer opinion platforms (e.g., epinions.com), the Internet enables customers to share their opinions on, and experiences with, goods and services with a multitude of other consumers; that is, to engage in electronic wordof-mouth (eWOM) communication. Drawing on findings from research on virtual communities and traditional word-of-mouth literature, a typology for motives of consumer online articulation is © 2004 Wiley Periodicals, Inc. and Direct Marketing Educational Foundation, Inc.", "title": "" } ]
[ { "docid": "39007b91989c42880ff96e7c5bdcf519", "text": "Feature selection has aroused considerable research interests during the last few decades. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed as the joint embedding learning and sparse regression (JELSR), in which the embedding learning and sparse regression are jointly performed. Specifically, the proposed JELSR joins embedding learning with sparse regression to perform feature selection. To show the effectiveness of the proposed framework, we also provide a method using the weight via local linear approximation and adding the ℓ2,1-norm regularization, and design an effective algorithm to solve the corresponding optimization problem. Furthermore, we also conduct some insightful discussion on the proposed feature selection approach, including the convergence analysis, computational complexity, and parameter determination. In all, the proposed framework not only provides a new perspective to view traditional methods but also evokes some other deep researches for feature selection. Compared with traditional unsupervised feature selection methods, our approach could integrate the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice data and biological data, have validated the effectiveness of our proposed algorithm.", "title": "" }, { "docid": "7e439ac3ff2304b6e1aaa098ff44b0cb", "text": "Geological structures, such as faults and fractures, appear as image discontinuities or lineaments in remote sensing data. Geologic lineament mapping is a very important issue in geo-engineering, especially for construction site selection, seismic, and risk assessment, mineral exploration and hydrogeological research. Classical methods of lineaments extraction are based on semi-automated (or visual) interpretation of optical data and digital elevation models. We developed a freely available Matlab based toolbox TecLines (Tectonic Lineament Analysis) for locating and quantifying lineament patterns using satellite data and digital elevation models. TecLines consists of a set of functions including frequency filtering, spatial filtering, tensor voting, Hough transformation, and polynomial fitting. Due to differences in the mathematical background of the edge detection and edge linking procedure as well as the breadth of the methods, we introduce the approach in two-parts. In this first study, we present the steps that lead to edge detection. We introduce the data pre-processing using selected filters in spatial and frequency domains. We then describe the application of the tensor-voting framework to improve position and length accuracies of the detected lineaments. We demonstrate the robustness of the approach in a complex area in the northeast of Afghanistan using a panchromatic QUICKBIRD-2 image with 1-meter resolution. Finally, we compare the results of TecLines with manual lineament extraction, and other lineament extraction algorithms, as well as a published fault map of the study area. OPEN ACCESS Remote Sens. 2014, 6 5939", "title": "" }, { "docid": "1feaf48291b7ea83d173b70c23a3b7c0", "text": "Machine learning plays a critical role in extracting meaningful information out of the zetabytes of sensor data collected every day. 
For some applications, the goal is to analyze and understand the data to identify trends (e.g., surveillance, portable/wearable electronics); in other applications, the goal is to take immediate action based the data (e.g., robotics/drones, self-driving cars, smart Internet of Things). For many of these applications, local embedded processing near the sensor is preferred over the cloud due to privacy or latency concerns, or limitations in the communication bandwidth. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to throughput and accuracy requirements. Furthermore, flexibility is often required such that the processing can be adapted for different applications or environments (e.g., update the weights and model in the classifier). In many applications, machine learning often involves transforming the input data into a higher dimensional space, which, along with programmable weights, increases data movement and consequently energy consumption. In this paper, we will discuss how these challenges can be addressed at various levels of hardware design ranging from architecture, hardware-friendly algorithms, mixed-signal circuits, and advanced technologies (including memories and sensors).", "title": "" }, { "docid": "358423f8ef08080935f280d71ae921a0", "text": "Many of contemporary computer and machine vision applications require finding of corresponding points across multiple images. To that goal, among many features, the most commonly used are corner points. Corners are formed by two or more edges, and mark the boundaries of objects or boundaries between distinctive object parts. This makes corners the feature points that used in a wide range of tasks. Therefore, numerous corner detectors with different properties have been developed. In this paper, we present a complete FPGA architecture implementing corer detection. This architecture is based on the FAST algorithm. The proposed solution is capable of processing the incoming image data with the speed of hundreds of frames per second for a 512 × , 8-bit gray-scale image. The speed is comparable to the results achieved by top-of-the-shelf general purpose processors. However, the use of inexpensive FPGA allows to cut costs, power consumption and to reduce the footprint of a complete system solution. The paper includes also a brief description of the implemented algorithm, resource usage summary, resulting images, as well as block diagrams of the described architecture.", "title": "" }, { "docid": "6c07520a738f068f1bc3bdb8e3fda89b", "text": "We analyze the role of the Global Brain in the sharing economy, by synthesizing the notion of distributed intelligence with Goertzel’s concept of an offer network. An offer network is an architecture for a future economic system based on the matching of offers and demands without the intermediate of money. Intelligence requires a network of condition-action rules, where conditions represent challenges that elicit action in order to solve a problem or exploit an opportunity. In society, opportunities correspond to offers of goods or services, problems to demands. Tackling challenges means finding the best sequences of condition-action rules to connect all demands to the offers that can satisfy them. This can be achieved with the help of AI algorithms working on a public database of rules, demands and offers. 
Such a system would provide a universal medium for voluntary collaboration and economic exchange, efficiently coordinating the activities of all people on Earth. It would replace and subsume the patchwork of commercial and community-based sharing platforms presently running on the Internet. It can in principle resolve the traditional problems of the capitalist economy: poverty, inequality, externalities, poor sustainability and resilience, booms and busts, and the neglect of non-monetizable values.", "title": "" }, { "docid": "c49ed75ce48fb92db6e80e4fe8af7127", "text": "The One Class Classification (OCC) problem is different from the conventional binary/multi-class classification problem in the sense that in OCC, the negative class is either not present or not properly sampled. The problem of classifying positive (or target) cases in the absence of appropriately-characterized negative cases (or outliers) has gained increasing attention in recent years. Researchers have addressed the task of OCC by using different methodologies in a variety of application domains. In this paper we formulate a taxonomy with three main categories based on the way OCC has been envisaged, implemented and applied by various researchers in different application domains. We also present a survey of current state-of-the-art OCC algorithms, their importance, applications and limitations.", "title": "" }, { "docid": "7c10a44e5fa0f9e01951e89336c4b4d6", "text": "Previous studies have examined the online research behaviors of graduate students in terms of how they seek and retrieve research-related information on the Web across diverse disciplines. However, few have focused on graduate students’ searching activities, and particularly for their research tasks. Drawing on Kuiper, Volman, and Terwel’s (2008) three aspects of web literacy skills (searching, reading, and evaluating), this qualitative study aims to better understand a group of graduate engineering students’ searching, reading, and evaluating processes for research purposes. Through in-depth interviews and the think-aloud protocol, we compared the strategies employed by 22 Taiwanese graduate engineering students. The results showed that the students’ online research behaviors included seeking and obtaining, reading and interpreting, and assessing and evaluating sources. The findings suggest that specialized training for preparing novice researchers to critically evaluate relevant information or scholarly work to fulfill their research purposes is needed. Implications for enhancing the information literacy of engineering students are discussed.", "title": "" }, { "docid": "1a65a6e22d57bb9cd15ba01943eeaa25", "text": "+ optimal local factor – expensive for general obs. + exploit conj. graph structure + arbitrary inference queries + natural gradients – suboptimal local factor + fast for general obs. – does all local inference – limited inference queries – no natural gradients ± optimal given conj. evidence + fast for general obs. + exploit conj. graph structure + arbitrary inference queries + some natural gradients", "title": "" }, { "docid": "80a61f27dab6a8f71a5c27437254778b", "text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. 
To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.", "title": "" }, { "docid": "8770cfba83e16454e5d7244201d47628", "text": "Representing documents is a crucial component in many NLP tasks, for instance predicting aspect ratings in reviews. Previous methods for this task treat documents globally, and do not acknowledge that target categories are often assigned by their authors with generally no indication of the specific sentences that motivate them. To address this issue, we adopt a weakly supervised learning model, which jointly learns to focus on relevant parts of a document according to the context along with a classifier for the target categories. Derived from the weighted multiple-instance regression (MIR) framework, the model learns decomposable document vectors for each individual category and thus overcomes the representational bottleneck in previous methods due to a fixed-length document vector. During prediction, the estimated relevance or saliency weights explicitly capture the contribution of each sentence to the predicted rating, thus offering an explanation of the rating. Our model achieves state-of-the-art performance on multi-aspect sentiment analysis, improving over several baselines. Moreover, the predicted saliency weights are close to human estimates obtained by crowdsourcing, and increase the performance of lexical and topical features for review segmentation and summarization.", "title": "" }, { "docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c", "text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.", "title": "" }, { "docid": "80b514540933a9cc31136c8cb86ec9b3", "text": "We tackle the problem of detecting occluded regions in a video stream. Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. 
We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.", "title": "" }, { "docid": "18fd966db335ee53ff4d82781c2f81d8", "text": "Disastrous events are cordially involved with the momentum of nature. As such mishaps have been showing off own mastery, situations have gone beyond the control of human resistive mechanisms far ago. Fortunately, several technologies are in service to gain affirmative knowledge and analysis of a disaster’s occurrence. Recently, Internet of Things (IoT) paradigm has opened a promising door toward catering of multitude problems related to agriculture, industry, security, and medicine due to its attractive features, such as heterogeneity, interoperability, light-weight, and flexibility. This paper surveys existing approaches to encounter the relevant issues with disasters, such as early warning, notification, data analytics, knowledge aggregation, remote monitoring, real-time analytics, and victim localization. Simultaneous interventions with IoT are also given utmost importance while presenting these facts. A comprehensive discussion on the state-of-the-art scenarios to handle disastrous events is presented. Furthermore, IoT-supported protocols and market-ready deployable products are summarized to address these issues. Finally, this survey highlights open challenges and research trends in IoT-enabled disaster management systems.", "title": "" }, { "docid": "ca932a0b6b71f009f95bad6f2f3f8a38", "text": "Page 13 Supply chain management is increasingly being recognized as the integration of key business processes across the supply chain. For example, Hammer argues that now that companies have implemented processes within the firm, they need to integrate them between firms: Streamlining cross-company processes is the next great frontier for reducing costs, enhancing quality, and speeding operations. It is where this decade’s productivity wars will be fought. The victors will be those companies that are able to take a new approach to business, working closely with partners to design and manage processes that extend across traditional corporate boundaries. They will be the ones that make the leap from efficiency to super efficiency [1]. Monczka and Morgan also focus on the importance of process integration in supply chain management [2]. The piece that seems to be missing from the literature is a comprehensive definition of the processes that constitute supply chain management. How can companies achieve supply chain integration if there is not a common understanding of the key business processes? It seems that in order to build links between supply chain members it is necessary for companies to implement a standard set of supply chain processes. Practitioners and educators need a common definition of supply chain management, and a shared understanding of the processes. We recommend the definition of supply chain management developed and used by The Global Supply Chain Forum: Supply Chain Management is the integration of key business processes from end user through original suppliers that provides products, services, and information that add value for customers and other stakeholders [3]. The Forum members identified eight key processes that need to be implemented within and across firms in the supply chain. 
To date, The Supply Chain Management Processes", "title": "" }, { "docid": "8f957dab2aa6b186b61bc309f3f2b5c3", "text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.", "title": "" }, { "docid": "f509e4c35a4dbc7b7ba88711d8a7b0ea", "text": "The promises and potential of Big Data in transforming digital government services, governments, and the interaction between governments, citizens, and the business sector, are substantial. From \"smart\" government to transformational government, Big Data can foster collaboration; create real-time solutions to challenges in agriculture, health, transportation, and more; and usher in a new era of policy- and decision-making. There are, however, a range of policy challenges to address regarding Big Data, including access and dissemination; digital asset management, archiving and preservation; privacy; and security. This paper selectively reviews and analyzes the U.S. policy context regarding Big Data and offers recommendations aimed at facilitating Big Data initiatives.", "title": "" }, { "docid": "9e44f467f7fbcd2ab1c6886bbb0099c0", "text": "Email has become one of the fastest and most economical forms of communication. However, the increase of email users have resulted in the dramatic increase of spam emails during the past few years. In this paper, email data was classified using four different classifiers (Neural Network, SVM classifier, Naïve Bayesian Classifier, and J48 classifier). The experiment was performed based on different data size and different feature size. The final classification result should be ‘1’ if it is finally spam, otherwise, it should be ‘0’. This paper shows that simple J48 classifier which make a binary tree, could be efficient for the dataset which could be classified as binary tree.", "title": "" }, { "docid": "96508fe94ab9e47534f2cc09b4b186a8", "text": "A 300 GHz frequency synthesizer incorporating a triple-push VCO with Colpitts-based active varactor (CAV) and a divider with three-phase injection is introduced. The CAV provides frequency tunability, enhances harmonic power, and buffers/injects the VCO fundamental signal from/to the divider. The locking range of the divider is vastly improved due to the fact that the three-phase injection introduces larger allowable phase change and injection power into the divider loop. Implemented in 90 nm SiGe BiCMOS, the synthesizer achieves a phase-noise of -77.8 dBc/Hz (-82.5 dBc/Hz) at 100 kHz (1 MHz) offset with a crystal reference, and an overall locking range of 280.32-303.36 GHz (7.9%).", "title": "" }, { "docid": "7816f9fc22866f2c4f12313715076a20", "text": "Image-to-image translation has been made much progress with embracing Generative Adversarial Networks (GANs). However, it’s still very challenging for translation tasks that require high quality, especially at high-resolution and photorealism. 
In this paper, we present Discriminative Region Proposal Adversarial Networks (DRPAN) for highquality image-to-image translation. We decompose the procedure of imageto-image translation task into three iterated steps, first is to generate an image with global structure but some local artifacts (via GAN), second is using our DRPnet to propose the most fake region from the generated image, and third is to implement “image inpainting” on the most fake region for more realistic result through a reviser, so that the system (DRPAN) can be gradually optimized to synthesize images with more attention on the most artifact local part. Experiments on a variety of image-to-image translation tasks and datasets validate that our method outperforms state-of-the-arts for producing high-quality translation results in terms of both human perceptual studies and automatic quantitative measures.", "title": "" } ]
scidocsrr
1c134e6fa0f2c18e9624284fb32eda81
The Fallacy of the Net Promoter Score : Customer Loyalty Predictive Model
[ { "docid": "7401c7f3a396a76e9a806863bef7ff7c", "text": "Complexity surrounding the holistic nature of customer experience has made measuring customer perceptions of interactive service experiences, challenging. At the same time, advances in technology and changes in methods for collecting explicit customer feedback are generating increasing volumes of unstructured textual data, making it difficult for managers to analyze and interpret this information. Consequently, text mining, a method enabling automatic extraction of information from textual data, is gaining in popularity. However, this method has performed below expectations in terms of depth of analysis of customer experience feedback and accuracy. In this study, we advance linguistics-based text mining modeling to inform the process of developing an improved framework. The proposed framework incorporates important elements of customer experience, service methodologies and theories such as co-creation processes, interactions and context. This more holistic approach for analyzing feedback facilitates a deeper analysis of customer feedback experiences, by encompassing three value creation elements: activities, resources, and context (ARC). Empirical results show that the ARC framework facilitates the development of a text mining model for analysis of customer textual feedback that enables companies to assess the impact of interactive service processes on customer experiences. The proposed text mining model shows high accuracy levels and provides flexibility through training. As such, it can evolve to account for changing contexts over time and be deployed across different (service) business domains; we term it an “open learning” model. The ability to timely assess customer experience feedback represents a pre-requisite for successful co-creation processes in a service environment. Accepted as: Ordenes, F. V., Theodoulidis, B., Burton, J., Gruber, T., & Zaki, M. (2014). Analyzing Customer Experience Feedback Using Text Mining A Linguistics-Based Approach. Journal of Service Research, August, 17(3) 278-295.", "title": "" } ]
[ { "docid": "32f55ca936d96b92c1bf38d51cd183b3", "text": "Traditionally, a Certification Authority (CA) is required to sign, manage, verify and revoke public key certificates. Multiple CAs together form the CA-based Public Key Infrastructure (PKI). The use of a PKI forces one to place trust in the CAs, which have proven to be a single point-of-failure on multiple occasions. Blockchain has emerged as a transformational technology that replaces centralized trusted third parties with a decentralized, publicly verifiable, peer-to-peer data store which maintains data integrity among nodes through various consensus protocols. In this paper, we deploy three blockchain-based alternatives to the CA-based PKI for supporting IoT devices, based on Emercoin Name Value Service (NVS), smart contracts by Ethereum blockchain, and Ethereum Light Sync client. We compare these approaches with CA-based PKI and show that they are much more efficient in terms of computational and storage requirements in addition to providing a more robust and scalable PKI.", "title": "" }, { "docid": "48c28572e5eafda1598a422fa1256569", "text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.", "title": "" }, { "docid": "bc0294e230abff5c47d5db0d81172bbc", "text": "Pulse radiolysis experiments were used to characterize the intermediates formed from ibuprofen during electron beam irradiation in a solution of 0.1mmoldm(-3). For end product characterization (60)Co γ-irradiation was used and the samples were evaluated either by taking their UV-vis spectra or by HPLC with UV or MS detection. The reactions of OH resulted in hydroxycyclohexadienyl type radical intermediates. The intermediates produced in further reactions hydroxylated the derivatives of ibuprofen as final products. The hydrated electron attacked the carboxyl group. Ibuprofen degradation is more efficient under oxidative conditions than under reductive conditions. The ecotoxicity of the solution was monitored by Daphnia magna standard microbiotest and Vibrio fischeri luminescent bacteria test. The toxic effect of the aerated ibuprofen solution first increased upon irradiation indicating a higher toxicity of the first degradation products, then decreased with increasing absorbed dose.", "title": "" }, { "docid": "92625cb17367de65a912cb59ea767a19", "text": "With the fast progression of data exchange in electronic way, information security is becoming more important in data storage and transmission. Because of widely using images in industrial process, it is important to protect the confidential image data from unauthorized access. 
In this paper, we analyzed current image encryption algorithms and compression is added for two of them (Mirror-like image encryption and Visual Cryptography). Implementations of these two algorithms have been realized for experimental purposes. The results of analysis are given in this paper. Keywords—image encryption, image cryptosystem, security, transmission.", "title": "" }, { "docid": "b67fadb3f5dca0e74bebc498260f99a4", "text": "The interactive computation paradigm is reviewed and a particular example is extended to form the stochastic analog of a computational process via a transcription of a minimal Turing Machine into an equivalent asynchronous Cellular Automaton with an exponential waiting times distribution of effective transitions. Furthermore, a special toolbox for analytic derivation of recursive relations of important statistical and other quantities is introduced in the form of an Inductive Combinatorial Hierarchy.", "title": "" }, { "docid": "ff9ca485a07dca02434396eca0f0c94f", "text": "Clustering is a NP-hard problem that is used to find the relationship between patterns in a given set of patterns. It is an unsupervised technique that is applied to obtain the optimal cluster centers, especially in partitioned based clustering algorithms. On the other hand, cat swarm optimization (CSO) is a new metaheuristic algorithm that has been applied to solve various optimization problems and it provides better results in comparison to other similar types of algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we are proposing an improved version of the CSO algorithm by using opposition-based learning and the Cauchy mutation operator. We applied the opposition-based learning method to enhance the diversity of the CSO algorithm and we used the Cauchy mutation operator to prevent the CSO algorithm from trapping in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.", "title": "" }, { "docid": "f60e01205f1760c3aac261a05dfd7695", "text": "The recommendation system is one of the core technologies for implementing personalization services. Recommendation systems in ubiquitous computing environment should have the capability of context-awareness. In this research, we developed a music recommendation system, which we shall call C_Music, which utilizes not only the user’s demographics and behavioral patterns but also the user’s context. For a specific user in a specific context, the C_Music recommends the music that the similar users listened most in the similar context. In evaluating the performance of C_Music using a real world data, it outperforms the comparative system that utilizes the user’s demographics and behavioral patterns only.", "title": "" }, { "docid": "dcc55431a2da871c60abfd53ce270bad", "text": "Synchrophasor Standards have evolved since the introduction of the first one, IEEE Standard 1344, in 1995. IEEE Standard C37.118-2005 introduced measurement accuracy under steady state conditions as well as interference rejection. In 2009, the IEEE started a joint project with IEC to harmonize real time communications in IEEE Standard C37.118-2005 with the IEC 61850 communication standard. 
These efforts led to the need to split the C37.118 into 2 different standards: IEEE Standard C37.118.1-2011 that now includes performance of synchrophasors under dynamic systems conditions; and IEEE Standard C37.118.2-2011 Synchrophasor Data Transfer for Power Systems, the object of this paper.", "title": "" }, { "docid": "3371fe8778b813360debc384040c510e", "text": "Medication non-adherence is a major concern in the healthcare industry and has led to increases in health risks and medical costs. For many neurological diseases, adherence to medication regimens can be assessed by observing movement patterns. However, physician observations are typically assessed based on visual inspection of movement and are limited to clinical testing procedures. Consequently, medication adherence is difficult to measure when patients are away from the clinical setting. The authors propose a data mining driven methodology that uses low cost, non-wearable multimodal sensors to model and predict patients' adherence to medication protocols, based on variations in their gait. The authors conduct a study involving Parkinson's disease patients that are \"on\" and \"off\" their medication in order to determine the statistical validity of the methodology. The data acquired can then be used to quantify patients' adherence while away from the clinic. Accordingly, this data-driven system may allow for early warnings regarding patient safety. Using whole-body movement data readings from the patients, the authors were able to discriminate between PD patients on and off medication, with accuracies greater than 97% for some patients using an individually customized model and accuracies of 78% for a generalized model containing multiple patient gait data. The proposed methodology and study demonstrate the potential and effectiveness of using low cost, non-wearable hardware and data mining models to monitor medication adherence outside of the traditional healthcare facility. These innovations may allow for cost effective, remote monitoring of treatment of neurological diseases.", "title": "" }, { "docid": "216698730aa68b3044f03c64b77e0e62", "text": "Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation. Low-voltage and low-power tendencies prevail. A two-electrode biopotential amplifier, designed for low-supply voltage (2.7–5.5 V), is presented. This biomedical amplifier design has high differential and sufficiently low common mode input impedances achieved by means of positive feedback, implemented with an original interface stage. The presented circuit makes use of passive components of popular values and tolerances. The amplifier is intended for use in various two-electrode applications, such as Holter monitors, external defibrillators, ECG monitors and other heart beat sensing biomedical devices.", "title": "" }, { "docid": "dce032d1568e8012053de20fa7063c25", "text": "Radial visualization continues to be a popular design choice in information visualization systems, due perhaps in part to its aesthetic appeal. However, it is an open question whether radial visualizations are truly more effective than their Cartesian counterparts. In this paper, we describe an initial user trial from an ongoing empirical study of the SQiRL (Simple Query interface with a Radial Layout) visualization system, which supports both radial and Cartesian projections of stacked bar charts. 
Participants were shown 20 diagrams employing a mixture of radial and Cartesian layouts and were asked to perform basic analysis on each. The participants' speed and accuracy for both visualization types were recorded. Our initial findings suggest that, in spite of the widely perceived advantages of Cartesian visualization over radial visualization, both forms of layout are, in fact, equally usable. Moreover, radial visualization may have a slight advantage over Cartesian for certain tasks. In a follow-on study, we plan to test users' ability to create, as well as read and interpret, radial and Cartesian diagrams in SQiRL.", "title": "" }, { "docid": "b151343a4c1e365ede70a71880065aab", "text": "Cardiovascular disease (CVD) and depression are common. Patients with CVD have more depression than the general population. Persons with depression are more likely to eventually develop CVD and also have a higher mortality rate than the general population. Patients with CVD, who are also depressed, have a worse outcome than those patients who are not depressed. There is a graded relationship: the more severe the depression, the higher the subsequent risk of mortality and other cardiovascular events. It is possible that depression is only a marker for more severe CVD which so far cannot be detected using our currently available investigations. However, given the increased prevalence of depression in patients with CVD, a causal relationship with either CVD causing more depression or depression causing more CVD and a worse prognosis for CVD is probable. There are many possible pathogenetic mechanisms that have been described, which are plausible and that might well be important. However, whether or not there is a causal relationship, depression is the main driver of quality of life and requires prevention, detection, and management in its own right. Depression after an acute cardiac event is commonly an adjustment disorder than can improve spontaneously with comprehensive cardiac management. Additional management strategies for depressed cardiac patients include cardiac rehabilitation and exercise programmes, general support, cognitive behavioural therapy, antidepressant medication, combined approaches, and probably disease management programmes.", "title": "" }, { "docid": "e45c07c42c1a7f235dd5cb511c131d30", "text": "This paper is about mapping images to continuous output spaces using powerful Bayesian learning techniques. A sparse, semi-supervised Gaussian process regression model (S3GP) is introduced which learns a mapping using only partially labelled training data. We show that sparsity bestows efficiency on the S3GP which requires minimal CPU utilization for real-time operation; the predictions of uncertainty made by the S3GP are more accurate than those of other models leading to considerable performance improvements when combined with a probabilistic filter; and the ability to learn from semi-supervised data simplifies the process of collecting training data. The S3GP uses a mixture of different image features: this is also shown to improve the accuracy and consistency of the mapping. A major application of this work is its use as a gaze tracking system in which images of a human eye are mapped to screen coordinates: in this capacity our approach is efficient, accurate and versatile.", "title": "" }, { "docid": "637ca0ccdc858c9e84ffea1bd3531024", "text": "We propose a method to facilitate search through the storyline of TV series episodes. 
To this end, we use human written, crowdsourced descriptions—plot synopses—of the story conveyed in the video. We obtain such synopses from websites such as Wikipedia and propose various methods to align each sentence of the plot to shots in the video. Thus, the semantic story-based video retrieval problem is transformed into a much simpler text-based search. Finally, we return the set of shots aligned to the sentences as the video snippet corresponding to the query. The alignment is performed by first computing a similarity score between every shot and sentence through cues such as character identities and keyword matches between plot synopses and subtitles. We then formulate the alignment as an optimization problem and solve it efficiently using dynamic programming. We evaluate our methods on the fifth season of a TV series Buffy the Vampire Slayer and show encouraging results for both the alignment and the retrieval of story events.", "title": "" }, { "docid": "b7851d3e08d29d613fd908d930afcd6b", "text": "Word sense embeddings represent a word sense as a low-dimensional numeric vector. While this representation is potentially useful for NLP applications, its interpretability is inherently limited. We propose a simple technique that improves interpretability of sense vectors by mapping them to synsets of a lexical resource. Our experiments with AdaGram sense embeddings and BabelNet synsets show that it is possible to retrieve synsets that correspond to automatically learned sense vectors with Precision of 0.87, Recall of 0.42 and AUC of 0.78.", "title": "" }, { "docid": "e9f9a7c506221bacf966808f54c4f056", "text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.", "title": "" }, { "docid": "282480e24a35a922a6498dbf88f34603", "text": "BACKGROUND\nThere is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches it has not been subjected to Rasch analysis. 
The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes.\n\n\nMETHODS\nThe DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software.\n\n\nRESULTS\nTo achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items.\n\n\nCONCLUSION\nThe results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.", "title": "" }, { "docid": "7b6c039783091260cee03704ce9748d8", "text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.", "title": "" }, { "docid": "21e536e7197ad878db7938c636d1640b", "text": "The Cloud computing has become the fast spread in the field of computing, research and industry in the last few years. As part of the service offered, there are new possibilities to build applications and provide various services to the end user by virtualization through the internet. Task scheduling is the most significant matter in the cloud computing because the user has to pay for resource using on the basis of time, which acts to distribute the load evenly among the system resources by maximizing utilization and reducing task execution Time. Many heuristic algorithms have been existed to resolve the task scheduling problem such as a Particle Swarm Optimization algorithm (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO) and Cuckoo search (CS) algorithms, etc. In this paper, a Dynamic Adaptive Particle Swarm Optimization algorithm (DAPSO) has been implemented to enhance the performance of the basic PSO algorithm to optimize the task runtime by minimizing the makespan of a particular task set, and in the same time, maximizing resource utilization. Also, .a task scheduling algorithm has been proposed to schedule the independent task over the Cloud Computing. The proposed algorithm is considered an amalgamation of the Dynamic PSO (DAPSO) algorithm and the Cuckoo search (CS) algorithm; called MDAPSO. According to the experimental results, it is found that MDAPSO and DAPSO algorithms outperform the original PSO algorithm. 
Also, a comparative study has been done to evaluate the performance of the proposed MDAPSO with respect to the original PSO.", "title": "" }, { "docid": "ad854ceb89e437ca59099453db33fa41", "text": "Semi-supervised learning has recently emerged as a new paradigm in the machine learning community. It aims at exploiting simultaneously labeled and unlabeled data for classification. We introduce here a new semi-supervised algorithm. Its originality is that it relies on a discriminative approach to semisupervised learning rather than a generative approach, as it is usually the case. We present in details this algorithm for a logistic classifier and show that it can be interpreted as an instance of the Classification Expectation Maximization algorithm. We also provide empirical results on two data sets for sentence classification tasks and analyze the behavior of our methods.", "title": "" } ]
scidocsrr
575a09d1ef1455c43c8e2306e8b5f04c
Path Loss Estimation for Wireless Underground Sensor Network in Agricultural Application
[ { "docid": "e062d88651a8bdc637ecf57b4cbb1b2b", "text": "Wireless Underground Sensor Networks (WUSNs) consist of wirelessly connected underground sensor nodes that communicate untethered through soil. WUSNs have the potential to impact a wide variety of novel applications including intelligent irrigation, environment monitoring, border patrol, and assisted navigation. Although its deployment is mainly based on underground sensor nodes, a WUSN still requires aboveground devices for data retrieval, management, and relay functionalities. Therefore, the characterization of the bi-directional communication between a buried node and an aboveground device is essential for the realization of WUSNs. In this work, empirical evaluations of underground-to- aboveground (UG2AG) and aboveground-to-underground (AG2UG) communication are presented. More specifically, testbed experiments have been conducted with commodity sensor motes in a real-life agricultural field. The results highlight the asymmetry between UG2AG and AG2UG communication with distinct behaviors for different burial depths. To combat the adverse effects of the change in wavelength in soil, an ultra wideband antenna scheme is deployed, which increases the communication range by more than 350% compared to the original antennas. The results also reveal that a 21% increase in the soil moisture decreases the communication range by more than 70%. To the best of our knowledge, this is the first empirical study that highlights the effects of the antenna design, burial depth, and soil moisture on both UG2AG and AG2UG communication performance. These results have a significant impact on the development of multi-hop networking protocols for WUSNs.", "title": "" } ]
[ { "docid": "9ee0c9aa2868ea2a12c3a368b4744f35", "text": "To assess the efficacy and feasibility of vertebroplasty and posterior short-segment pedicle screw fixation for the treatment of traumatic lumbar burst fractures. Short-segment pedicle screw instrumentation is a well described technique to reduce and stabilize thoracic and lumbar spine fractures. It is relatively a easy procedure but can only indirectly reduce a fractured vertebral body, and the means of augmenting the anterior column are limited. Hardware failure and a loss of reduction are recognized complications caused by insufficient anterior column support. Patients with traumatic lumbar burst fractures without neurologic deficits were included. After a short segment posterior reduction and fixation, bilateral transpedicular reduction of the endplate was performed using a balloon, and polymethyl methacrylate cement was injected. Pre-operative and post-operative central and anterior heights were assessed with radiographs and MRI. Sixteen patients underwent this procedure, and a substantial reduction of the endplates could be achieved with the technique. All patients recovered uneventfully, and the neurologic examination revealed no deficits. The post-operative radiographs and magnetic resonance images demonstrated a good fracture reduction and filling of the bone defect without unwarranted bone displacement. The central and anterior height of the vertebral body could be restored to 72 and 82% of the estimated intact height, respectively. Complications were cement leakage in three cases without clinical implications and one superficial wound infection. Posterior short-segment pedicle fixation in conjunction with balloon vertebroplasty seems to be a feasible option in the management of lumbar burst fractures, thereby addressing all the three columns through a single approach. Although cement leakage occurred but had no clinical consequences or neurological deficit.", "title": "" }, { "docid": "c4814dea797964107d2178c265eba0b2", "text": "•We propose to combine string kernels (low-level character n-gram features) and word embeddings (high-level semantic features) for automated essay scoring (AED) •TOK, string kernels have never been used for AED •TOK, this is the first successful attempt to combine string kernels and word embeddings •Using a shallow approach, we surpass recent deep learning approaches [Dong et al, EMNLP 2016; Dong et al, CONLL 2017;Tay et al, AAAI 2018]", "title": "" }, { "docid": "0867ccf808dda2d08195a6cbd8f83514", "text": "Existing algorithms for joint clustering and feature selection can be categorized as either global or local approaches. Global methods select a single cluster-independent subset of features, whereas local methods select cluster-specific subsets of features. In this paper, we present a unified probabilistic model that can perform both global and local feature selection for clustering. Our approach is based on a hierarchical beta-Bernoulli prior combined with a Dirichlet process mixture model. We obtain global or local feature selection by adjusting the variance of the beta prior. We provide a variational inference algorithm for our model. In addition to simultaneously learning the clusters and features, this Bayesian formulation allows us to learn both the number of clusters and the number of features to retain. 
Experiments on synthetic and real data show that our unified model can find global and local features and cluster data as well as competing methods of each type.", "title": "" }, { "docid": "2c38b6af96d8393660c4c700b9322f7a", "text": "According to what we call the Principle of Procreative Beneficence (PB),couples who decide to have a child have a significant moral reason to select the child who, given his or her genetic endowment, can be expected to enjoy the most well-being. In the first part of this paper, we introduce PB,explain its content, grounds, and implications, and defend it against various objections. In the second part, we argue that PB is superior to competing principles of procreative selection such as that of procreative autonomy.In the third part of the paper, we consider the relation between PB and disability. We develop a revisionary account of disability, in which disability is a species of instrumental badness that is context- and person-relative.Although PB instructs us to aim to reduce disability in future children whenever possible, it does not privilege the normal. What matters is not whether future children meet certain biological or statistical norms, but what level of well-being they can be expected to have.", "title": "" }, { "docid": "f3e39ffeec0da10294073b9899d8f016", "text": "Nomophobia is considered a modern age phobia introduced to our lives as a byproduct of the interaction between people and mobile information and communication technologies, especially smartphones. This study sought to contribute to the nomophobia research literature by identifying and describing the dimensions of nomophobia and developing a questionnaire to measure nomophobia. Consequently, this study adopted a two-phase, exploratory sequential mixed methods design. The first phase was a qualitative exploration of nomophobia through semi-structured interviews conducted with nine undergraduate students at a large Midwestern university in the U.S. As a result of the first phase, four dimensions of nomophobia were identified: not being able to communicate, losing connectedness, not being able to access information and giving up convenience. The qualitative findings from this initial exploration were then developed into a 20-item nomophobia questionnaire (NMP-Q). In the second phase, the NMP-Q was validated with a sample of 301 undergraduate students. Exploratory factor analysis revealed a four-factor structure for the NMP-Q, corresponding to the dimensions of nomophobia. The NMP-Q was shown to produce valid and reliable scores; and thus, can be used to assess the severity of nomophobia. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "52c07b30d95ab7dc74b5be8a2a60ea91", "text": "Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network. We model this problem as a discretely constrained optimization problem. Borrowing the idea from Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of network, and cast the original hard problem into several subproblems. 
We propose to solve these subproblems using extragradient and iterative quantization algorithms that lead to considerably faster convergency compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-theart approaches when coming to extremely low bit neural network.", "title": "" }, { "docid": "e31d58cd45dec35e439f67b7f53d7f20", "text": "The altered energy metabolism of tumor cells provides a viable target for a non toxic chemotherapeutic approach. An increased glucose consumption rate has been observed in malignant cells. Warburg (Nobel Laureate in medicine) postulated that the respiratory process of malignant cells was impaired and that the transformation of a normal cell to malignant was due to defects in the aerobic respiratory pathways. Szent-Györgyi (Nobel Laureate in medicine) also viewed cancer as originating from insufficient availability of oxygen. Oxygen by itself has an inhibitory action on malignant cell proliferation by interfering with anaerobic respiration (fermentation and lactic acid production). Interestingly, during cell differentiation (where cell energy level is high) there is an increased cellular production of oxidants that appear to provide one type of physiological stimulation for changes in gene expression that may lead to a terminal differentiated state. The failure to maintain high ATP production (high cell energy levels) may be a consequence of inactivation of key enzymes, especially those related to the Krebs cycle and the electron transport system. A distorted mitochondrial function (transmembrane potential) may result. This aspect could be suggestive of an important mitochondrial involvement in the carcinogenic process in addition to presenting it as a possible therapeutic target for cancer. Intermediate metabolic correction of the mitochondria is postulated as a possible non-toxic therapeutic approach for cancer.", "title": "" }, { "docid": "5067a208fd8ad389482fbe49e7c79b1f", "text": "Even though main memory is becoming large enough to fit most OLTP databases, it may not always be the best option. OLTP workloads typically exhibit skewed access patterns where some records are hot (frequently accessed) but many records are cold (infrequently or never accessed). Therefore, it is more economical to store the coldest records on a fast secondary storage device such as a solid-state disk. However, main-memory DBMS have no knowledge of secondary storage, while traditional disk-based databases, designed for workloads where data resides on HDD, introduce too much overhead for the common case where the working set is memory resident.\n In this paper, we propose a simple and low-overhead technique that enables main-memory databases to efficiently migrate cold data to secondary storage by relying on the OS's virtual memory paging mechanism. We propose to log accesses at the tuple level, process the access traces offline to identify relevant access patterns, and then transparently re-organize the in-memory data structures to reduce paging I/O and improve hit rates. The hot/cold data separation is performed on demand and incrementally through careful memory management, without any change to the underlying data structures. 
We validate experimentally the data re-organization proposal and show that OS paging can be efficient: a TPC-C database can grow two orders of magnitude larger than the available memory size without a noticeable impact on performance.", "title": "" }, { "docid": "e934c6e5797148d9cfa6cff5e3bec698", "text": "Ego level is a broad construct that summarizes individual differences in personality development 1 . We examine ego level as it is represented in natural language, using a composite sample of four datasets comprising nearly 44,000 responses. We find support for a developmental sequence in the structure of correlations between ego levels, in analyses of Linguistic Inquiry and Word Count (LIWC) categories 2 and in an examination of the individual words that are characteristic of each level. The LIWC analyses reveal increasing complexity and, to some extent, increasing breadth of perspective with higher levels of development. The characteristic language of each ego level suggests, for example, a shift from consummatory to appetitive desires at the lowest stages, a dawning of doubt at the Self-aware stage, the centrality of achievement motivation at the Conscientious stage, an increase in mutuality and intellectual growth at the Individualistic stage and some renegotiation of life goals and reflection on identity at the highest levels of development. Continuing empirical analysis of ego level and language will provide a deeper understanding of ego development, its relationship with other models of personality and individual differences, and its utility in characterizing people, texts and the cultural contexts that produce them. A linguistic analysis of nearly 44,000 responses to the Washington University Sentence Completion Test elucidates the construct of ego development (personality development through adulthood) and identifies unique linguistic markers of each level of development.", "title": "" }, { "docid": "4bfead8529019e084465d4583685434b", "text": "Task: Paraphrase generation Problem: The existing sequence-to-sequence model tends to memorize the words and the patterns in the training dataset instead of learning the meaning of the words. Therefore, the generated sentences are often grammatically correct but semantically improper. Proposal: a novel model based on the encoder-decoder framework, called Word Embedding Attention Network (WEAN). Our proposed model generates the words by querying distributed word representations (i.e. neural word embeddings), hoping to capturing the meaning of the according words. Example of RNN Generated Summary Text: 昨晚,中联航空成都飞北京一架航班被发现有多人吸烟。后 因天气原因,飞机备降太原机场。有乘客要求重新安检,机长决 定继续飞行,引起机组人员与未吸烟乘客冲突。 Last night, several people were caught to smoke on a flight of China United Airlines from Chendu to Beijing. Later the flight temporarily landed on Taiyuan Airport. Some passengers asked for a security check but were denied by the captain, which led to a collision between crew and passengers. RNN: 中联航空机场发生爆炸致多人死亡。 China United Airlines exploded in the airport, leaving several people dead. Gold: 航班多人吸烟机组人员与乘客冲突。 Several people smoked on a flight which led to a collision between crew and passengers. Proposed Model Our Semantic Relevance Based neural model. It consists of decoder (above), encoder (below) and cosine similarity function. Experiments Dataset: Large Scale Chinese Short Text Summarization Dataset (LCSTS) Results of our model and baseline systems. Our models achieve substantial improvement of all ROUGE scores over baseline systems. 
(W: Word level; C: Character level). Example of SRB Generated Summary Text: 仔细一算,上海的互联网公司不乏成功案例,但最终成为BAT一类巨头的几乎没有,这也能解释为何纳税百强的榜单中鲜少互联网公司的身影。有一类是被并购,比如:易趣、土豆网、PPS、PPTV、一号店等;有一类是数年偏安于细分市场。 With careful calculation, there are many successful Internet companies in Shanghai, but few of them becomes giant company like BAT. This is also the reason why few Internet companies are listed in top hundred companies of paying tax. Some of them are merged, such as Ebay, Tudou, PPS, PPTV, Yihaodian and so on. Others are satisfied with segment market for years. Gold:为什么上海出不了互联网巨头? Why Shanghai comes out no giant company? RNN context:上海的互联网巨头。 Shanghai's giant company. SRB:上海鲜少互联网巨头的身影。 Shanghai has few giant companies. Proposed Model. Text representation: source text representation Vt = h_N; generated summary representation Vs = s_M − h_N. Semantic relevance: cosine similarity function cos(Vs, Vt) = (Vt · Vs) / (||Vt|| ||Vs||). Training: objective function L = −log p(y | x; θ) − λ cos(Vs, Vt). Conclusion: Our work aims at improving the semantic relevance of generated summaries and source texts for Chinese social media text summarization. Our model is able to transform the text and the summary into a dense vector, and encourage high similarity of their representations. Experiments show that our model outperforms baseline systems, and the generated summary has higher semantic relevance.", "title": "" }, { "docid": "e106afaefd5e61f4a5787a7ae0c92934", "text": "Novelty detection is concerned with recognising inputs that differ in some way from those that are usually seen. It is a useful technique in cases where an important class of data is under-represented in the training set. This means that the performance of the network will be poor for those classes. In some circumstances, such as medical data and fault detection, it is often precisely the class that is under-represented in the data, the disease or potential fault, that the network should detect. In novelty detection systems the network is trained only on the negative examples where that class is not present, and then detects inputs that do not fit into the model that it has acquired, that is, members of the novel class. This paper reviews the literature on novelty detection in neural networks and other machine learning techniques, as well as providing brief overviews of the related topics of statistical outlier detection and novelty detection in biological organisms.", "title": "" }, { "docid": "db2953ae2d59e74b8b650963d32a9f1f", "text": "In this paper we describe the design and preliminary evaluation of an energetically-autonomous powered knee exoskeleton to facilitate running. The device consists of a knee brace in which a motorized mechanism actively places and removes a spring in parallel with the knee joint. This mechanism is controlled such that the spring is in parallel with the knee joint from approximately heel-strike to toe-off, and is removed from this state during the swing phase of running. In this way, the spring is intended to store energy at heel-strike which is then released when the heel leaves the ground, reducing the effort required by the quadriceps to exert this energy, thereby reducing the metabolic cost of running.", "title": "" }, { "docid": "aa6dd2e44b992dd7f11c5d82f0b11556", "text": "It is well known that violent video games increase aggression, and that stress increases aggression. Many violent video games can be stressful because enemies are trying to kill players.
The present study investigates whether violent games increase aggression by inducing stress in players. Stress was measured using cardiac coherence, defined as the synchronization of the rhythm of breathing to the rhythm of the heart. We predicted that cardiac coherence would mediate the link between exposure to violent video games and subsequent aggression. Specifically, we predicted that playing a violent video game would decrease cardiac coherence, and that cardiac coherence, in turn, would correlate negatively with aggression. Participants (N = 77) played a violent or nonviolent video game for 20 min. Cardiac coherence was measured before and during game play. After game play, participants had the opportunity to blast a confederate with loud noise through headphones during a reaction time task. The intensity and duration of noise blasts given to the confederate was used to measure aggression. As expected, violent video game players had lower cardiac coherence levels and higher aggression levels than did nonviolent game players. Cardiac coherence, in turn, was negatively related to aggression. This research offers another possible reason why violent games can increase aggression-by inducing stress. Cardiac coherence can be a useful tool to measure stress induced by violent video games. Cardiac coherence has several desirable methodological features as well: it is noninvasive, stable against environmental disturbances, relatively inexpensive, not subject to demand characteristics, and easy to use.", "title": "" }, { "docid": "4d7b4fe86b906baae887c80e872d71a4", "text": "The use of serologic testing and its value in the diagnosis of Lyme disease remain confusing and controversial for physicians, especially concerning persons who are at low risk for the disease. The approach to diagnosing Lyme disease varies depending on the probability of disease (based on endemicity and clinical findings) and the stage at which the disease may be. In patients from endemic areas, Lyme disease may be diagnosed on clinical grounds alone in the presence of erythema migrans. These patients do not require serologic testing, although it may be considered according to patient preference. When the pretest probability is moderate (e.g., in a patient from a highly or moderately endemic area who has advanced manifestations of Lyme disease), serologic testing should be performed with the complete two-step approach in which a positive or equivocal serology is followed by a more specific Western blot test. Samples drawn from patients within four weeks of disease onset are tested by Western blot technique for both immunoglobulin M and immunoglobulin G antibodies; samples drawn more than four weeks after disease onset are tested for immunoglobulin G only. Patients who show no objective signs of Lyme disease have a low probability of the disease, and serologic testing in this group should be kept to a minimum because of the high risk of false-positive results. When unexplained nonspecific systemic symptoms such as myalgia, fatigue, and paresthesias have persisted for a long time in a person from an endemic area, serologic testing should be performed with the complete two-step approach described above.", "title": "" }, { "docid": "678558c9c8d629f98b77a61082bd9b95", "text": "Internet of Things (IoT) makes all objects become interconnected and smart, which has been recognized as the next technological revolution. 
As its typical case, IoT-based smart rehabilitation systems are becoming a better way to mitigate problems associated with aging populations and shortage of health professionals. Although it has come into reality, critical problems still exist in automating design and reconfiguration of such a system enabling it to respond to the patient's requirements rapidly. This paper presents an ontology-based automating design methodology (ADM) for smart rehabilitation systems in IoT. Ontology aids computers in further understanding the symptoms and medical resources, which helps to create a rehabilitation strategy and reconfigure medical resources according to patients' specific requirements quickly and automatically. Meanwhile, IoT provides an effective platform to interconnect all the resources and provides immediate information interaction. Preliminary experiments and clinical trials demonstrate valuable information on the feasibility, rapidity, and effectiveness of the proposed methodology.", "title": "" }, { "docid": "d34b81ac6c521cbf466b4b898486a201", "text": "We introduce the novel task of identifying important citations in scholarly literature, i.e., citations that indicate that the cited work is used or extended in the new effort. We believe this task is a crucial component in algorithms that detect and follow research topics and in methods that measure the quality of publications. We model this task as a supervised classification problem at two levels of detail: a coarse one with classes (important vs. non-important), and a more detailed one with four importance classes. We annotate a dataset of approximately 450 citations with this information, and release it publicly. We propose a supervised classification approach that addresses this task with a battery of features that range from citation counts to where the citation appears in the body of the paper, and show that, our approach achieves a precision of 65% for a recall of 90%.", "title": "" }, { "docid": "3938e2e498724d5cb3c4875439c06a98", "text": "To enable collaboration and communication between humans and agents, this paper investigates learning to acquire commonsense evidence for action justification. In particular, we have developed an approach based on the generative Conditional Variational Autoencoder (CVAE) that models object relations/attributes of the world as latent variables and jointly learns a performer that predicts actions and an explainer that gathers commonsense evidence to justify the action. Our empirical results have shown that, compared to a typical attention-based model, CVAE achieves significantly higher performance in both action prediction and justification. A human subject study further shows that the commonsense evidence gathered by CVAE can be communicated to humans to achieve a significantly higher common ground between humans and agents.", "title": "" }, { "docid": "fb128fdbd2975edee014ad86113595dd", "text": "Recurrent neural networks have become ubiquitous in computing representations of sequential data, especially textual data in natural language processing. In particular, Bidirectional LSTMs are at the heart of several neural models achieving state-of-the-art performance in a wide variety of tasks in NLP. However, BiLSTMs are known to suffer from sequential bias – the contextual representation of a token is heavily influenced by tokens close to it in a sentence. 
We propose a general and effective improvement to the BiLSTM model which encodes each suffix and prefix of a sequence of tokens in both forward and reverse directions. We call our model Suffix Bidirectional LSTM or SuBiLSTM. This introduces an alternate bias that favors long range dependencies. We apply SuBiLSTMs to several tasks that require sentence modeling. We demonstrate that using SuBiLSTM instead of a BiLSTM in existing models leads to improvements in performance in learning general sentence representations, text classification, textual entailment and paraphrase detection. Using SuBiLSTM we achieve new state-of-the-art results for fine-grained sentiment classification and question classification.", "title": "" }, { "docid": "07e69863c4c6531e310b0302d290cbad", "text": "Recently two-stage detectors have surged ahead of single-shot detectors in the accuracy-vs-speed trade-off. Nevertheless single-shot detectors are immensely popular in embedded vision applications. This paper brings singleshot detectors up to the same level as current two-stage techniques. We do this by improving training for the stateof-the-art single-shot detector, RetinaNet, in three ways: integrating instance mask prediction for the first time, making the loss function adaptive and more stable, and including additional hard examples in training. We call the resulting augmented network RetinaMask. The detection component of RetinaMask has the same computational cost as the original RetinaNet, but is more accurate. COCO test-dev results are up to 41.4 mAP for RetinaMask-101 vs 39.1mAP for RetinaNet-101, while the runtime is the same during evaluation. Adding Group Normalization increases the performance of RetinaMask-101 to 41.7 mAP. Code is at: https://github.com/chengyangfu/", "title": "" }, { "docid": "7b851dc49265c7be5199fb887305b0f5", "text": "— A set of customers with known locations and known requirements for some commodity, is to be supplied from a single depot by delivery vehicles o f known capacity. The problem of designing routes for these vehicles so as to minimise the cost of distribution is known as the vehicle routing problem ( VRP). In this paper we catégorise, discuss and extend both exact and approximate methods for solving VRP's, and we give some results on the properties offeasible solutions which help to reduce the computational effort invohed in solving such problems.", "title": "" } ]
scidocsrr
356e43814fbc7d56ff24b4e399dae0cd
An Investigation of Gamification Typologies for Enhancing Learner Motivation
[ { "docid": "372ab07026a861acd50e7dd7c605881d", "text": "This paper reviews peer-reviewed empirical studies on gamification. We create a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances. The literature review covers results, independent variables (examined motivational affordances), dependent variables (examined psychological/behavioral outcomes from gamification), the contexts of gamification, and types of studies performed on the gamified systems. The paper examines the state of current research on the topic and points out gaps in existing literature. The review indicates that gamification provides positive effects, however, the effects are greatly dependent on the context in which the gamification is being implemented, as well as on the users using it. The findings of the review provide insight for further studies as well as for the design of gamified systems.", "title": "" }, { "docid": "84647b51dbbe755534e1521d9d9cf843", "text": "Social Mediator is a forum exploring the ways that HCI research and principles interact---or might interact---with practices in the social media world.<br /><b><i>Joe McCarthy, Editor</i></b>", "title": "" } ]
[ { "docid": "9e3d3783aa566b50a0e56c71703da32b", "text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.", "title": "" }, { "docid": "36c3bd9e1203b9495d92a40c5fa5f2c0", "text": "A 14-year-old boy presented with asymptomatic right hydronephrosis detected on routine yearly ultrasound examination. Previously, he had at least two normal renal ultrasonograms, 4 years after remission of acute myeloblastic leukemia, treated by AML-BFM-93 protocol. A function of the right kidney and no damage on the left was confirmed by a DMSA scan. Right retroperitoneoscopic nephrectomy revealed 3 renal arteries with the lower pole artery lying on the pelviureteric junction. Histologically chronic tubulointerstitial nephritis was detected. In the pathogenesis of this severe unilateral renal damage, we suspect the exacerbation of deleterious effects of cytostatic therapy on kidneys with intermittent hydronephrosis.", "title": "" }, { "docid": "4b0cf6392d84a0cc8ab80c6ed4796853", "text": "This paper introduces the Finite-State TurnTaking Machine (FSTTM), a new model to control the turn-taking behavior of conversational agents. Based on a non-deterministic finite-state machine, the FSTTM uses a cost matrix and decision theoretic principles to select a turn-taking action at any time. We show how the model can be applied to the problem of end-of-turn detection. Evaluation results on a deployed spoken dialog system show that the FSTTM provides significantly higher responsiveness than previous approaches.", "title": "" }, { "docid": "0f1f3dc24dda58837db83817bca53c58", "text": "Deep neural networks have been successfully applied to numerous machine learning tasks because of their impressive feature abstraction capabilities. However, conventional deep networks assume that the training and test data are sampled from the same distribution, and this assumption is often violated in real-world scenarios. 
To address the domain shift or data bias problems, we introduce layer-wise domain correction (LDC), a new unsupervised domain adaptation algorithm which adapts an existing deep network through additive correction layers spaced throughout the network. Through the additive layers, the representations of source and target domains can be perfectly aligned. The corrections that are trained via maximum mean discrepancy, adapt to the target domain while increasing the representational capacity of the network. LDC requires no target labels, achieves state-of-the-art performance across several adaptation benchmarks, and requires significantly less training time than existing adaptation methods.", "title": "" }, { "docid": "e7e24b5c2a7f1b9ec49099ec1abd2969", "text": "In this paper, we propose a novel junction detection method in handwritten images, which uses the stroke-length distribution in every direction around a reference point inside the ink of texts. Our proposed junction detection method is simple and efficient, and yields a junction feature in a natural manner, which can be considered as a local descriptor. We apply our proposed junction detector to writer identification by Junclets which is a codebook-based representation trained from the detected junctions. A new challenging data set which contains multiple scripts (English and Chinese) written by the same writers is introduced to evaluate the performance of the proposed junctions for cross-script writer identification. Furthermore, two other common data sets are used to evaluate our junction-based descriptor. Experimental results show that our proposed junction detector is stable under rotation and scale changes, and the performance of writer identification indicates that junctions are important atomic elements to characterize the writing styles. The proposed junction detector is applicable to both historical documents and modern handwritings, and can be used as well for junction retrieval. & 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c678ea5e9bc8852ec80a8315a004c7f0", "text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.", "title": "" }, { "docid": "290b56471b64e150e40211f7a51c1237", "text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. 
In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.", "title": "" }, { "docid": "9cb567317559ada8baec5b6a611e68d0", "text": "Fungal bioactive polysaccharides deriving mainly from the Basidiomycetes family (and some from the Ascomycetes) and medicinal mushrooms have been well known and widely used in far Asia as part of traditional diet and medicine, and in the last decades have been the core of intense research for the understanding and the utilization of their medicinal properties in naturally produced pharmaceuticals. In fact, some of these biopolymers (mainly β-glucans or heteropolysaccharides) have already made their way to the market as antitumor, immunostimulating or prophylactic drugs. The fact that many of these biopolymers are produced by edible mushrooms makes them also very good candidates for the formulation of novel functional foods and nutraceuticals without any serious safety concerns, in order to make use of their immunomodulating, anticancer, antimicrobial, hypocholesterolemic, hypoglycemic and health-promoting properties. This article summarizes the most important properties and applications of bioactive fungal polysaccharides and discusses the latest developments on the utilization of these biopolymers in human nutrition.", "title": "" }, { "docid": "8ee3d3200ed95cad5ff4ed77c08bb608", "text": "We present a rare case of a non-fatal impalement injury of the brain. A 13-year-old boy was found in his classroom unconsciously lying on floor. His classmates reported that they had been playing, and throwing building bricks, when suddenly the boy collapsed. The emergency physician did not find significant injuries. Upon admission to a hospital, CT imaging revealed a \"blood path\" through the brain. After clinical forensic examination, an impalement injury was diagnosed, with the entry wound just below the left eyebrow. Eventually, the police presented a variety of pointers that were suspected to have caused the injury. Forensic trace analysis revealed human blood on one of the pointers, and subsequent STR analysis linked the blood to the injured boy. Confronted with the results of the forensic examination, the classmates admitted that they had been playing \"sword fights\" using the pointers, and that the boy had been hit during the game. 
The case illustrates the difficulties of diagnosing impalement injuries, and identifying the exact cause of the injury.", "title": "" }, { "docid": "52b46bda93c5d426d59132110d78830c", "text": "We introduce in a unitary way the paradigm of radiofrequency identification (RFID) merged with the technology of Unmanned Aerial Vehicles (UAV) giving rise to RFIDrone devices. Such family comprises the READER-Drone, which is a suitable UAV integrated with an autonomous RFID reader to act as mobile scanner of the environment, and the TAG-Drone, a UAV only equipped with an RFID sensor tag that hence becomes a mobile and automatically re-positioned sensor. We shows some handy electromagnetic models to identify the upper-bound communication performance of RFIDrone in close proximity of a scattering surface and we resume the results of some preliminary open-air experimentation corroborating the theoretical analysis.", "title": "" }, { "docid": "6fa5a58e0f0af633f56418fb4b4808e9", "text": "We report a low-temperature process for covalent bonding of thermal SiO2 to plasma-enhanced chemical vapor deposited (PECVD) SiO2 for Si-compound semiconductor integration. A record-thin interfacial oxide layer of 60 nm demonstrates sufficient capability for gas byproduct diffusion and absorption, leading to a high surface energy of 2.65 J/m after a 2-h 300 C anneal. O2 plasma treatment and surface chemistry optimization in dilute hydrofluoric (HF) solution and NH4OH vapor efficiently suppress the small-size interfacial void density down to 2 voids/cm, dramatically increasing the wafer-bonded device yield. Bonding-induced strain, as determined by x-ray diffraction measurements, is negligible. The demonstration of a 50 mm InP epitaxial layer transferred to a silicon-on-insulator (SOI) substrate shows the promise of the method for wafer-scale applications.", "title": "" }, { "docid": "301bc00e99607569dcba6317ebb2f10d", "text": "Bandwidth and gain enhancement of microstrip patch antennas (MPAs) is proposed using reflective metasurface (RMS) as a superstrate. Two different types of the RMS, namelythe double split-ring resonator (DSR) and double closed-ring resonator (DCR) are separately investigated. The two antenna prototypes were manufactured, measured and compared. The experimental results confirm that the RMS loaded MPAs achieve high-gain as well as bandwidth improvement. The desinged antenna using the RMS as a superstrate has a high-gain of over 9.0 dBi and a wide impedance bandwidth of over 13%. The RMS is also utilized to achieve a thin antenna with a cavity height of 6 mm, which is equivalent to λ/21 at the center frequency of 2.45 GHz. At the same time, the cross polarization level and front-to-back ratio of these antennas are also examined. key words: wideband, high-gain, metamaterial, Fabry-Perot cavity (FPC), frequency selective surface (FSS)", "title": "" }, { "docid": "a3c011d846fed4f910cd3b112767ccc1", "text": "Tooth morphometry is known to be influenced by cultural, environmental and racial factors. Tooth size standards can be used in age and sex determination. One hundred models (50 males & 50 females) of normal occlusion were evaluated and significant correlations (p<0.001) were found to exist between the combined maxillary incisor widths and the maxillary intermolar and interpremolar arch widths. The study establishes the morphometric criterion for premolar and molar indices and quantifies the existence of a statistically significant sexual dimorphism in arch widths (p<0.02). 
INTRODUCTION Teeth are an excellent material in living and non-living populations for anthropological, genetic, odontologic and forensic investigations 1 .Their morphometry is known to be influenced by cultural, environmental and racial factors. The variations in tooth form are a common occurrence & these can be studied by measurements. Out of the two proportionswidth and length, the former is considered to be more important 2 . Tooth size standards can be used in age and sex determination 3 . Whenever it is possible to predict the sex, identification is simplified because then only missing persons of one sex need to be considered. In this sense identification of sex takes precedence over age 4 . Various features like tooth morphology and crown size are characteristic for males and females 5 .The present study on the maxillary arch takes into account the premolar arch width, molar arch width and the combined width of the maxillary central incisors in both the sexes. Pont's established constant ratio's between tooth sizes and arch widths in French population which came to be known as premolar and molar indices 6 .In the ideal dental arch he concluded that the ratio of combined incisor width to transverse arch width was .80 in the premolar area and .64 in the molar area. There has been a recent resurgence of interest in the clinical use of premolar and molar indices for establishing dental arch development objectives 7 . The present study was conducted to ascertain whether or not Pont's Index can be used reliably on north Indians and to establish the norms for the same. MATERIAL AND METHODS SELECTION CRITERIA One hundred subjects, fifty males and fifty females in the age group of 17-21 years were selected for the study as attrition is considered to be minimal for this age group. The study was conducted on the students of Sudha Rustagi College of Dental Sciences & Research, Faridabad, Haryana. INCLUSION CRITERIA Healthy state of gingival and peridontium.", "title": "" }, { "docid": "d229c679dcd4fa3dd84c6040b95fc99c", "text": "This paper reviews the supervised learning versions of the no-free-lunch theorems in a simpli ed form. It also discusses the signi cance of those theorems, and their relation to other aspects of supervised learning.", "title": "" }, { "docid": "771834bc4bfe8231fe0158ec43948bae", "text": "Semantic image segmentation has recently witnessed considerable progress by training deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aides and do not have a global optimum guarantee. We propose the use of the multi-label manifold ranking (MR) method in solving the linear objective energy function in a continuous domain to delineate visual objects and solve these problems. We present a novel embedded single stream optimization method based on the MR model to avoid approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as dual multi-scale manifold ranking (DMSMR) network, that combines the dilated, multi-scale strategies with the single stream MR optimization method in the deep learning architecture to further improve the performance. 
Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach can achieve competitive accuracy without additional aides in an end-to-end manner.", "title": "" }, { "docid": "0520c57f2cd13ce423e656d89c7f3cc0", "text": "The term ‘‘urban stream syndrome’’ describes the consistently observed ecological degradation of streams draining urban land. This paper reviews recent literature to describe symptoms of the syndrome, explores mechanisms driving the syndrome, and identifies appropriate goals and methods for ecological restoration of urban streams. Symptoms of the urban stream syndrome include a flashier hydrograph, elevated concentrations of nutrients and contaminants, altered channel morphology, and reduced biotic richness, with increased dominance of tolerant species. More research is needed before generalizations can be made about urban effects on stream ecosystem processes, but reduced nutrient uptake has been consistently reported. The mechanisms driving the syndrome are complex and interactive, but most impacts can be ascribed to a few major large-scale sources, primarily urban stormwater runoff delivered to streams by hydraulically efficient drainage systems. Other stressors, such as combined or sanitary sewer overflows, wastewater treatment plant effluents, and legacy pollutants (long-lived pollutants from earlier land uses) can obscure the effects of stormwater runoff. Most research on urban impacts to streams has concentrated on correlations between instream ecological metrics and total catchment imperviousness. Recent research shows that some of the variance in such relationships can be explained by the distance between the stream reach and urban land, or by the hydraulic efficiency of stormwater drainage. The mechanisms behind such patterns require experimentation at the catchment scale to identify the best management approaches to conservation and restoration of streams in urban catchments. Remediation of stormwater impacts is most likely to be achieved through widespread application of innovative approaches to drainage design. Because humans dominate urban ecosystems, research on urban stream ecology will require a broadening of stream ecological research to integrate with social, behavioral, and economic research.", "title": "" }, { "docid": "ef55f11664a16933166e55548598b939", "text": "In the paper, we present a new method for classifying documents with rigid geometry. Our approach is based on the fast and robust Viola-Jones object detection algorithm. The advantages of our proposed method are high speed, the possibility of automatic model construction using a training set, and processing of raw source images without any pre-processing steps such as draft recognition, layout analysis or binarisation. Furthermore, our algorithm allows not only to classify documents, but also to detect the placement and orientation of documents within an image.", "title": "" }, { "docid": "99a728e8b9a351734db9b850fe79bd61", "text": "Predicting anchor links across social networks has important implications to an array of applications, including cross-network information diffusion and cross-domain recommendation. One challenging problem is: whether and to what extent we can address the anchor link prediction problem, if only structural information of networks is available. 
Most existing methods, unsupervised or supervised, directly work on networks themselves rather than on their intrinsic structural regularities, and thus their effectiveness is sensitive to the high dimension and sparsity of networks. To offer a robust method, we propose a novel supervised model, called PALE, which employs network embedding with awareness of observed anchor links as supervised information to capture the major and specific structural regularities and further learns a stable cross-network mapping for predicting anchor links. Through extensive experiments on two realistic datasets, we demonstrate that PALE significantly outperforms the state-of-the-art methods.", "title": "" }, { "docid": "7e93c570c957a24ff4eb2132d691a8f1", "text": "Most of video-surveillance based applications use a foreground extraction algorithm to detect interest objects from videos provided by static cameras. This paper presents a benchmark dataset and evaluation process built from both synthetic and real videos, used in the BMC workshop (Background Models Challenge). This dataset focuses on outdoor situations with weather variations such as wind, sun or rain. Moreover, we propose some evaluation criteria and an associated free software to compute them from several challenging testing videos. The evaluation process has been applied for several state of the art algorithms like gaussian mixture models or codebooks.", "title": "" }, { "docid": "b7969a0c307b51dc563a165f267f1c8f", "text": "This study examined the overlap in teen dating violence and bullying perpetration and victimization, with regard to acts of physical violence, psychological abuse, and-for the first time ever-digitally perpetrated cyber abuse. A total of 5,647 youth (51% female, 74% White) from 10 schools participated in a cross-sectional anonymous survey. Results indicated substantial co-occurrence of all types of teen dating violence and bullying. Youth who perpetrated and/or experienced physical, psychological, and cyber bullying were likely to have also perpetrated/experienced physical and sexual dating violence, and psychological and cyber dating abuse.", "title": "" } ]
scidocsrr
39585e28426c98d401ffd3a38dd2b403
Proof Protocol for a Machine Learning Technique Making Longitudinal Predictions in Dynamic Contexts
[ { "docid": "6a2584657154d6c9fd0976c30469349a", "text": "A major challenge for managers in turbulent environments is to make sound decisions quickly. Dynamic capabilities have been proposed as a means for addressing turbulent environments by helping managers extend, modify, and reconfigure existing operational capabilities into new ones that better match the environment. However, because dynamic capabilities have been viewed as an elusive black box, it is difficult for managers to make sound decisions in turbulent environments if they cannot effectively measure dynamic capabilities. Therefore, we first seek to propose a measurable model of dynamic capabilities by conceptualizing, operationalizing, and measuring dynamic capabilities. Specifically, drawing upon the dynamic capabilities literature, we identify a set of capabilities—sensing the environment, learning, coordinating, and integrating— that help reconfigure existing operational capabilities into new ones that better match the environment. Second, we propose a structural model where dynamic capabilities influence performance by reconfiguring existing operational capabilities in the context of new product development (NPD). Data from 180 NPD units support both the measurable model of dynamic capabilities and also the structural model by which dynamic capabilities influence performance in NPD by reconfiguring operational capabilities, particularly in higher levels of environmental turbulence. The study’s implications for managerial decision making in turbulent environments by capturing the elusive black box of dynamic capabilities are discussed. Subject Areas: Decision Making in Turbulent Environments, Dynamic Capabilities, Environmental Turbulence, New Product Development, and Operational Capabilities.", "title": "" } ]
[ { "docid": "7ac6fea42fc232ea8effd09521da32a0", "text": "There appears to be growing consensus that Small Business Enterprises (SBEs) exert a major influence on the economy of Trinidad and Tobago. This study investigated how and to what extent small businesses influenced macroeconomic variables such as employment, growth and productivity in the important sectors of manufacturing and services. The paper used a methodology that traverses the reader though a combination of various literatures, and theories coupled with relevant statistics on small business. This process is aimed at accessing SBEs’ impact on the Trinidad & Tobago’s non-petroleum economy, especially as it relates to economic diversification. Although, there exists significant room for both improvement and expansion of these entities especially in light of the country’s dependence on hydro-carbons – this study provides collaborating evidence that small business enterprises perform an essential role in the future of Trinidad and Tobago’s economy. E c o n o m i c I m p a c t o f S B E s | 2", "title": "" }, { "docid": "fb5f04974fe6cf406ed955fb9ef0cac0", "text": "We motivate the need for a new requirements engineering methodology for systematically helping businesses and users to adopt cloud services and for mitigating risks in such transition. The methodology is grounded in goal oriented approaches for requirements engineering. We argue that Goal Oriented Requirements Engineering (GORE) is a promising paradigm to adopt for goals that are generic and flexible statements of users' requirements, which could be refined, elaborated, negotiated, mitigated for risks and analysed for economics considerations. We describe the steps of the proposed process and exemplify the use of the methodology through an example. The methodology can be used by small to large scale organisations to inform crucial decisions related to cloud adoption.", "title": "" }, { "docid": "6c8e1e77efea6fd82f9ec6146689a011", "text": "BACKGROUND\nHigh incidences of neck pain morbidity are challenging in various situations for populations based on their demographic, physiological and pathological characteristics. Chinese proprietary herbal medicines, as Complementary and Alternative Medicine (CAM) products, are usually developed from well-established and long-standing recipes formulated as tablets or capsules. However, good quantification and strict standardization are still needed for implementation of individualized therapies. The Qishe pill was developed and has been used clinically since 2009. The Qishe pill's personalized medicine should be documented and administered to various patients according to the ancient TCM system, a classification of personalized constitution types, established to determine predisposition and prognosis to diseases as well as therapy and life-style administration. Therefore, we describe the population pharmacokinetic profile of the Qishe pill and compare its metabolic rate in the three major constitution types (Qi-Deficiency, Yin-Deficiency and Blood-Stasis) to address major challenges to individualized standardized TCM.\n\n\nMETHODS/DESIGN\nHealthy subjects (N = 108) selected based on constitutional types will be assessed, and standardized pharmacokinetic protocol will be used for assessing demographic, physiological, and pathological data. Laboratory biomarkers will be evaluated and blood samples collected for pharmacokinetics(PK) analysis and second-generation gene sequencing. 
In single-dose administrations, subjects in each constitutional type cohort (N = 36) will be randomly divided into three groups to receive different Qishe pill doses (3.75, 7.5 and 15 grams). Multiomics, including next generation sequencing, metabolomics, and proteomics, will complement the Qishe pill's multilevel assessment, with cytochrome P450 genes as targets. In a comparison with the general population, a systematic population pharmacokinetic (PopPK) model for the Qishe pill will be established and verified.\n\n\nTRIAL REGISTRATION\nThis study is registered at ClinicalTrials.gov, NCT02294448 .15 November 2014.", "title": "" }, { "docid": "72b2bb4343c81576e208c2f678dae153", "text": "We propose a novel class of statistical divergences called Relaxed Wasserstein (RW) divergence. RW divergence generalizes Wasserstein divergence and is parametrized by a class of strictly convex and differentiable functions. We establish for RW divergence several probabilistic properties, which are critical for the success of Wasserstein divergence. In particular, we show that RW divergence is dominated by Total Variation (TV) and Wasserstein-L divergence, and that RW divergence has continuity, differentiability and duality representation. Finally, we provide a non-asymptotic moment estimate and a concentration inequality for RW divergence. Our experiments on image generation demonstrate that RW divergence is a suitable choice for GANs. The performance of RWGANs with Kullback-Leibler (KL) divergence is competitive with other state-of-the-art GANs approaches. Moreover, RWGANs possess better convergence properties than the existing WGANs with competitive inception scores. To the best of our knowledge, this new conceptual framework is the first to provide not only the flexibility in designing effective GANs scheme, but also the possibility in studying different loss functions under a unified mathematical framework.", "title": "" }, { "docid": "46a0a6e652d9a2dd684bd790db7ca4d5", "text": "An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-layer feed forward neural network. In this paper we studied the ensemble of ELM by using a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavior interpretation of emotions, for cognitive science, and social interactions. This paper presents a method for FER based on the histogram of orientation gradient (HOG) features using an ELM ensemble. First, the HOG features were extracted from the face image by dividing it into a number of small cells. A bagging algorithm was then used to construct many different bags of training data and each of them was trained by using separate ELMs. To recognize the expression of the input face image, HOG features were fed to each trained ELM and the results were combined by using a majority voting scheme. The ELM ensemble using bagging improves the generalized capability of the network significantly. The two available datasets (JAFFE and CK+) of facial expressions were used to evaluate the performance of the proposed classification system. Even the performance of individual ELM was smaller and the ELM ensemble using a bagging algorithm improved the recognition performance significantly. 
Keywords—Bagging, Ensemble Learning, Extreme Learning Machine, Facial Expression Recognition, Histogram of Orientation Gradient", "title": "" }, { "docid": "8f25b3b36031653311eee40c6c093768", "text": "This paper provides a survey of the applications of computers in music teaching. The systems are classified by musical activity rather than by technical approach. The instructional strategies involved and the type of knowledge represented are highlighted and areas for future research are identified.", "title": "" }, { "docid": "99f0826db209b29fbbd38a0ec157954d", "text": "While Physics-Based Simulation (PBS) can highly accurately drape a 3D garment model on a 3D body, it remains too costly for real-time applications, such as virtual try-on. By contrast, inference in a deep network, that is, a single forward pass, is typically quite fast. In this paper, we leverage this property and introduce a novel architecture to fit a 3D garment template to a 3D body model. Specifically, we build upon the recent progress in 3D point-cloud processing with deep networks to extract garment features at varying levels of detail, including point-wise, patch-wise and global features. We then fuse these features with those extracted in parallel from the 3D body, so as to model the cloth-body interactions. The resulting two-stream architecture is trained with a loss function inspired by physics-based modeling, and delivers realistic garment shapes whose 3D points are, on average, less than 1.5cm away from those of a PBS method, while running 40 times faster.", "title": "" }, { "docid": "df80b751fa78e0631ca51f6199cc822c", "text": "OBJECTIVE\nHumane treatment and care of mentally ill people can be viewed from a historical perspective. Intramural (the institution) and extramural (the community) initiatives are not mutually exclusive.\n\n\nMETHOD\nThe evolution of the psychiatric institution in Canada as the primary method of care is presented from an historical perspective. A province-by-province review of provisions for mentally ill people prior to asylum construction reveals that humanitarian motives and a growing sensitivity to social and medical problems gave rise to institutional psychiatry. The influence of Great Britain, France, and, to a lesser extent, the United States in the construction of asylums in Canada is highlighted. The contemporary redirection of the Canadian mental health system toward \"dehospitalization\" is discussed and delineated.\n\n\nRESULTS\nEarly promoters of asylums were genuinely concerned with alleviating human suffering, which led to the separation of mental health services from the community and from those proffered to the criminal and indigent populations. While the results of the past institutional era were mixed, it is hoped that the \"care\" cycle will not repeat itself in the form of undesirable community alternatives.\n\n\nCONCLUSION\nSeverely psychiatrically disabled individuals can be cared for in the community if appropriate services exist.", "title": "" }, { "docid": "32ce2215040d6315f1442719b0fc353a", "text": "Introduction. Internal nasal valve incompetence (INVI) has been treated with various surgical methods. Large, single surgeon case series are lacking, meaning that the evidence supporting a particular technique has been deficient. We present a case series using alar batten grafts to reconstruct the internal nasal valve, all performed by the senior author. Methods. Over a 7-year period, 107 patients with nasal obstruction caused by INVI underwent alar batten grafting. 
Preoperative assessment included the use of nasal strips to evaluate symptom improvement. Visual analogue scale (VAS) assessment of nasal blockage (NB) and quality of life (QOL) both pre- and postoperatively were performed and analysed with the Wilcoxon signed rank test. Results. Sixty-seven patients responded to both pre- and postoperative questionnaires. Ninety-one percent reported an improvement in NB and 88% an improvement in QOL. The greatest improvement was seen at 6 months (median VAS 15 mm and 88 mm resp., with a P value of <0.05 for both). Nasal strips were used preoperatively and are a useful tool in predicting patient operative success in both NB and QOL (odds ratio 2.15 and 2.58, resp.). Conclusions. Alar batten graft insertion as a single technique is a valid technique in treating INVI and produces good outcomes.", "title": "" }, { "docid": "38a5b1d2e064228ec498cf64d29d80e5", "text": "Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy. That is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as real-world challenging tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample-efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks.", "title": "" }, { "docid": "9ee7bba03c4875ee7947b5e13e3d6bfb", "text": "Connectivity has an important role in different discipline s of computer science including computer network. In the des ign of a network, it is important to analyze connections by the le v ls. The structural properties of bipolar fuzzy graphs pro vide a tool that allows for the solution of operations research problems. In this paper, we introduce various types of bipolar fuzzy brid ges, bipolar fuzzy cut-vertices, bipolar fuzzy cycles and bipolar fuzzy trees in bipolar fuzzy graphs, and investigate some of their prope rties. Most of these various types are defined in terms of levels. We also describe omparison of these types.", "title": "" }, { "docid": "837803a140450d594d5693a06ba3be4b", "text": "Allocation of very scarce medical interventions such as organs and vaccines is a persistent ethical challenge. 
We evaluate eight simple allocation principles that can be classified into four categories: treating people equally, favouring the worst-off, maximising total benefits, and promoting and rewarding social usefulness. No single principle is sufficient to incorporate all morally relevant considerations and therefore individual principles must be combined into multiprinciple allocation systems. We evaluate three systems: the United Network for Organ Sharing points systems, quality-adjusted life-years, and disability-adjusted life-years. We recommend an alternative system-the complete lives system-which prioritises younger people who have not yet lived a complete life, and also incorporates prognosis, save the most lives, lottery, and instrumental value principles.", "title": "" }, { "docid": "3953a1a05e064b8211fe006af4595e70", "text": "Sentiment analysis is a common task in natural language processing that aims to detect polarity of a text document (typically a consumer review). In the simplest settings, we discriminate only between positive and negative sentiment, turning the task into a standard binary classification problem. We compare several machine learning approaches to this problem, and combine them to achieve a new state of the art. We show how to use for this task the standard generative language models, which are slightly complementary to the state of the art techniques. We achieve strong results on a well-known dataset of IMDB movie reviews. Our results are easily reproducible, as we publish also the code needed to repeat the experiments. This should simplify further advance of the state of the art, as other researchers can combine their techniques with ours with little effort.", "title": "" }, { "docid": "e75669b68e8736ee6044443108c00eb1", "text": "UNLABELLED\nThe evolution in adhesive dentistry has broadened the indication of esthetic restorative procedures especially with the use of resin composite material. Depending on the clinical situation, some restorative techniques are best indicated. As an example, indirect adhesive restorations offer many advantages over direct techniques in extended cavities. In general, the indirect technique requires two appointments and a laboratory involvement, or it can be prepared chairside in a single visit either conventionally or by the use of computer-aided design/computer-aided manufacturing systems. In both cases, there will be an extra cost as well as the need of specific materials. This paper describes the clinical procedures for the chairside semidirect technique for composite onlay fabrication without the use of special equipments. The use of this technique combines the advantages of the direct and the indirect restoration.\n\n\nCLINICAL SIGNIFICANCE\nThe semidirect technique for composite onlays offers the advantages of an indirect restoration and low cost, and can be the ideal treatment option for extended cavities in case of financial limitations.", "title": "" }, { "docid": "dfe56abd5b8fcd1dc0e3a0dba02832b6", "text": "This paper presents a zero-voltage switching (ZVS) forward-flyback DC-DC converter, which is able to process and deliver power efficiently over very wide input voltage variation. The proposed ZVS forward flyback DC/DC converter is part of a Micro-inverter to perform input voltage regulation to achieving maximum power point tracking for Photo-voltaic panel. The converter operates at boundary between current continuous and discontinuous mode to achieve ZVS. 
Variable frequency with fixed off time is used for reducing core losses of the transformer, achieving high efficiency. In addition, non-dissipative LC snubber circuit is used to get both benefits: 1) the voltage spike is restrained effectively while the switch is turned off with high current in primary side; 2) the main switch still keeps ZVS feature. Finally, experiment results provided from a 200W prototype (30Vdc-50Vdc input, 230Vdc output) validate the feasibility and superior performance of the proposed converter.", "title": "" }, { "docid": "29fc090c5d1e325fd28e6bbcb690fb8d", "text": "Many forensic computing practitioners work in a high workload and low resource environment. With the move by the discipline to seek ISO 17025 laboratory accreditation, practitioners are finding it difficult to meet the demands of validation and verification of their tools and still meet the demands of the accreditation framework. Many agencies are ill-equipped to reproduce tests conducted by organizations such as NIST since they cannot verify the results with their equipment and in many cases rely solely on an independent validation study of other peoples' equipment. This creates the issue of tools in reality never being tested. Studies have shown that independent validation and verification of complex forensic tools is expensive and time consuming, and many practitioners also use tools that were not originally designed for forensic purposes. This paper explores the issues of validation and verification in the accreditation environment and proposes a paradigm that will reduce the time and expense required to validate and verify forensic software tools", "title": "" }, { "docid": "1d9b50bf7fa39c11cca4e864bbec5cf3", "text": "FPGA-based embedded soft vector processors can exceed the performance and energy-efficiency of embedded GPUs and DSPs for lightweight deep learning applications. For low complexity deep neural networks targeting resource constrained platforms, we develop optimized Caffe-compatible deep learning library routines that target a range of embedded accelerator-based systems between 4 -- 8 W power budgets such as the Xilinx Zedboard (with MXP soft vector processor), NVIDIA Jetson TK1 (GPU), InForce 6410 (DSP), TI EVM5432 (DSP) as well as the Adapteva Parallella board (custom multi-core with NoC). For MNIST (28×28 images) and CIFAR10 (32×32 images), the deep layer structure is amenable to MXP-enhanced FPGA mappings to deliver 1.4 -- 5× higher energy efficiency than all other platforms. Not surprisingly, embedded GPU works better for complex networks with large image resolutions.", "title": "" }, { "docid": "2746379baa4c59fae63dc92a9c8057bc", "text": "Twenty-five Semantic Web and Database researchers met at the 2011 STI Semantic Summit in Riga, Latvia July 6-8, 2011[1] to discuss the opportunities and challenges posed by Big Data for the Semantic Web, Semantic Technologies, and Database communities. The unanimous conclusion was that the greatest shared challenge was not only engineering Big Data, but also doing so meaningfully. The following are four expressions of that challenge from different perspectives.", "title": "" }, { "docid": "8bb5794d38528ab459813ab1fa484a69", "text": "We introduce the ACL Anthology Network (AAN), a manually curated networked database of citations, collaborations, and summaries in the field of Computational Linguistics. 
We also present a number of statistics about the network including the most cited authors, the most central collaborators, as well as network statistics about the paper citation, author citation, and author collaboration networks.", "title": "" } ]
scidocsrr
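The ACL Anthology Network passage at the end of the row above summarizes citation and collaboration statistics (most-cited authors, most central collaborators, paper and author citation networks). As a rough, hedged illustration of how such statistics can be tabulated, the Python sketch below counts paper citations, author citations, and co-author collaborations over a few invented records; the paper ids, author names, and record layout are assumptions made for illustration only and do not reflect the AAN's actual schema or data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy records standing in for AAN-style metadata:
# each paper has a list of authors and a list of cited paper ids.
papers = {
    "P1": {"authors": ["A. Smith", "B. Chen"], "cites": ["P2", "P3"]},
    "P2": {"authors": ["B. Chen"],             "cites": ["P3"]},
    "P3": {"authors": ["C. Diaz", "A. Smith"], "cites": []},
}

# Paper citation counts: in-degree of the paper citation graph.
paper_citations = Counter(c for p in papers.values() for c in p["cites"])

# Author citation counts: every author of a cited paper receives the citation.
author_citations = Counter(
    a for p in papers.values() for c in p["cites"] for a in papers[c]["authors"]
)

# Collaboration counts: each unordered co-author pair on a paper is one edge
# in the author collaboration network.
collaborations = Counter(
    tuple(sorted(pair))
    for p in papers.values()
    for pair in combinations(set(p["authors"]), 2)
)

print(paper_citations.most_common(3))
print(author_citations.most_common(3))
print(collaborations.most_common(3))
```

Centrality measures beyond raw counts (for example PageRank over the citation graph) would follow the same pattern of first building an edge list from the curated records.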
77094e488d966e909bfbe54679c7923a
Investigating learners' attitudes toward virtual reality learning environments: Based on a constructivist approach
[ { "docid": "00e13bca1066e54907394b75cb40d0c0", "text": "This paper explores educational uses of virtual learning environment (VLE) concerned with issues of learning, training and entertainment. We analyze the state-of-art research of VLE based on virtual reality and augmented reality. Some examples for the purpose of education and simulation are described. These applications show that VLE can be means of enhancing, motivating and stimulating learners’ understanding of certain events, especially those for which the traditional notion of instructional learning have proven inappropriate or difficult. Furthermore, the users can learn in a quick and happy mode by playing in the virtual environments. r 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d7a31875b0d05c2bbd3522248d45ffbb", "text": "The trend of using e-learning as a learning and/or teaching tool is now rapidly expanding into education. Although e-learning environments are popular, there is minimal research on instructors’ and learners’ attitudes toward these kinds of learning environments. The purpose of this study is to explore instructors’ and learners’ attitudes toward e-learning usage. Accordingly, 30 instructors and 168 college students are asked to answer two different questionnaires for investigating their perceptions. After statistical analysis, the results demonstrate that instructors have very positive perceptions toward using e-learning as a teaching assisted tool. Furthermore, behavioral intention to use e-learning is influenced by perceived usefulness and self-efficacy. Regarding to learners’ attitudes, self-paced, teacher-led, and multimedia instruction are major factors to affect learners’ attitudes toward e-learning as an effective learning tool. Based on the findings, this research proposes guidelines for developing e-learning environments. 2006 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "01b73e9e8dbaf360baad38b63e5eae82", "text": "Received: 29 September 2009 Revised: 19 April 2010 2nd Revision: 5 July 2010 3rd Revision: 30 November 2010 Accepted: 8 December 2010 Abstract Throughout the world, sensitive personal information is now protected by regulatory requirements that have translated into significant new compliance oversight responsibilities for IT managers who have a legal mandate to ensure that individual employees are adequately prepared and motivated to observe policies and procedures designed to ensure compliance. This research project investigates the antecedents of information privacy policy compliance efficacy by individuals. Using Health Insurance Portability and Accountability Act compliance within the healthcare industry as a practical proxy for general organizational privacy policy compliance, the results of this survey of 234 healthcare professionals indicate that certain social conditions within the organizational setting (referred to as external cues and comprising situational support, verbal persuasion, and vicarious experience) contribute to an informal learning process. This process is distinct from the formal compliance training procedures and is shown to influence employee perceptions of efficacy to engage in compliance activities, which contributes to behavioural intention to comply with information privacy policies. Implications for managers and researchers are discussed. European Journal of Information Systems (2011) 20, 267–284. doi:10.1057/ejis.2010.72; published online 25 January 2011", "title": "" }, { "docid": "2e21f67f01a37394a9f208f7e6d8696e", "text": "We present a new neural sequence-tosequence model for extractive summarization called SWAP-NET (Sentences and Words from Alternating Pointer Networks). Extractive summaries comprising a salient subset of input sentences, often also contain important key words. Guided by this principle, we design SWAP-NET that models the interaction of key words and salient sentences using a new twolevel pointer network based architecture. SWAP-NET identifies both salient sentences and key words in an input document, and then combines them to form the extractive summary. Experiments on large scale benchmark corpora demonstrate the efficacy of SWAP-NET that outperforms state-of-the-art extractive summarizers.", "title": "" }, { "docid": "6c1317ef88110756467a10c4502851bb", "text": "Deciding query equivalence is an important problem in data management with many practical applications. Solving the problem, however, is not an easy task. While there has been a lot of work done in the database research community in reasoning about the semantic equivalence of SQL queries, prior work mainly focuses on theoretical limitations. In this paper, we present COSETTE, a fully automated prover that can determine the equivalence of SQL queries. COSETTE leverages recent advances in both automated constraint solving and interactive theorem proving, and returns a counterexample (in terms of input relations) if two queries are not equivalent, or a proof of equivalence otherwise. Although the problem of determining equivalence for arbitrary SQL queries is undecidable, our experiments show that COSETTE can determine the equivalences of a wide range of queries that arise in practice, including conjunctive queries, correlated queries, queries with outer joins, and queries with aggregates. 
Using COSETTE, we have also proved the validity of magic set rewrites, and confirmed various real-world query rewrite errors, including the famous COUNT bug. We are unaware of any prior tool that can automatically determine the equivalences of a broad range of queries as COSETTE, and believe that our tool represents a major step towards building provably-correct query optimizers for real-world database systems.", "title": "" }, { "docid": "0a0e4219aa1e20886e69cb1421719c4e", "text": "A wearable two-antenna system to be integrated on a life jacket and connected to Personal Locator Beacons (PLBs) of the Cospas-Sarsat system is presented. Each radiating element is a folded meandered dipole resonating at 406 MHz and includes a planar reflector realized by a metallic foil. The folded dipole and the metallic foil are attached on the opposite sides of the floating elements of the life jacket itself, so resulting in a mechanically stable antenna. The metallic foil improves antenna radiation properties even when the latter is close to the sea surface, shields the human body from EM radiation and makes the radiating system less sensitive to the human body movements. Prototypes have been realized and a measurement campaign has been carried out. The antennas show satisfactory performance also when the life jacket is worn by a user. The proposed radiating elements are intended for the use in a two-antenna scheme in which the transmitter can switch between them in order to meet Cospas-Sarsat system specifications. Indeed, the two antennas provide complementary radiation patterns so that Cospas-Sarsat requirements (satellite constellation coverage and EIRP profile) are fully satisfied.", "title": "" }, { "docid": "8c2b0e93eae23235335deacade9660f0", "text": "We design and implement a simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.\n Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2-40 soundness error, the communication complexity is roughly 44KB (or less than 34KB under a plausible conjecture), the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016).\n The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For 2-40 soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. Our efficiency advantages become even bigger in an amortized setting, where several instances need to be proven simultaneously.\n Our zero-knowledge protocol is obtained by applying an optimized version of the general transformation of Ishai et al. (STOC 2007) to a variant of the protocol for secure multiparty computation of Damgard and Ishai (Crypto 2006). 
It can be viewed as a simple zero-knowledge interactive PCP based on \"interleaved\" Reed-Solomon codes.", "title": "" }, { "docid": "6f26f4409d418fe69b1d43ec9b4f8b39", "text": "Automatic understanding of human affect using visual signals is of great importance in everyday human–machine interactions. Appraising human emotional states, behaviors and reactions displayed in real-world settings, can be accomplished using latent continuous dimensions (e.g., the circumplex model of affect). Valence (i.e., how positive or negative is an emotion) and arousal (i.e., power of the activation of the emotion) constitute popular and effective representations for affect. Nevertheless, the majority of collected datasets this far, although containing naturalistic emotional states, have been captured in highly controlled recording conditions. In this paper, we introduce the Aff-Wild benchmark for training and evaluating affect recognition algorithms. We also report on the results of the First Affect-in-the-wild Challenge (Aff-Wild Challenge) that was recently organized in conjunction with CVPR 2017 on the Aff-Wild database, and was the first ever challenge on the estimation of valence and arousal in-the-wild. Furthermore, we design and extensively train an end-to-end deep neural architecture which performs prediction of continuous emotion dimensions based on visual cues. The proposed deep learning architecture, AffWildNet, includes convolutional and recurrent neural network layers, exploiting the invariant properties of convolutional features, while also modeling temporal dynamics that arise in human behavior via the recurrent layers. The AffWildNet produced state-of-the-art results on the Aff-Wild Challenge. We then exploit the AffWild database for learning features, which can be used as priors for achieving best performances both for dimensional, as well as categorical emotion recognition, using the RECOLA, AFEW-VA and EmotiW 2017 datasets, compared to all other methods designed for the same goal. The database and emotion recognition models are available at http://ibug.doc.ic.ac.uk/resources/first-affect-wild-challenge .", "title": "" }, { "docid": "4added2e0e6ba286a1ef4bed1dfd6614", "text": "Estimating the mechanisms that connect explanatory variables with the explained variable, also known as “mediation analysis,” is central to a variety of social-science fields, especially psychology, and increasingly to fields like epidemiology. Recent work on the statistical methodology behind mediation analysis points to limitations in earlier methods. We implement in Stata computational approaches based on recent developments in the statistical methodology of mediation analysis. In particular, we provide functions for the correct calculation of causal mediation effects using several different types of parametric models, as well as the calculation of sensitivity analyses for violations to the key identifying assumption required for interpreting mediation results causally.", "title": "" }, { "docid": "29ce9730d55b55b84e195983a8506e5c", "text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. 
More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.", "title": "" }, { "docid": "9cedc3f1a04fa51fb8ce1cf0cf01fbc3", "text": "OBJECTIVES:The objective of this study was to provide updated explicit and relevant consensus statements for clinicians to refer to when managing hospitalized adult patients with acute severe ulcerative colitis (UC).METHODS:The Canadian Association of Gastroenterology consensus group of 23 voting participants developed a series of recommendation statements that addressed pertinent clinical questions. An iterative voting and feedback process was used to do this in conjunction with systematic literature reviews. These statements were brought to a formal consensus meeting held in Toronto, Ontario (March 2010), when each statement was discussed, reformulated, voted upon, and subsequently revised until group consensus (at least 80% agreement) was obtained. The modified GRADE (Grading of Recommendations Assessment, Development, and Evaluation) criteria were used to rate the strength of recommendations and the quality of evidence.RESULTS:As a result of the iterative process, consensus was reached on 21 statements addressing four themes (General considerations and nutritional issues, Steroid use and predictors of steroid failure, Cyclosporine and infliximab, and Surgical issues).CONCLUSIONS:Key recommendations for the treatment of hospitalized patients with severe UC include early escalation to second-line medical therapy with either infliximab or cyclosporine in individuals in whom parenteral steroids have failed after 72 h. These agents should be used in experienced centers where appropriate support is available. Sequential therapy with cyclosporine and infliximab is not recommended. Surgery is an option when first-line steroid therapy fails, and is indicated when second-line medical therapy fails and/or when complications arise during the hospitalization.", "title": "" }, { "docid": "97c6914243c061491bc27837d2fdae2d", "text": "During the last two years, the METIS project (\"Mobile and wireless communications Enablers for the Twenty-twenty Information Society\") has been conducting research on 5G-enabling technology components. This paper provides a summary of METIS work on 5G architectures. The architecture description is presented from different viewpoints. First, a functional architecture is presented that may lay a foundation for development of first novel 5G network functions. It is based on functional decomposition of most relevant 5G technology components provided by METIS. The logical orchestration & control architecture depicts the realization of flexibility, scalability and service orientation needed to fulfil diverse 5G requirements. 
Finally, a third viewpoint reveals deployment aspects and function placement options for 5G.", "title": "" }, { "docid": "e4dbca720626a29f60a31ed9d22c30aa", "text": "Text classification is the process of classifying documents into predefined categories based on their content. It is the automated assignment of natural language texts to predefined categories. Text classification is the primary requirement of text retrieval systems, which retrieve texts in response to a user query, and text understanding systems, which transform text in some way such as producing summaries, answering questions or extracting data. Existing supervised learning algorithms to automatically classify text need sufficient documents to learn accurately. This paper presents a new algorithm for text classification using data mining that requires fewer documents for training. Instead of using words, word relation i.e. association rules from these words is used to derive feature set from pre-classified text documents. The concept of Naïve Bayes classifier is then used on derived features and finally only a single concept of Genetic Algorithm has been added for final classification. A system based on the proposed algorithm has been implemented and tested. The experimental results show that the proposed system works as a successful text classifier.", "title": "" }, { "docid": "82bb1d74e1e2d4b7b412b2a921f5eaad", "text": "This paper addresses the topic of community crime prevention. As in many other areas of public policy, there are widely divergent approaches that might be taken to crime prevention focused on local neighbourhoods. In what follows four major forms of prevention relevant to the Australian context will be discussed, with particular emphasis being placed on an approach to crime prevention which enhances the ability of a community to bring together, in an integrative way, divergent groups which can easily become isolated from each other as a result of contemporary economic and urban forces.", "title": "" }, { "docid": "fea4f7992ec61eaad35872e3a800559c", "text": "The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. Mismatches often occur between the learning styles of students in a language class and the teaching style of the instructor, with unfortunate effects on the quality of the students’ learning and on their attitudes toward the class and the subject. This paper defines several dimensions of learning style thought to be particularly relevant to foreign and second language education, outlines ways in which certain learning styles are favored by the teaching styles of most language instructors, and suggests steps to address the educational needs of all students in foreign language classes. Students learn in many ways—by seeing and hearing; reflecting and acting; reasoning logically and intuitively; memorizing and visualizing. Teaching methods also vary. Some instructors lecture, others demonstrate or discuss; some focus on rules and others on examples; some emphasize memory and others understanding. How much a given student learns in a class is governed in part by that student’s native ability and prior preparation but also by the compatibility of his or her characteristic approach to learning and the instructor’s characteristic approach to teaching. The ways in which an individual characteristically acquires, retains, and retrieves information are collectively termed the individual’s learning style. 
Learning styles have been extensively discussed in the educational psychology literature (Claxton & Murrell 1987; Schmeck 1988) and specifically in the context Richard M. Felder (Ph.D., Princeton University) is the Hoechst Celanese Professor of Chemical Engineering at North Carolina State University,", "title": "" }, { "docid": "326b8d8d5d128706796d3107a6c2c941", "text": "Capturing security and privacy requirements in the early stages of system development is essential for creating sufficient public confidence in order to facilitate the adaption of novel systems such as the Internet of Things (IoT). However, security and privacy requirements are often not handled properly due to their wide variety of facets and aspects which make them difficult to formulate. In this study, security-related requirements of IoT heterogeneous systems are decomposed into a taxonomy of quality attributes, and existing security mechanisms and policies are proposed to alleviate the identified forms of security attacks and to reduce the vulnerabilities in the future development of the IoT systems. Finally, the taxonomy is applied on an IoT smart grid scenario.", "title": "" }, { "docid": "42fd940e239ed3748b007fde8b583b25", "text": "The ImageCLEF’s plant identification task provides a testbed for the system-oriented evaluation of plant identification, more precisely on the 126 tree species identification based on leaf images. Three types of image content are considered: Scan, Scan-like (leaf photographs with a white uniform background), and Photograph (unconstrained leaf with natural background). The main originality of this data is that it was specifically built through a citizen sciences initiative conducted by Tela Botanica, a French social network of amateur and expert botanists. This makes the task closer to the conditions of a real-world application. This overview presents more precisely the resources and assessments of task, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results. With a total of eleven groups from eight countries and with a total of 30 runs submitted, involving distinct and original methods, this second year pilot task confirms Image Retrieval community interest for biodiversity and botany, and highlights further challenging studies in plant identification.", "title": "" }, { "docid": "48019a3106c6d74e4cfcc5ac596d4617", "text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. 
Implications and possibilities for applications are discussed.", "title": "" }, { "docid": "5c5c21bd0c50df31c6ccec63d864568c", "text": "Intellectual Property issues (IP) is a concern that refrains companies to cooperate in whatever of Open Innovation (OI) processes. Particularly, SME consider open innovation as uncertain, risky processes. Despite the opportunities that online OI platforms offer, SMEs have so far failed to embrace them, and proved reluctant to OI. We intend to find whether special collaborative spaces that facilitate a sort of preventive idea claiming, explicit claiming evolution of defensive publication, as so far patents and publications for prevailing innovation, can be the right complementary instruments in OI as to when stronger IP protection regimes might drive openness by SME in general. These spaces, which we name NIR (Networking Innovation Rooms), are a practical, smart paradigm to boost OI for SME. There users sign smart contracts as NDA which takes charge of timestamping any IP disclosure or creation and declares what corrective actions (if they might apply) might be taken for unauthorised IP usage or disclosure of any of the NDA signers. With Blockchain, a new technology emerges which enables decentralised, fine-grained IP management for OI.", "title": "" }, { "docid": "7e6474de31f7d9cdee552a50a09bbeae", "text": "BACKGROUND Demographics in America are beginning to shift toward an older population, with the number of patients aged 65 years or older numbering approximately 41.4 million in 2011, which represents an increase of 18% since 2000. Within the aging population, the incidence of vocal disorders is estimated to be between 12% and 35%. In a series reported by Davids et al., 25% of patients over age 65 years presenting with a voice complaint were found to have vocal fold atrophy (presbylarynges), where the hallmark physical signs are vocal fold bowing with an increased glottic gap and prominent vocal processes. The epithelial and lamina propria covering of the vocal folds begin to exhibit changes due to aging. In older adults, the collagen of the vocal folds lose their “wicker basket” type of organization, which leads to more disarrayed segments throughout all the layers of the lamina propria, and there is also a loss of hyaluronic acid and elastic fibers. With this loss of the viscoelastic properties and subsequent vocal fold thinning, along with thyroarytenoid muscle atrophy, this leads to the classic bowed membranous vocal fold. Physiologically, these anatomical changes to the vocal folds leads to incomplete glottal closure, air escape, changes in vocal fold tension, altered fundamental frequency, and decreased vocal endurance. Women’s voices will often become lower pitched initially and then gradually higher pitched and shrill, whereas older men’s voices will gradually become more high pitched as the vocal folds lengthen to try and achieve approximation. LITERATURE REVIEW The literature documents that voice therapy is a useful tool in the treatment of presbyphonia and improves voice-related quality of life. The goal of therapy is based on a causal model that suggests targeting the biological basis of the condition—degenerative respiratory and laryngeal changes—as a result of sarcopenia. Specifically, the voice therapy protocol should capitalize on high-intensity phonatory exercises to overload the respiratory and laryngeal system and improve vocal loudness, reduce vocal effort, and increase voice-related quality of life (VRQoL). 
In a small prospective, randomized, controlled trial, Ziegler et al. demonstrated that patients with vocal atrophy undergoing therapy—phonation resistance training exercise (PhoRTE) or vocal function exercise (VFE)—had a significantly improved VRQoL score preand post-therapy (88.5–95.0, P 5.049 for PhoRTE and 80.8–87.5, P 5.054 for VFE), whereas patients in the nonintervention group saw no improvement (87.5–91.5, P 5.70). Patients in the PhoRTE group exhibited a significant decrease in perceived phonatory effort, but not patients undergoing VFE or no therapy. Injection laryngoplasty (IL), initially developed for restoration of glottic competence in vocal fold paralysis, has also been increasingly used in treatment of the aging voice. A number of materials have been used over the years including Teflon, silicone, fat, Gelfoam, collagen, hyaluronic acid, carboxymethylcellulose, and calcium hydroxylapatite. Some of these are limited by safety or efficacy concerns, and some of them are not long lasting. With the growing use of in-office IL, the ease of use has made this technique more popular because of the ability to avoid general anesthesia in a sometimes already frail patient population. Davids et al. also examined changes in VRQoL scores for patients undergoing IL and demonstrated a significant improvement preand post-therapy (34.8 vs. 22, P<.0001). Due to a small sample size, however, the authors were unable to make any direct comparisons between patients undergoing voice therapy versus IL. Medialization thyroplasty (MT) remains as the otolaryngologist’s permanent technique for addressing the glottal insufficiency found in the aging larynx. In the same fashion as IL, the technique developed as a way to address the paralytic vocal fold and can use either Silastic or Gore-Tex implants. Postma et al. looked at the From the Emory Voice Center, Department of Otolaryngology/ Head and Neck Surgery, Emory University School of Medicine, Atlanta, Georgia, U.S.A. This work was performed at the Emory Voice Center in the Department of Otolaryngology/Head and Neck Surgery at the Emory School of Medicine in Atlanta, Georgia. This work was funded internally by the Emory Voice Center. The authors have no other funding, financial relationships, or conflicts of interest to disclose. Joseph Bradley has worked as a consultant for Merz Aesthetics teaching a vocal fold injection laryngoplasty course. Send correspondence to Michael M. Johns, III, MD, Emory University School of Medicine, 550 Peachtree St. NE, 9th Floor, Suite 4400, Atlanta, GA 30308. E-mail: [email protected]", "title": "" }, { "docid": "4571c73ba3182ad93d1fbcb9b5827dfc", "text": "The use of consumer IT at work, known as \"IT Consumerization\", is changing the dynamic between employees and IT departments. Employees are empowered and now have greater freedom but also responsibility as to what technologies to use at work and how. At the same time, organizational factors such as rules on technology use still exert considerable influence on employees' actions. Drawing on Structuration Theory, we frame this interaction between organization and employee as one of structure and agency. In the process, we pursue an explorative approach and rely on qualitative data from interviews with public-sector employees. We identify four organizational structures that influence people's behavior with respect to IT Consumerization: policies, equipment, tasks and authority. 
By spotlighting the mutual influence of these organizational structures and Consumerization-related behavior, we show Giddens's duality of structure in action and demonstrate its relevance to the study of IT Consumerization.", "title": "" } ]
scidocsrr
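Among the negative passages in the row above is a text classifier that derives its features from word associations rather than single words and scores them with a Naive Bayes model, with a genetic-algorithm step at the end. The sketch below is a loose, simplified illustration of that idea: it uses raw word co-occurrence pairs as a stand-in for mined association rules and a Laplace-smoothed multinomial Naive Bayes scorer. The toy documents and labels are invented, and the paper's association-rule mining (support and confidence thresholds) and genetic-algorithm stages are deliberately omitted.

```python
from collections import Counter
from itertools import combinations
import math

def pair_features(text):
    # Word co-occurrence pairs within a document stand in for mined association rules.
    words = sorted(set(text.lower().split()))
    return [" & ".join(p) for p in combinations(words, 2)]

# Tiny invented training set: label -> example documents.
train = {
    "sports": ["the team won the match", "a great match for the home team"],
    "politics": ["the minister won the vote", "parliament passed the new law"],
}

prior, counts, vocab = {}, {}, set()
total_docs = sum(len(docs) for docs in train.values())
for label, docs in train.items():
    prior[label] = len(docs) / total_docs
    counts[label] = Counter(f for d in docs for f in pair_features(d))
    vocab.update(counts[label])

def classify(text):
    feats = pair_features(text)
    scores = {}
    for label in train:
        n = sum(counts[label].values())
        # Multinomial Naive Bayes with Laplace smoothing over the pair-feature vocabulary.
        scores[label] = math.log(prior[label]) + sum(
            math.log((counts[label][f] + 1) / (n + len(vocab))) for f in feats
        )
    return max(scores, key=scores.get)

print(classify("the team played a match"))  # expected to lean towards "sports"
```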
26f450aebd739886384c0bb8c543bc2d
Agile Project Management: A Case Study of a Virtual Research Environment Development Project
[ { "docid": "67d704317471c71842a1dfe74ddd324a", "text": "Agile software development methods have caught the attention of software engineers and researchers worldwide. Scientific research is yet scarce. This paper reports results from a study, which aims to organize, analyze and make sense out of the dispersed field of agile software development methods. The comparative analysis is performed using the method's life-cycle coverage, project management support, type of practical guidance, fitness-for-use and empirical evidence as the analytical lenses. The results show that agile software development methods, without rationalization, cover certain/different phases of the software development life-cycle and most of the them do not offer adequate support for project management. Yet, many methods still attempt to strive for universal solutions (as opposed to situation appropriate) and the empirical evidence is still very limited Based on the results, new directions are suggested In principal it is suggested to place emphasis on methodological quality -- not method quantity.", "title": "" } ]
[ { "docid": "1ab0f5075fc35f07b7e79786f459f7ba", "text": "In this paper, the impact of the response of a wind farm (WF) on the operation of a nearby grid is investigated during network disturbances. Only modern variable speed wind turbines are treated in this work. The new E.ON Netz fault response code for WF is taken as the base case for the study. The results found in this paper are that the performance of the used Cigre 32-bus test system during disturbances is improved when the WF is complying with the E.ON code compared to the traditional unity power factor operation. Further improvements are found when the slope of the reactive current support line is increased from the E.ON specified value. In addition, a larger converter of a variable speed wind turbine is exploited that is to be used in order to improve the stability of a nearby grid by extending the reactive current support. By doing so, it is shown in this paper that the voltage profile at the point of common coupling (pcc) as well as the transient stability of the grid are improved compared to the original E.ON code, in addition to the improvements already achieved by using the E.ON code in its original form. Finally, regarding the utilization of a larger converter, it is important to point out that the possible reactive power in-feed into the pcc from an offshore WF decreases with increasing cable length during network faults, making it difficult to support the grid with extra reactive power during disturbances.", "title": "" }, { "docid": "e6e0452c62ec807df99aadf660e3193d", "text": "Bacteria have been widely used as starter cultures in the food industry, notably for the fermentation of milk into dairy products such as cheese and yogurt. Lactic acid bacteria used in food manufacturing, such as lactobacilli, lactococci, streptococci, Leuconostoc, pediococci, and bifidobacteria, are selectively formulated based on functional characteristics that provide idiosyncratic flavor and texture attributes, as well as their ability to withstand processing and manufacturing conditions. Unfortunately, given frequent viral exposure in industrial environments, starter culture selection and development rely on defense systems that provide resistance against bacteriophage predation, including restriction-modification, abortive infection, and recently discovered CRISPRs (clustered regularly interspaced short palindromic repeats). CRISPRs, together with CRISPR-associated genes (cas), form the CRISPR/Cas immune system, which provides adaptive immunity against phages and invasive genetic elements. The immunization process is based on the incorporation of short DNA sequences from virulent phages into the CRISPR locus. Subsequently, CRISPR transcripts are processed into small interfering RNAs that guide a multifunctional protein complex to recognize and cleave matching foreign DNA. Hypervariable CRISPR loci provide insights into the phage and host population dynamics, and new avenues for enhanced phage resistance and genetic typing and tagging of industrial strains.", "title": "" }, { "docid": "8674128201d80772040446f1ab6a7cd1", "text": "In this paper, we present an attribute graph grammar for image parsing on scenes with man-made objects, such as buildings, hallways, kitchens, and living moms. We choose one class of primitives - 3D planar rectangles projected on images and six graph grammar production rules. 
Each production rule not only expands a node into its components, but also includes a number of equations that constrain the attributes of a parent node and those of its children. Thus our graph grammar is context sensitive. The grammar rules are used recursively to produce a large number of objects and patterns in images and thus the whole graph grammar is a type of generative model. The inference algorithm integrates bottom-up rectangle detection which activates top-down prediction using the grammar rules. The final results are validated in a Bayesian framework. The output of the inference is a hierarchical parsing graph with objects, surfaces, rectangles, and their spatial relations. In the inference, the acceptance of a grammar rule means recognition of an object, and actions are taken to pass the attributes between a node and its parent through the constraint equations associated with this production rule. When an attribute is passed from a child node to a parent node, it is called bottom-up, and the opposite is called top-down", "title": "" }, { "docid": "7c08e0580557961573e95cf0d794634c", "text": "Nowadays, the development of traditional business models become more and more mature that people use them to guide various kinds of E-business activities. Internet of things(IoT), being an innovative revolution over the Internet, becomes a new platform for E-business. However, old business models could hardly fit for the E-business on the IoT. In this article, we 1) propose an IoT E-business model, which is specially designed for the IoT E-business; 2) redesign many elements in traditional E-business models; 3) realize the transaction of smart property and paid data on the IoT with the help of P2P trade based on the Blockchain and smart contract. We also experiment our design and make a comprehensive discuss.", "title": "" }, { "docid": "a44264e4c382204606fdb140ab485617", "text": "Atrophoderma vermiculata is a rare genodermatosis with usual onset in childhood, characterized by a \"honey-combed\" reticular atrophy of the cheeks. The course is generally slow, with progressive worsening. We report successful treatment of 2 patients by means of the carbon dioxide and 585 nm pulsed dye lasers.", "title": "" }, { "docid": "42cfea27f8dcda6c58d2ae0e86f2fb1a", "text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle mounted laser range scanner and a ccd camera. Through experimentations, we have shown that a clustered particle filter can be used to efficiently extract lane markings.", "title": "" }, { "docid": "233df6d1258d73854f9555f22826a766", "text": "The NARX network is a dynamical neural architecture commonly used for input-output modeling of nonlinear dynamical systems. When applied to time series prediction, the NARX network is designed as a feedforward time delay neural network (TDNN), i.e., without the feedback loop of delayed outputs, reducing substantially its predictive performance. 
In this paper, it is shown that the original architecture of the NARX network can be easily and efficiently applied to prediction of time series using embedding theory to reconstruct the input of NARX network. We evaluate the proposed approach using a real-world data set, which is the vibration data measured from a Co2 compressor. The results show that the proposed approach consistently outperforms standard neural network based predictors, such as the TDNN architecture.", "title": "" }, { "docid": "073c17ac03485bc79d02990e87458743", "text": "Capital structure is most significant discipline of company’s operations. This researcher constitutes an attempt to identify the impact between Capital Structure and Companies Performance, taking into consideration the level of Companies Financial Performance. The analyze has been made the capital structure and its impact on Financial Performance capacity during 2005 to 2009 (05 years) financial year of Business companies in Sri Lanka. The results shown the relationship between the capital structure and financial performance is negative association at -0.114. Co-efficient of determination is 0.013. F and t values are 0.366, -0.605 respectively. It is reflect the insignificant level of the Business Companies in Sri Lanka. Hence Business companies mostly depend on the debt capital. Therefore, they have to pay interest expenses much.", "title": "" }, { "docid": "7e127a6f25e932a67f333679b0d99567", "text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.", "title": "" }, { "docid": "f8aeaf04486bdbc7254846d95e3cab24", "text": "In this paper, we present a novel wearable RGBD camera based navigation system for the visually impaired. The system is composed of a smartphone user interface, a glass-mounted RGBD camera device, a real-time navigation algorithm, and haptic feedback system. A smartphone interface provides an effective way to communicate to the system using audio and haptic feedback. In order to extract orientational information of the blind users, the navigation algorithm performs real-time 6-DOF feature based visual odometry using a glass-mounted RGBD camera as an input device. The navigation algorithm also builds a 3D voxel map of the environment and analyzes 3D traversability. A path planner of the navigation algorithm integrates information from the egomotion estimation and mapping and generates a safe and an efficient path to a waypoint delivered to the haptic feedback system. 
The haptic feedback system consisting of four micro-vibration motors is designed to guide the visually impaired user along the computed path and to minimize cognitive loads. The proposed system achieves real-time performance faster than 30Hz in average on a laptop, and helps the visually impaired extends the range of their activities and improve the mobility performance in a cluttered environment. The experiment results show that navigation in indoor environments with the proposed system avoids collisions successfully and improves mobility performance of the user compared to conventional and state-of-the-art mobility aid devices.", "title": "" }, { "docid": "af43b52ef9d996d4f6f6196d6b201c91", "text": "The comparison between TCP and UDP tunnels have not been sufficiently reported in the scientific literature. In this work, we use OpenVPN as a platform to demonstrate the performance between TCP/UDP. The de facto belief has been that TCP tunnel provides a permanent tunnel and therefore ensures a reliable transfer of data between two end points. However the effects of transmitting TCP within a UDP tunnel has been explored and could provide a valuable attempt. The results provided in this paper demonstrates that indeed TCP in UDP tunnel provides better latency. Throughout this paper, a series of tests have been performed, UDP traffic was sent inside UDP tunnel and TCP tunnel successively. The same tests was performed using TCP traffic.", "title": "" }, { "docid": "5275184686a8453a1922cec7a236b66d", "text": "Children’s sense of relatedness is vital to their academic motivation from 3rd to 6th grade. Children’s (n 641) reports of relatedness predicted changes in classroom engagement over the school year and contributed over and above the effects of perceived control. Regression and cumulative risk analyses revealed that relatedness to parents, teachers, and peers each uniquely contributed to students’ engagement, especially emotional engagement. Girls reported higher relatedness than boys, but relatedness to teachers was a more salient predictor of engagement for boys. Feelings of relatedness to teachers dropped from 5th to 6th grade, but the effects of relatedness on engagement were stronger for 6th graders. Discussion examines theoretical, empirical, and practical implications of relatedness as a key predictor of children’s academic motivation and performance.", "title": "" }, { "docid": "912a05d1ee733d85d3dbe6b63c986a44", "text": "Keyphrases efficiently summarize a document’s content and are used in various document processing and retrieval tasks. Several unsupervised techniques and classifiers exist for extracting keyphrases from text documents. Most of these methods operate at a phrase-level and rely on part-of-speech (POS) filters for candidate phrase generation. In addition, they do not directly handle keyphrases of varying lengths. We overcome these modeling shortcomings by addressing keyphrase extraction as asequential labelingtask in this paper. We explore a basic set of features commonly used in NLP tasks as well as predictions from various unsupervised methods to train our taggers. In addition to a more natural modeling for the keyphrase extraction problem, we show that tagging models yield significant performance benefits over existing stateof-the-art extraction methods.", "title": "" }, { "docid": "e4b3e5fa0820dbbe07f1ac005dc796dd", "text": "Alzheimer's disease is an irreversible, progressive neurodegenerative disorder. 
Various therapeutic approaches are being used to improve the cholinergic neurotransmission, but their role in AD pathogenesis is still unknown. Although, an increase in tau protein concentration in CSF has been described in AD, but several issues remains unclear. Extensive and accurate analysis of CSF could be helpful to define presence of tau proteins in physiological conditions, or released during the progression of neurodegenerative disease. The amyloid cascade hypothesis postulates that the neurodegeneration in AD caused by abnormal accumulation of amyloid beta (Aβ) plaques in various areas of the brain. The amyloid hypothesis has continued to gain support over the last two decades, particularly from genetic studies. Therefore, current research progress in several areas of therapies shall provide an effective treatment to cure this devastating disease. This review critically evaluates general biochemical and physiological functions of Aβ directed therapeutics and their relevance.", "title": "" }, { "docid": "e6e19f678bfe46d8390e32f28f1d675d", "text": "In this paper, a miniaturized printed dipole antenna with the V-shaped ground is proposed for radio frequency identification (RFID) readers operating at the frequency of 2.45 GHz. The principles of the microstrip balun and the printed dipole are analyzed and design considerations are formulated. Through extending and shaping the ground to reduce the coupling between the balun and the dipole, the antenna’s impedance bandwidth is broadened and the antenna’s radiation pattern is improved. The 3D finite difference time domain (FDTD) Electromagnetic simulations are carried out to evaluate the antenna’s performance. The effects of the extending angle and the position of the ground are investigated to obtain the optimized parameters. The antenna was fabricated and measured in a microwave anechoic chamber. The results show that the proposed antenna achieves a broader impedance bandwidth, a higher forward radiation gain and a stronger suppression to backward radiation compared with the one without such a ground.", "title": "" }, { "docid": "8a45e83904913f8e4fbb7c59ff5d056c", "text": "The present article examines the nature and function of human agency within the conceptual model of triadic reciprocal causation. In analyzing the operation of human agency in this interactional causal structure, social cognitive theory accords a central role to cognitive, vicarious, self-reflective, and self-regulatory processes. The issues addressed concern the psychological mechanisms through which personal agency is exercised, the hierarchical structure of self-regulatory systems, eschewal of the dichotomous construal of self as agent and self as object, and the properties of a nondualistic but nonreductional conception of human agency. The relation of agent causality to the fundamental issues of freedom and determinism is also analyzed.", "title": "" }, { "docid": "c5d06fe50c16278943fe1df7ad8be888", "text": "Current main memory organizations in embedded and mobile application systems are DRAM dominated. The ever-increasing gap between today's processor and memory speeds makes the DRAM subsystem design a major aspect of computer system design. However, the limitations to DRAM scaling and other challenges like refresh provide undesired trade-offs between performance, energy and area to be made by architecture designers. 
Several emerging NVM options are being explored to at least partly remedy this but today it is very hard to assess the viability of these proposals because the simulations are not fully based on realistic assumptions on the NVM memory technologies and on the system architecture level. In this paper, we propose to use realistic, calibrated STT-MRAM models and a well calibrated cross-layer simulation and exploration framework, named SEAT, to better consider technologies aspects and architecture constraints. We will focus on general purpose/mobile SoC multi-core architectures. We will highlight results for a number of relevant benchmarks, representatives of numerous applications based on actual system architecture. The most energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 27% at the cost of 2x the area and the least energy efficient STT-MRAM based main memory proposal provides an average energy consumption reduction of 8% at the around the same area or lesser when compared to DRAM.", "title": "" }, { "docid": "e0160911f70fa836f64c08f721f6409e", "text": "Today’s openly available knowledge bases, such as DBpedia, Yago, Wikidata or Freebase, capture billions of facts about the world’s entities. However, even the largest among these (i) are still limited in up-to-date coverage of what happens in the real world, and (ii) miss out on many relevant predicates that precisely capture the wide variety of relationships among entities. To overcome both of these limitations, we propose a novel approach to build on-the-fly knowledge bases in a query-driven manner. Our system, called QKBfly, supports analysts and journalists as well as question answering on emerging topics, by dynamically acquiring relevant facts as timely and comprehensively as possible. QKBfly is based on a semantic-graph representation of sentences, by which we perform three key IE tasks, namely named-entity disambiguation, co-reference resolution and relation extraction, in a light-weight and integrated manner. In contrast to Open IE, our output is canonicalized. In contrast to traditional IE, we capture more predicates, including ternary and higher-arity ones. Our experiments demonstrate that QKBfly can build high-quality, on-the-fly knowledge bases that can readily be deployed, e.g., for the task of ad-hoc question answering. PVLDB Reference Format: D. B. Nguyen, A. Abujabal, N. K. Tran, M. Theobald, and G. Weikum. Query-Driven On-The-Fly Knowledge Base Construction. PVLDB, 11 (1): 66-7 , 2017. DOI: 10.14778/3136610.3136616", "title": "" }, { "docid": "0eff5b8ec08329b4a5d177baab1be512", "text": "In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. 
We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.", "title": "" } ]
scidocsrr
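One passage in the row above argues that the NARX architecture can be applied to time-series prediction by using embedding theory to reconstruct its input. As a rough illustration of that input-reconstruction step only, the sketch below builds a delay-embedded design matrix from a synthetic signal and fits a plain least-squares predictor in place of the actual NARX network; the signal, embedding dimension, and delay are assumed values chosen for illustration, not parameters from the paper.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Build rows [x(t), x(t-tau), ..., x(t-(dim-1)*tau)] with one-step-ahead targets."""
    start = (dim - 1) * tau
    rows = [[x[t - k * tau] for k in range(dim)] for t in range(start, len(x) - 1)]
    targets = x[start + 1 : len(x)]
    return np.array(rows), np.array(targets)

# Synthetic stand-in signal; the paper evaluates on measured compressor vibration data instead.
t = np.arange(0, 60, 0.05)
x = np.sin(t) + 0.3 * np.sin(2.7 * t)

dim, tau = 6, 2                      # embedding dimension and delay (assumed)
X, y = delay_embed(x, dim, tau)

# A linear autoregressive map fitted by least squares stands in for the trained
# NARX/TDNN mapping from embedded inputs to the next sample.
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)

rmse = np.sqrt(np.mean((A @ w - y) ** 2))
print("one-step-ahead RMSE on the training signal:", rmse)
```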
e954ead1414ffadb605dc7f7c7893d66
8×8 Phased series fed patch antenna array at 28 GHz for 5G mobile base station antennas
[ { "docid": "9900d928d601e62cf8480cb28d3574e9", "text": "Cellular technology has dramatically changed our society and the way we communicate. First it impacted voice telephony, and then has been making inroads into data access, applications, and services. However, today potential capabilities of the Internet have not yet been fully exploited by cellular systems. With the advent of 5G we will have the opportunity to leapfrog beyond current Internet capabilities.", "title": "" }, { "docid": "15b38be44110ded3407b152af2f65457", "text": "What will 5G be? What it will not be is an incremental advance on 4G. The previous four generations of cellular technology have each been a major paradigm shift that has broken backward compatibility. Indeed, 5G will need to be a paradigm shift that includes very high carrier frequencies with massive bandwidths, extreme base station and device densities, and unprecedented numbers of antennas. However, unlike the previous four generations, it will also be highly integrative: tying any new 5G air interface and spectrum together with LTE and WiFi to provide universal high-rate coverage and a seamless user experience. To support this, the core network will also have to reach unprecedented levels of flexibility and intelligence, spectrum regulation will need to be rethought and improved, and energy and cost efficiencies will become even more critical considerations. This paper discusses all of these topics, identifying key challenges for future research and preliminary 5G standardization activities, while providing a comprehensive overview of the current literature, and in particular of the papers appearing in this special issue.", "title": "" }, { "docid": "ed676ff14af6baf9bde3bdb314628222", "text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.", "title": "" } ]
[ { "docid": "233427420d0ff900736ca0692b281ed5", "text": "Machine learning is useful for grid-based crime prediction. Many previous studies have examined factors including time, space, and type of crime, but the geographic characteristics of the grid are rarely discussed, leaving prediction models unable to predict crime displacement. This study incorporates the concept of a criminal environment in grid-based crime prediction modeling, and establishes a range of spatial-temporal features based on 84 types of geographic information by applying the Google Places API to theft data for Taoyuan City, Taiwan. The best model was found to be Deep Neural Networks, which outperforms the popular Random Decision Forest, Support Vector Machine, and K-Near Neighbor algorithms. After tuning, compared to our design’s baseline 11-month moving average, the F1 score improves about 7% on 100-by-100 grids. Experiments demonstrate the importance of the geographic feature design for improving performance and explanatory ability. In addition, testing for crime displacement also shows that our model design outperforms the baseline.", "title": "" }, { "docid": "00331b2412192410166fc82af2253eb8", "text": "In an effort toward standardization, this paper evaluates the performance of five eye-movement classification algorithms in terms of their assessment of oculomotor fixation and saccadic behavior. The results indicate that performance of these five commonly used algorithms vary dramatically, even in the case of a simple stimulus-evoked task using a single, common threshold value. The important contributions of this paper are: evaluation and comparison of performance of five algorithms to classify specific oculomotor behavior; introduction and comparison of new standardized scores to provide more reliable classification performance; logic for a reasonable threshold-value selection for any eye-movement classification algorithm based on the standardized scores; and logic for establishing a criterion-based baseline for performance comparison between any eye-movement classification algorithms. Proposed techniques enable efficient and objective clinical applications providing means to assure meaningful automated eye-movement classification.", "title": "" }, { "docid": "18b78d3b94b077c481792d51b73549b0", "text": "In recent years a number of solvers for the direct solution of large sparse symmetric linear systems of equations have been developed. These include solvers that are designed for the solution of positive definite systems as well as those that are principally intended for solving indefinite problems. In this study, we use performance profiles as a tool for evaluating and comparing the performance of serial sparse direct solvers on an extensive set of symmetric test problems taken from a range of practical applications.", "title": "" }, { "docid": "100152f120c93e5845bd11eb66d3d46b", "text": "Mobile Augmented Reality (AR) applications allow the user to interact with virtual objects positioned within the real world via a smart phone, tablet or smart glasses. As the popularity of these applications grows, recent researchers have identified several security and privacy issues pertaining to the collection and storage of sensitive data from device sensors. Location-based AR applications typically not only collect user location data, but transmit it to a remote server in order to download nearby virtual content. 
In this paper we show that the pattern of network traffic generated by this process alone can be used to infer the user’s location. We demonstrate a side-channel attack against a widely available Mobile AR application inspired by Website Fingerprinting methods. Through the strategic placement of virtual content and prerecording of the network traffic produced by interacting with this content, we are able to identify the location of a user within the target area with an accuracy of 94%.This finding reveals a previously unexplored vulnerability in the implementation of Mobile AR applications and we offer several recommendations to mitigate this threat.", "title": "" }, { "docid": "4b2b199aeb61128cbee7691bc49e16f5", "text": "Although deep learning approaches have achieved performance surpassing humans for still image-based face recognition, unconstrained video-based face recognition is still a challenging task due to large volume of data to be processed and intra/inter-video variations on pose, illumination, occlusion, scene, blur, video quality, etc. In this work, we consider challenging scenarios for unconstrained video-based face recognition from multiple-shot videos and surveillance videos with low-quality frames. To handle these problems, we propose a robust and efficient system for unconstrained video-based face recognition, which is composed of face/fiducial detection, face association, and face recognition. First, we use multi-scale single-shot face detectors to efficiently localize faces in videos. The detected faces are then grouped respectively through carefully designed face association methods, especially for multi-shot videos. Finally, the faces are recognized by the proposed face matcher based on an unsupervised subspace learning approach and a subspace-tosubspace similarity metric. Extensive experiments on challenging video datasets, such as Multiple Biometric Grand Challenge (MBGC), Face and Ocular Challenge Series (FOCS), JANUS Challenge Set 6 (CS6) for low-quality surveillance videos and IARPA JANUS Benchmark B (IJB-B) for multiple-shot videos, demonstrate that the proposed system can accurately detect and associate faces from unconstrained videos and effectively learn robust and discriminative features for recognition.", "title": "" }, { "docid": "50603dae3b5131ba4e6d956d57402e10", "text": "Due to the spread of color laser printers to the general public, numerous forgeries are made by color laser printers. Printer identification is essential to preventing damage caused by color laser printed forgeries. This paper presents a new method to identify a color laser printer using photographed halftone images. First, we preprocess the photographed images to extract the halftone pattern regardless of the variation of the illumination conditions. Then, 15 halftone texture features are extracted from the preprocessed images. A support vector machine is used to be trained and classify the extracted features. Experiments are performed on seven color laser printers. The experimental results show that the proposed method is suitable for identifying the source color laser printer using photographed images.", "title": "" }, { "docid": "8a634e7bf127f2a90227c7502df58af0", "text": "A convex channel surface with Si0.8Ge0.2 is proposed to enhance the retention time of a capacitorless DRAM Generation 2 type of capacitorless DRAM cell. 
This structure provides a physical well together with an electrostatic barrier to more effectively store holes and thereby achieve larger sensing margin as well as retention time. The advantages of this new cell design as compared with the planar cell design are assessed via twodimensional device simulations. The results indicate that the convex heterojunction channel design is very promising for future capacitorless DRAM. Keywords-Capacitorless DRAM; Retention Time; Convex Channel; Silicon Germanium;", "title": "" }, { "docid": "c0dbd6356ead3a9542c9ec20dd781cc7", "text": "This paper aims to address the importance of supportive teacher–student interactions within the learning environment. This will be explored through the three elements of the NSW Quality Teaching Model; Intellectual Quality, Quality Learning Environment and Significance. The paper will further observe the influences of gender on the teacher–student relationship, as well as the impact that this relationship has on student academic outcomes and behaviour. Teacher–student relationships have been found to have immeasurable effects on students’ learning and their schooling experience. This paper examines the ways in which educators should plan to improve their interactions with students, in order to allow for quality learning. This journal article is available in Journal of Student Engagement: Education Matters: http://ro.uow.edu.au/jseem/vol2/iss1/2 Journal of Student Engagement: Education matters 2012, 2 (1), 2–9 Lauren Liberante 2 The importance of teacher–student relationships, as explored through the lens of the NSW Quality Teaching Model", "title": "" }, { "docid": "ad6dc9f74e0fa3c544c4123f50812e14", "text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.", "title": "" }, { "docid": "38693524e69d494b95c311840d599c93", "text": "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with the hashtag ‘#sarcasm’. We collected a training corpus of about 78 thousand Dutch tweets with this hashtag. Assuming that the human labeling is correct (annotation of a sample indicates that about 85% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a test set of a day’s stream of 3.3 million Dutch tweets. Of the 135 explicitly marked tweets on this day, we detect 101 (75%) when we remove the hashtag. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 30% of the top-250 ranked tweets are indeed sarcastic. Analysis shows that sarcasm is often signalled by hyperbole, using intensifiers and exclamations; in contrast, non-hyperbolic sarcastic messages often receive an explicit marker. 
We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of nonverbal expressions that people employ in live interaction when conveying sarcasm.", "title": "" }, { "docid": "6718aa2ef60a8b7ebf47dbcaffd87848", "text": "One of the main development resources for website engineers are Web templates. Templates allow them to increase productivity by plugin content into already formatted and prepared pagelets. For the final user templates are also useful, because they provide uniformity and a common look and feel for all webpages. However, from the point of view of crawlers and indexers, templates are an important problem, because templates usually contain irrelevant information such as advertisements, menus, and banners. Processing and storing this information leads to a waste of resources (storage space, bandwidth, etc.). It has been measured that templates represent between 40% and 50% of data on the Web. Therefore, identifying templates is essential for indexing tasks. In this work we propose a novel method for automatic web template extraction that is based on similarity analysis between the DOM trees of a collection of webpages that are detected using an hyperlink analysis. Our implementation and experiments demonstrate the usefulness of the technique.", "title": "" }, { "docid": "bf6a5ff65a60da049c6024375e2effb6", "text": "This document updates RFC 4944, \"Transmission of IPv6 Packets over IEEE 802.15.4 Networks\". This document specifies an IPv6 header compression format for IPv6 packet delivery in Low Power Wireless Personal Area Networks (6LoWPANs). The compression format relies on shared context to allow compression of arbitrary prefixes. How the information is maintained in that shared context is out of scope. This document specifies compression of multicast addresses and a framework for compressing next headers. UDP header compression is specified within this framework. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.", "title": "" }, { "docid": "695e5694fd09292577552ad6eeb08713", "text": "For many robotics and intelligent vehicle applications, detection and tracking multiple objects (DATMO) is one of the most important components. However, most of the DATMO applications have difficulty in applying real-world applications due to high computational complexity. In this paper, we propose an efficient DATMO framework that fully employs the complementary information from the color camera and the 3D LIDAR. For high efficiency, we present a segmentation scheme by using both 2D and 3D information which gives accurate segments very quickly. In our experiments, we show that our framework can achieve the faster speed (~4Hz) than the state-of-the-art methods reported in KITTI benchmark (>1Hz).", "title": "" }, { "docid": "9d4c04d810e3c0f2211546c6da0e3f8d", "text": "In this paper, we propose to use deep policy networks which are trained with an advantage actor-critic method for statistically optimised dialogue systems. 
First, we show that, on summary state and action spaces, deep Reinforcement Learning (RL) outperforms Gaussian Processes methods. Summary state and action spaces lead to good performance but require pre-engineering effort, RL knowledge, and domain expertise. In order to remove the need to define such summary spaces, we show that deep RL can also be trained efficiently on the original state and action spaces. Dialogue systems based on partially observable Markov decision processes are known to require many dialogues to train, which makes them unappealing for practical deployment. We show that a deep RL method based on an actor-critic architecture can exploit a small amount of data very efficiently. Indeed, with only a few hundred dialogues collected with a handcrafted policy, the actorcritic deep learner is considerably bootstrapped from a combination of supervised and batch RL. In addition, convergence to an optimal policy is significantly sped up compared to other deep RL methods initialized on the data with batch RL. All experiments are performed on a restaurant domain derived from the Dialogue State Tracking Challenge 2 (DSTC2) dataset.", "title": "" }, { "docid": "0bbfd07d0686fc563f156d75d3672c7b", "text": "In this paper, we provide a comprehensive survey of the mixture of experts (ME). We discuss the fundamental models for regression and classification and also their training with the expectation-maximization algorithm. We follow the discussion with improvements to the ME model and focus particularly on the mixtures of Gaussian process experts. We provide a review of the literature for other training methods, such as the alternative localized ME training, and cover the variational learning of ME in detail. In addition, we describe the model selection literature which encompasses finding the optimum number of experts, as well as the depth of the tree. We present the advances in ME in the classification area and present some issues concerning the classification model. We list the statistical properties of ME, discuss how the model has been modified over the years, compare ME to some popular algorithms, and list several applications. We conclude our survey with future directions and provide a list of publicly available datasets and a list of publicly available software that implement ME. Finally, we provide examples for regression and classification. We believe that the study described in this paper will provide quick access to the relevant literature for researchers and practitioners who would like to improve or use ME, and that it will stimulate further studies in ME.", "title": "" }, { "docid": "66fc8b47dd186fa17240ee64aadf7ca7", "text": "Posterior reversible encephalopathy syndrome (PRES) is characterized by variable associations of seizure activity, consciousness impairment, headaches, visual abnormalities, nausea/vomiting, and focal neurological signs. The PRES may occur in diverse situations. The findings on neuroimaging in PRES are often symmetric and predominate edema in the white matter of the brain areas perfused by the posterior brain circulation, which is reversible when the underlying cause is treated. We report the case of PRES in normotensive patient with hyponatremia.", "title": "" }, { "docid": "3630c575bf7b5250930c7c54d8a1c6d0", "text": "The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. 
Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine.", "title": "" }, { "docid": "17fb585ff12cff879febb32c2a16b739", "text": "An electroencephalography (EEG) based Brain Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals while essential for effective operation of BCI systems is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature presentations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and the competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.", "title": "" }, { "docid": "17cbead431425018818b649b1b69b527", "text": "In this letter, a flexible memory simulator - NVMain 2.0, is introduced to help the community for modeling not only commodity DRAMs but also emerging memory technologies, such as die-stacked DRAM caches, non-volatile memories (e.g., STT-RAM, PCRAM, and ReRAM) including multi-level cells (MLC), and hybrid non-volatile plus DRAM memory systems. Compared to existing memory simulators, NVMain 2.0 features a flexible user interface with compelling simulation speed and the capability of providing sub-array-level parallelism, fine-grained refresh, MLC and data encoder modeling, and distributed energy profiling.", "title": "" }, { "docid": "102ed611c0c32cfae33536706d5b3fbf", "text": "In this paper we model user behaviour in Twitter to capture the emergence of trending topics. For this purpose, we first extensively analyse tweet datasets of several different events. In particular, for these datasets, we construct and investigate the retweet graphs. We find that the retweet graph for a trending topic has a relatively dense largest connected component (LCC). 
Next, based on the insights obtained from the analyses of the datasets, we design a mathematical model that describes the evolution of a retweet graph by three main parameters. We then quantify, analytically and by simulation, the influence of the model parameters on the basic characteristics of the retweet graph, such as the density of edges and the size and density of the LCC. Finally, we put the model in practice, estimate its parameters and compare the resulting behavior of the model to our datasets.", "title": "" } ]
scidocsrr
820f8e69923176d4ecb5c1e6d2420932
IoT-based smart cities: A survey
[ { "docid": "86cb3c072e67bed8803892b72297812c", "text": "Internet of Things (IoT) will comprise billions of devices that can sense, communicate, compute and potentially actuate. Data streams coming from these devices will challenge the traditional approaches to data management and contribute to the emerging paradigm of big data. This paper discusses emerging Internet of Things (IoT) architecture, large scale sensor network applications, federating sensor networks, sensor data and related context capturing techniques, challenges in cloud-based management, storing, archiving and processing of", "title": "" }, { "docid": "f1e646a0627a5c61a0f73a41d35ccac7", "text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.", "title": "" }, { "docid": "cd891d5ecb9fa6bd8ae23e2a06151882", "text": "Smart City represents one of the most promising and prominent Internet of Things (IoT) applications. In the last few years, smart city concept has played an important role in academic and industry fields, with the development and deployment of various middleware platforms. However, this expansion has followed distinct approaches creating a fragmented scenario, in which different IoT ecosystems are not able to communicate between them. To fill this gap, there is a need to revisit the smart city IoT semantic and offer a global common approach. To this purpose, this paper browses the semantic annotation of the sensors in the cloud, and innovative services can be implemented and considered by bridging Clouds and IoT. Things-like semantic will be considered to perform the aggregation of heterogeneous resources by defining the Clouds of Things (CoT) paradigm. We survey the smart city vision, providing information on the main requirements and highlighting the benefits of integrating different IoT ecosystems within the cloud under this new CoT vision and discuss relevant challenges in this research area.", "title": "" } ]
[ { "docid": "59bb9a006844dcf7c5f1769a4b208744", "text": "3rd Generation Partnership Project (3GPP) has recently completed the specification of the Long Term Evolution (LTE) standard. Majority of the world’s operators and vendors are already committed to LTE deployments and developments, making LTE the market leader in the upcoming evolution to 4G wireless communication systems. Multiple input multiple output (MIMO) technologies introduced in LTE such as spatial multiplexing, transmit diversity, and beamforming are key components for providing higher peak rate at a better system efficiency, which are essential for supporting future broadband data service over wireless links. Further extension of LTE MIMO technologies is being studied under the 3GPP study item “LTE-Advanced” to meet the requirement of IMT-Advanced set by International Telecommunication Union Radiocommunication Sector (ITU-R). In this paper, we introduce various MIMO technologies employed in LTE and provide a brief overview on the MIMO technologies currently discussed in the LTE-Advanced forum.", "title": "" }, { "docid": "02c50512c053fb8df4537e125afea321", "text": "Online Social Networks (OSNs) have spread at stunning speed over the past decade. They are now a part of the lives of dozens of millions of people. The onset of OSNs has stretched the traditional notion of community to include groups of people who have never met in person but communicate with each other through OSNs to share knowledge, opinions, interests and activities. Here we explore in depth language independent gender classification. Our approach predicts gender using five colorbased features extracted from Twitter profiles such as the background color in a user’s profile page. This is in contrast with most existing methods for gender prediction that are language dependent. Those methods use high-dimensional spaces consisting of unique words extracted from such text fields as postings, user names, and profile descriptions. Our approach is independent of the user’s language, efficient, scalable, and computationally tractable, while attaining a good level of accuracy.", "title": "" }, { "docid": "14fac379b3d4fdfc0024883eba8431b3", "text": "PURPOSE\nTo summarize the literature addressing subthreshold or nondamaging retinal laser therapy (NRT) for central serous chorioretinopathy (CSCR) and to discuss results and trends that provoke further investigation.\n\n\nMETHODS\nAnalysis of current literature evaluating NRT with micropulse or continuous wave lasers for CSCR.\n\n\nRESULTS\nSixteen studies including 398 patients consisted of retrospective case series, prospective nonrandomized interventional case series, and prospective randomized clinical trials. All studies but one evaluated chronic CSCR, and laser parameters varied greatly between studies. Mean central macular thickness decreased, on average, by ∼80 μm by 3 months. Mean best-corrected visual acuity increased, on average, by about 9 letters by 3 months, and no study reported a decrease in acuity below presentation. No retinal complications were observed with the various forms of NRT used, but six patients in two studies with micropulse laser experienced pigmentary changes in the retinal pigment epithelium attributed to excessive laser settings.\n\n\nCONCLUSION\nBased on the current evidence, NRT demonstrates efficacy and safety in 12-month follow-up in patients with chronic and possibly acute CSCR. 
The NRT would benefit from better standardization of the laser settings and understanding of mechanisms of action, as well as further prospective randomized clinical trials.", "title": "" }, { "docid": "3a882bf8553b8a0be05f1a6edbe01090", "text": "We present a new deep learning approach for matching deformable shapes by introducing Shape Deformation Networks which jointly encode 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template, that parameterizes the surface, and (ii) a learnt global feature vector that parameterizes the transformation of the template into the input surface. By predicting this feature for a new shape, we implicitly predict correspondences between this shape and the template. We show that these correspondences can be improved by an additional step which improves the shape feature by minimizing the Chamfer distance between the input and transformed template. We demonstrate that our simple approach improves on stateof-the-art results on the difficult FAUST-inter challenge, with an average correspondence error of 2.88cm. We show, on the TOSCA dataset, that our method is robust to many types of perturbations, and generalizes to non-human shapes. This robustness allows it to perform well on real unclean, meshes from the the SCAPE dataset.", "title": "" }, { "docid": "ac56eb533e3ae40b8300d4269fd2c08f", "text": "We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.", "title": "" }, { "docid": "28f1b7635b777cf278cc8d53a5afafb9", "text": "Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. 
Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.", "title": "" }, { "docid": "200225a36d89de88a23bccedb54485ef", "text": "This paper presents new software speed records for encryption and decryption using the block cipher AES-128 for different architectures. Target platforms are 8-bit AVR microcontrollers, NVIDIA graphics processing units (GPUs) and the Cell broadband engine. The new AVR implementation requires 124.6 and 181.3 cycles per byte for encryption and decryption with a code size of less than two kilobyte. Compared to the previous AVR records for encryption our code is 38 percent smaller and 1.24 times faster. The byte-sliced implementation for the synergistic processing elements of the Cell architecture achieves speed of 11.7 and 14.4 cycles per byte for encryption and decryption. Similarly, our fastest GPU implementation, running on the GTX 295 and handling many input streams in parallel, delivers throughputs of 0.17 and 0.19 cycles per byte for encryption and decryption respectively. Furthermore, this is the first AES implementation for the GPU which implements both encryption and decryption.", "title": "" }, { "docid": "90d236f6ae1ad2a1404d6e1b497d8b3a", "text": "In this paper, we propose a distributed and adaptive hybrid medium access control (DAH-MAC) scheme for a single-hop Internet of Things (IoT)-enabled mobile ad hoc network supporting voice and data services. A hybrid superframe structure is designed to accommodate packet transmissions from a varying number of mobile nodes generating either delay-sensitive voice traffic or best-effort data traffic. Within each superframe, voice nodes with packets to transmit access the channel in a contention-free period (CFP) using distributed time division multiple access, while data nodes contend for channel access in a contention period (CP) using truncated carrier sense multiple access with collision avoidance. In the CFP, by adaptively allocating time slots according to instantaneous voice traffic load, the MAC exploits voice traffic multiplexing to increase the voice capacity. In the CP, a throughput optimization framework is proposed for the DAH-MAC, which maximizes the aggregate data throughput by adjusting the optimal contention window size according to voice and data traffic load variations. Numerical results show that the proposed MAC scheme outperforms existing quality-of-service-aware MAC schemes for voice and data traffic in the presence of heterogeneous traffic load dynamics.", "title": "" }, { "docid": "cc93fe4b851e3d7f3dcdcd2a54af6660", "text": "Positioning is a key task in most field robotics applications but can be very challenging in GPS-denied or high-slip environments. A common tactic in such cases is to position visually, and we present a visual odometry implementation with the unusual reliance on optical mouse sensors to report vehicle velocity. Using multiple kilometers of data from a lunar rover prototype, we demonstrate that, in conjunction with a moderate-grade inertial measurement unit, such a sensor can provide an integrated pose stream that is at times more accurate than that achievable by wheel odometry and visibly more desirable for perception purposes than that provided by a high-end GPS-INS system. 
A discussion of the sensor’s limitations and several drift mitigating strategies attempted are presented.", "title": "" }, { "docid": "9e0ded0d1f913dce7d0ea6aab115678c", "text": "DevOps is changing the way organizations develop and deploy applications and service customers. Many organizations want to apply DevOps, but they are concerned by the security aspects of the produced software. This has triggered the creation of the terms SecDevOps and DevSecOps. These terms refer to incorporating security practices in a DevOps environment by promoting the collaboration between the development teams, the operations teams, and the security teams. This paper surveys the literature from academia and industry to identify the main aspects of this trend. The main aspects that we found are: definition, security best practices, compliance, process automation, tools for SecDevOps, software configuration, team collaboration, availability of activity data and information secrecy. Although the number of relevant publications is low, we believe that the terms are not buzzwords, they imply important challenges that the security and software communities shall address to help organizations develop secure software while applying DevOps processes.", "title": "" }, { "docid": "2ebf4b32598ba3cd74513f1bab8fe447", "text": "Anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis is an autoimmune disorder of the central nervous system (CNS). Its immunopathogenesis has been proposed to include early cerebrospinal fluid (CSF) lymphocytosis, subsequent CNS disease restriction and B cell mechanism predominance. There are limited data regarding T cell involvement in the disease. To contribute to the current knowledge, we investigated the complex system of chemokines and cytokines related to B and T cell functions in CSF and sera samples from anti-NMDAR encephalitis patients at different time-points of the disease. One patient in our study group had a long-persisting coma and underwent extraordinary immunosuppressive therapy. Twenty-seven paired CSF/serum samples were collected from nine patients during the follow-up period (median 12 months, range 1–26 months). The patient samples were stratified into three periods after the onset of the first disease symptom and compared with the controls. Modified Rankin score (mRS) defined the clinical status. The concentrations of the chemokines (C-X-C motif ligand (CXCL)10, CXCL8 and C-C motif ligand 2 (CCL2)) and the cytokines (interferon (IFN)γ, interleukin (IL)4, IL7, IL15, IL17A and tumour necrosis factor (TNF)α) were measured with Luminex multiple bead technology. The B cell-activating factor (BAFF) and CXCL13 concentrations were determined via enzyme-linked immunosorbent assay. We correlated the disease period with the mRS, pleocytosis and the levels of all of the investigated chemokines and cytokines. Non-parametric tests were used, a P value <0.05 was considered to be significant. The increased CXCL10 and CXCL13 CSF levels accompanied early-stage disease progression and pleocytosis. The CSF CXCL10 and CXCL13 levels were the highest in the most complicated patient. The CSF BAFF levels remained unchanged through the periods. In contrast, the CSF levels of T cell-related cytokines (INFγ, TNFα and IL17A) and IL15 were slightly increased at all of the periods examined. No dynamic changes in chemokine and cytokine levels were observed in the peripheral blood. 
Our data support the hypothesis that anti-NMDAR encephalitis is restricted to the CNS and that chemoattraction of immune cells dominates at its early stage. Furthermore, our findings raise the question of whether T cells are involved in this disease.", "title": "" }, { "docid": "8245472f3dad1dce2f81e21b53af5793", "text": "Butanol is an aliphatic saturated alcohol having the molecular formula of C(4)H(9)OH. Butanol can be used as an intermediate in chemical synthesis and as a solvent for a wide variety of chemical and textile industry applications. Moreover, butanol has been considered as a potential fuel or fuel additive. Biological production of butanol (with acetone and ethanol) was one of the largest industrial fermentation processes early in the 20th century. However, fermentative production of butanol had lost its competitiveness by 1960s due to increasing substrate costs and the advent of more efficient petrochemical processes. Recently, increasing demand for the use of renewable resources as feedstock for the production of chemicals combined with advances in biotechnology through omics, systems biology, metabolic engineering and innovative process developments is generating a renewed interest in fermentative butanol production. This article reviews biotechnological production of butanol by clostridia and some relevant fermentation and downstream processes. The strategies for strain improvement by metabolic engineering and further requirements to make fermentative butanol production a successful industrial process are also discussed.", "title": "" }, { "docid": "d46434bbbf73460bf422ebe4bd65b590", "text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.", "title": "" }, { "docid": "bf4c0356b53f13fc2327dcf7c3377a8f", "text": "This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog. Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog. Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages. Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions. Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs. 
Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.", "title": "" }, { "docid": "adc84153f83ad1587a4218d817befe8d", "text": "Improving the sluggish kinetics for the electrochemical reduction of water to molecular hydrogen in alkaline environments is one key to reducing the high overpotentials and associated energy losses in water-alkali and chlor-alkali electrolyzers. We found that a controlled arrangement of nanometer-scale Ni(OH)(2) clusters on platinum electrode surfaces manifests a factor of 8 activity increase in catalyzing the hydrogen evolution reaction relative to state-of-the-art metal and metal-oxide catalysts. In a bifunctional effect, the edges of the Ni(OH)(2) clusters promoted the dissociation of water and the production of hydrogen intermediates that then adsorbed on the nearby Pt surfaces and recombined into molecular hydrogen. The generation of these hydrogen intermediates could be further enhanced via Li(+)-induced destabilization of the HO-H bond, resulting in a factor of 10 total increase in activity.", "title": "" }, { "docid": "64e26b00bba3bba8d2ab77b44f049c58", "text": "The transmission properties of a folded corrugated substrate integrated waveguide (FCSIW) and a proposed half-mode FCSIW is investigated. For the same cut-off frequency, these structures have similar performance to CSIW and HMCSIW respectively, but with significantly reduced width. The top wall is isolated from the bottom wall at DC thereby permitting active devices to be connected directly to, and biased through them. Arrays of quarter-wave stubs above the top wall allow TE1,0 mode conduction currents to flow between the top and side walls. Measurements and simulations of waveguides designed to have a nominal cut-off frequency of 3 GHz demonstrate the feasibility of these compact waveguides.", "title": "" }, { "docid": "87ce9a23040f809d1af2f7d032be2b41", "text": "BACKGROUND\nThe majority of middle-aged to older patients with chronic conditions report forgetting to take medications as prescribed. The promotion of patients' smartphone medication reminder app (SMRA) use shows promise as a feasible and cost-effective way to support their medication adherence. Providing training on SMRA use, guided by the technology acceptance model (TAM), could be a promising intervention to promote patients' app use.\n\n\nOBJECTIVE\nThe aim of this pilot study was to (1) assess the feasibility of an SMRA training session designed to increase patients' intention to use the app through targeting perceived usefulness of app, perceived ease of app use, and positive subjective norm regarding app use and (2) understand the ways to improve the design and implementation of the training session in a hospital setting.\n\n\nMETHODS\nA two-group design was employed. A total of 11 patients older than 40 years (median=58, SD=9.55) and taking 3 or more prescribed medications took part in the study on one of two different dates as participants in either the training group (n=5) or nontraining group (n=6). The training group received an approximately 2-hour intervention training session designed to target TAM variables regarding one popular SMRA, the Medisafe app. The nontraining group received an approximately 2-hour control training session where the participants individually explored Medisafe app features. 
Each training session was concluded with a one-time survey and a one-time focus group.\n\n\nRESULTS\nMann-Whitney U tests revealed that the level of perceived ease of use (P=.13) and the level of intention to use an SMRA (P=.33) were higher in the training group (median=7.00, median=6.67, respectively) than in the nontraining group (median=6.25, median=5.83). However, the level of perceived usefulness (U=4.50, Z=-1.99, P=.05) and the level of positive subjective norm (P=.25) were lower in the training group (median=6.50, median=4.29) than in the nontraining group (median=6.92, median=4.50). Focus groups revealed the following participants' perceptions of SMRA use in the real-world setting that the intervention training session would need to emphasize in targeting perceived usefulness and positive subjective norm: (1) the participants would find an SMRA to be useful if they thought the app could help address specific struggles in medication adherence in their lives and (2) the participants think that their family members (or health care providers) might view positively the participants' SMRA use in primary care settings (or during routine medical checkups).\n\n\nCONCLUSIONS\nIntervention training session, guided by TAM, appeared feasible in targeting patients' perceived ease of use and, thereby, increasing intention to use an SMRA. Emphasizing the real-world utility of SMRA, the training session could better target patients' perceived usefulness and positive subjective norm that are also important in increasing their intention to use the app.", "title": "" }, { "docid": "ab05c141b9d334f488cfb08ad9ed2137", "text": "Cellular communications are undergoing significant evolutions in order to accommodate the load generated by increasingly pervasive smart mobile devices. Dynamic access network adaptation to customers' demands is one of the most promising paths taken by network operators. To that end, one must be able to process large amount of mobile traffic data and outline the network utilization in an automated manner. In this paper, we propose a framework to analyze broad sets of Call Detail Records (CDRs) so as to define categories of mobile call profiles and classify network usages accordingly. We evaluate our framework on a CDR dataset including more than 300 million calls recorded in an urban area over 5 months. We show how our approach allows to classify similar network usage profiles and to tell apart normal and outlying call behaviors.", "title": "" }, { "docid": "7ebd355d65c8de8607da0363e8c86151", "text": "In this letter, we compare the scanning beams of two leaky-wave antennas (LWAs), respectively, loaded with capacitive and inductive radiation elements, which have not been fully discussed in previous publications. It is pointed out that an LWA with only one type of radiation element suffers from a significant gain fluctuation over its beam-scanning band. To remedy this problem, we propose an LWA alternately loaded with inductive and capacitive elements along the host transmission line. The proposed LWA is able to steer its beam continuously from backward to forward with constant gain. A microstrip-based LWA is designed on the basis of the proposed method, and the measurement of its fabricated prototype demonstrates and confirms the desired results. 
This design method can widely be used to obtain LWAs with constant gain based on a variety of TLs.", "title": "" }, { "docid": "a74871212b708baea289ee42665c8adf", "text": "Current data mining techniques used to create failure predictors for online services require massive amounts of data to build, train, and test the predictors. These operations are tedious, time consuming, and are not done in real-time. Also, the accuracy of the resulting predictor is highly compromised by changes that affect the environment and working conditions of the predictor. We propose a new approach to creating a dynamic failure predictor for online services in real-time and keeping its accuracy high during the services run-time changes. We use synthetic transactions during the run-time lifecycle to generate current data about the service. This data is used in its ephemeral state to build, train, test, and maintain an up-to-date failure predictor. We implemented the proposed approach in a large-scale online ad service that processes billions of requests each month in six data centers distributed in three continents. We show that the proposed predictor is able to maintain failure prediction accuracy as high as 86% during online service changes, whereas the accuracy of the state-of-the-art predictors may drop to less than 10%.", "title": "" } ]
scidocsrr
6576d351efa9dbcd3a5f6b38a24f65c8
Sensor Cloud: A Cloud of Virtual Sensors
[ { "docid": "fa3c52e9b3c4a361fd869977ba61c7bf", "text": "The combination of the Internet and emerging technologies such as nearfield communications, real-time localization, and embedded sensors lets us transform everyday objects into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of Things and enable novel computing applications. As a step toward design and architectural principles for smart objects, the authors introduce a hierarchy of architectures with increasing levels of real-world awareness and interactivity. In particular, they describe activity-, policy-, and process-aware smart objects and demonstrate how the respective architectural abstractions support increasingly complex application.", "title": "" } ]
[ { "docid": "e737c117cd6e7083cd50069b70d236cb", "text": "In this article we discuss a data structure, which combines advantages of two different ways for representing graphs: adjacency matrix and collection of adjacency lists. This data structure can fast add and search edges (advantages of adjacency matrix), use linear amount of memory, let to obtain adjacency list for certain vertex (advantages of collection of adjacency lists). Basic knowledge of linked lists and hash tables is required to understand this article. The article contains examples of implementation on Java.", "title": "" }, { "docid": "68d6d818596518114dc829bb9ecc570f", "text": "Learning analytics is a significant area of technology-enhanced learning that has emerged during the last decade. This review of the field begins with an examination of the technological, educational and political factors that have driven the development of analytics in educational settings. It goes on to chart the emergence of learning analytics, including their origins in the 20th century, the development of data-driven analytics, the rise of learningfocused perspectives and the influence of national economic concerns. It next focuses on the relationships between learning analytics, educational data mining and academic analytics. Finally, it examines developing areas of learning analytics research, and identifies a series of future challenges.", "title": "" }, { "docid": "7ea1ad3f27cb76dc6fd0e4e0dd48b09e", "text": "This paper presents results on the modeling and control for an Unmanned Aerial Vehicle (UAV) kind quadrotor transporting a cable-suspended payload. The mathematical model is based on Euler-Lagrange formulation, where the integrated dynamics of the quadrotor, cable and payload are considered. An Interconnection and Damping Assignment Passivity - Based Control (IDA-PBC) for a quadrotor UAV transporting a cable-suspended payload is designed. The control objective is to transport the payload from point to point transfer with swing suppression along trajectory. The cable is considered rigid. Numerical simulations are carried out to validate the overall control approach.", "title": "" }, { "docid": "d9c514f3e1089f258732eef4a949fe55", "text": "Shading is a tedious process for artists involved in 2D cartoon and manga production given the volume of contents that the artists have to prepare regularly over tight schedule. While we can automate shading production with the presence of geometry, it is impractical for artists to model the geometry for every single drawing. In this work, we aim to automate shading generation by analyzing the local shapes, connections, and spatial arrangement of wrinkle strokes in a clean line drawing. By this, artists can focus more on the design rather than the tedious manual editing work, and experiment with different shading effects under different conditions. To achieve this, we have made three key technical contributions. First, we model five perceptual cues by exploring relevant psychological principles to estimate the local depth profile around strokes. Second, we formulate stroke interpretation as a global optimization model that simultaneously balances different interpretations suggested by the perceptual cues and minimizes the interpretation discrepancy. Lastly, we develop a wrinkle-aware inflation method to generate a height field for the surface to support the shading region computation. 
In particular, we enable the generation of two commonly-used shading styles: 3D-like soft shading and manga-style flat shading.", "title": "" }, { "docid": "56ec3abe17259cae868e17dc2163fc0e", "text": "This paper reports a case study about lessons learned and usability issues encountered in a usability inspection of a digital library system called the Networked Computer Science Technical Reference Library (NCSTRL). Using a co-discovery technique with a team of three expert usability inspectors (the authors), we performed a usability inspection driven by a broad set of anticipated user tasks. We found many good design features in NCSTRL, but the primary result of a usability inspection is a list of usability problems as candidates for fixing. The resulting problems are organized by usability problem type and by system functionality, with emphasis on the details of problems specific to digital library functions. The resulting usability problem list was used to illustrate a cost/importance analysis technique that trades off importance to fix against cost to fix. The problems are sorted by the ratio of importance to cost, producing a priority ranking for resolution.", "title": "" }, { "docid": "c7f0a749e38b3b7eba871fca80df9464", "text": "This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which has proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. This corpus is the first of its kind covering Classical Arabic text, and could be used for interesting applications for Modern Standard Arabic as well. This corpus will enable researchers to obtain empirical patterns and rules to build new anaphora resolution approaches. Also, this corpus can be used to train, optimize and evaluate existing approaches.", "title": "" }, { "docid": "2f566d97cf0949ae54276525b805239e", "text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to di¤erences in genre. Secondly, two new configurations for the processing of the disjunctor/connector order were found. 
In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second one two ambiguous elements are present, each of which functions both as a connector and", "title": "" }, { "docid": "bfcd6adc2df1cb6260696f9aeb4d4ea6", "text": "The microtubule-dependent GEF-H1 pathway controls synaptic re-networking and overall gene expression via regulating cytoskeleton dynamics. Understanding this pathway after ischemia is essential to developing new therapies for neuronal function recovery. However, how the GEF-H1 pathway is regulated following transient cerebral ischemia remains unknown. This study employed a rat model of transient forebrain ischemia to investigate alterations of the GEF-H1 pathway using Western blotting, confocal and electron microscopy, dephosphorylation analysis, and pull-down assay. The GEF-H1 activity was significantly upregulated by: (i) dephosphorylation and (ii) translocation to synaptic membrane and nuclear structures during the early phase of reperfusion. GEF-H1 protein was then downregulated in the brain regions where neurons were destined to undergo delayed neuronal death, but markedly upregulated in neurons that were resistant to the same episode of cerebral ischemia. Consistently, GTP-RhoA, a GEF-H1 substrate, was significantly upregulated after brain ischemia. Electron microscopy further showed that neuronal microtubules were persistently depolymerized in the brain region where GEF-H1 protein was downregulated after brain ischemia. The results demonstrate that the GEF-H1 activity is significantly upregulated in both vulnerable and resistant brain regions in the early phase of reperfusion. However, GEF-H1 protein is downregulated in the vulnerable neurons but upregulated in the ischemic resistant neurons during the recovery phase after ischemia. The initial upregulation of GEF-H1 activity may contribute to excitotoxicity, whereas the late upregulation of GEF-H1 protein may promote neuroplasticity after brain ischemia.", "title": "" }, { "docid": "39edb0849fcecb5261c51a071f19acfa", "text": "In 1899, Galton first captured ink-on-paper fingerprints of a single child from birth until the age of 4.5 years, manually compared the prints, and concluded that “the print of a child at the age of 2.5 years would serve to identify him ever after.” Since then, ink-on-paper fingerprinting and manual comparison methods have been superseded by digital capture and automatic fingerprint comparison techniques, but only a few feasibility studies on child fingerprint recognition have been conducted. Here, we present the first systematic and rigorous longitudinal study that addresses the following questions: 1) Do fingerprints of young children possess the salient features required to uniquely recognize a child? 2) If so, at what age can a child’s fingerprints be captured with sufficient fidelity for recognition? 3) Can a child’s fingerprints be used to reliably recognize the child as he ages? For this paper, we collected fingerprints of 309 children (0–5 years old) four different times over a one year period. We show, for the first time, that fingerprints acquired from a child as young as 6-h old exhibit distinguishing features necessary for recognition, and that state-of-the-art fingerprint technology achieves high recognition accuracy (98.9% true accept rate at 0.1% false accept rate) for children older than six months. 
In addition, we use mixed-effects statistical models to study the persistence of child fingerprint recognition accuracy and show that the recognition accuracy is not significantly affected over the one year time lapse in our data. Given rapidly growing requirements to recognize children for vaccination tracking, delivery of supplementary food, and national identification documents, this paper demonstrates that fingerprint recognition of young children (six months and older) is a viable solution based on available capture and recognition technology.", "title": "" }, { "docid": "0907539385c59f9bd476b2d1fb723a38", "text": "We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.", "title": "" }, { "docid": "d5c57af0f7ab41921ddb92a5de31c33a", "text": "This paper investigates how to blindly evaluate the visual quality of an image by learning rules from linguistic descriptions. Extensive psychological evidence shows that humans prefer to conduct evaluations qualitatively rather than numerically. The qualitative evaluations are then converted into the numerical scores to fairly benchmark objective image quality assessment (IQA) metrics. Recently, lots of learning-based IQA models are proposed by analyzing the mapping from the images to numerical ratings. However, the learnt mapping can hardly be accurate enough because some information has been lost in such an irreversible conversion from the linguistic descriptions to numerical scores. In this paper, we propose a blind IQA model, which learns qualitative evaluations directly and outputs numerical scores for general utilization and fair comparison. Images are represented by natural scene statistics features. A discriminative deep model is trained to classify the features into five grades, corresponding to five explicit mental concepts, i.e., excellent, good, fair, poor, and bad. A newly designed quality pooling is then applied to convert the qualitative labels into scores. The classification framework is not only much more natural than the regression-based models, but also robust to the small sample size problem. Thorough experiments are conducted on popular databases to verify the model's effectiveness, efficiency, and robustness.", "title": "" }, { "docid": "c89b903e497ebe8e8d89e8d1d931fae1", "text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. 
Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4f537c9e63bbd967e52f22124afa4480", "text": "Computer role playing games engage players through interleaved story and open-ended game play. We present an approach to procedurally generating, rendering, and making playable novel games based on a priori unknown story structures. These stories may be authored by humans or by computational story generation systems. Our approach couples player, designer, and algorithm to generate a novel game using preferences for game play style, general design aesthetics, and a novel story structure. Our approach is implemented in Game Forge, a system that uses search-based optimization to find and render a novel game world configuration that supports a sequence of plot points plus play style preferences. Additionally, Game Forge supports execution of the game through reactive control of game world logic and non-player character behavior.", "title": "" }, { "docid": "ce15521ba1e67b111f685f1c0b23a638", "text": "In this paper, we try to leverage a large-scale and multilingual knowledge base, Wikipedia, to help effectively analyze and organize Web information written in different languages. Based on the observation that one Wikipedia concept may be described by articles in different languages, we adapt existing topic modeling algorithm for mining multilingual topics from this knowledge base. The extracted 'universal' topics have multiple types of representations, with each type corresponding to one language. Accordingly, new documents of different languages can be represented in a space using a group of universal topics, which makes various multilingual Web applications feasible.", "title": "" }, { "docid": "dc84e401709509638a1a9e24d7db53e1", "text": "AIM AND OBJECTIVES\nExocrine pancreatic insufficiency caused by inflammation or pancreatic tumors results in nutrient malfunction by a lack of digestive enzymes and neutralization compounds. Despite satisfactory clinical results with current enzyme therapies, a normalization of fat absorption in patients is rare. An individualized therapy is required that includes high dosage of enzymatic units, usage of enteric coating, and addition of gastric proton pump inhibitors. The key goal to improve this therapy is to identify digestive enzymes with high activity and stability in the gastrointestinal tract.\n\n\nMETHODS\nWe cloned and analyzed three novel ciliate lipases derived from Tetrahymena thermophila. 
Using highly precise pH-STAT-titration and colorimetric methods, we determined stability and lipolytic activity under physiological conditions in comparison with commercially available porcine and fungal digestive enzyme preparations. We measured from pH 2.0 to 9.0, with different bile salts concentrations, and substrates such as olive oil and fat derived from pig diet.\n\n\nRESULTS\nCiliate lipases CL-120, CL-130, and CL-230 showed activities up to 220-fold higher than Creon, pancreatin standard, and rizolipase Nortase within a pH range from pH 2.0 to 9.0. They are highly active in the presence of bile salts and complex pig diet substrate, and more stable after incubation in human gastric juice compared with porcine pancreatic lipase and rizolipase.\n\n\nCONCLUSIONS\nThe newly cloned and characterized lipases fulfilled all requirements for high activity under physiological conditions. These novel enzymes are therefore promising candidates for an improved enzyme replacement therapy for exocrine pancreatic insufficiency.", "title": "" }, { "docid": "dba804ec55201a683e8f4d82dbd15b6a", "text": "We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras or even to our human visual system. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera (e.g., the Microsoft Kinect camera) with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or a simple parametric surface. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.", "title": "" }, { "docid": "de73980005a62a24820ed199fab082a3", "text": "Natural language interfaces offer end-users a familiar and convenient option for querying ontology-based knowledge bases. Several studies have shown that they can achieve high retrieval performance as well as domain independence. This paper focuses on usability and investigates if NLIs are useful from an end-user’s point of view. To that end, we introduce four interfaces each allowing a different query language and present a usability study benchmarking these interfaces. The results of the study reveal a clear preference for full sentences as query language and confirm that NLIs are useful for querying Semantic Web data.", "title": "" }, { "docid": "43269c32b765b0f5d5d0772e0b1c5906", "text": "Silver nanoparticles (AgNPs) have been synthesized by Lantana camara leaf extract through simple green route and evaluated their antibacterial and catalytic activities. 
The leaf extract (LE) itself acts as both reducing and stabilizing agent at once for desired nanoparticle synthesis. The colorless reaction mixture turns to yellowish brown attesting the AgNPs formation and displayed UV-Vis absorption spectra. Structural analysis confirms the crystalline nature and formation of fcc structured metallic silver with majority (111) facets. Morphological studies elicit the formation of almost spherical shaped nanoparticles and as AgNO3 concentration is increased, there is an increment in the particle size. The FTIR analysis evidences the presence of various functional groups of biomolecules of LE is responsible for stabilization of AgNPs. Zeta potential measurement attests the higher stability of synthesized AgNPs. The synthesized AgNPs exhibited good antibacterial activity when tested against Escherichia coli, Pseudomonas spp., Bacillus spp. and Staphylococcus spp. using standard Kirby-Bauer disc diffusion assay. Furthermore, they showed good catalytic activity on the reduction of methylene blue by L. camara extract which is monitored and confirmed by the UV-Vis spectrophotometer.", "title": "" }, { "docid": "76976c3c640f33b546999b6136150636", "text": "Investigations that require the exploitation of large volumes of face imagery are increasingly common in current forensic scenarios (e.g., Boston Marathon bombing), but effective solutions for triaging such imagery (i.e., low importance, moderate importance, and of critical interest) are not available in the literature. General issues for investigators in these scenarios are a lack of systems that can scale to volumes of images of the order of a few million, and a lack of established methods for clustering the face images into the unknown number of persons of interest contained in the collection. As such, we explore best practices for clustering large sets of face images (up to 1 million here) into large numbers of clusters (approximately 200 thousand) as a method of reducing the volume of data to be investigated by forensic analysts. Our analysis involves a performance comparison of several clustering algorithms in terms of the accuracy of grouping face images by identity, run-time, and efficiency in representing large datasets of face images in terms of compact and isolated clusters. For two different face datasets, a mugshot database (PCSO) and the well known unconstrained dataset, LFW, we find the rank-order clustering method to be effective in clustering accuracy, and relatively efficient in terms of run-time.", "title": "" } ]
scidocsrr
c07a68b567778d8078092945d68bc154
Crowdfunding : An Industrial Organization Perspective
[ { "docid": "540a6dd82c7764eedf99608359776e66", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" } ]
[ { "docid": "21b04c71f6c87b18f544f6b3f6570dd7", "text": "Fuzzy logic methods have been used successfully in many real-world applications, but the foundations of fuzzy logic remain under attack. Taken together, these two facts constitute a paradox. A second paradox is that almost all of the successful fuzzy logic applications are embedded controllers, while most of the theoretical papers on fuzzy methods deal with knowledge representation and reasoning. I hope to resolve these paradoxes by identifying which aspects of fuzzy logic render it useful in practice, and which aspects are inessential. My conclusions are based on a mathematical result, on a survey of literature on the use of fuzzy logic in heuristic control and in expert systems, and on practical experience in developing expert systems.<<ETX>>", "title": "" }, { "docid": "3c4a8623330c48558ca178a82b68f06c", "text": "Humans assimilate information from the traffic environment mainly through visual perception. Obviously, the dominant information required to conduct a vehicle can be acquired with visual sensors. However, in contrast to most other sensor principles, video signals contain relevant information in a highly indirect manner and hence visual sensing requires sophisticated machine vision and image understanding techniques. This paper provides an overview on the state of research in the field of machine vision for intelligent vehicles. The functional spectrum addressed covers the range from advanced driver assistance systems to autonomous driving. The organization of the article adopts the typical order in image processing pipelines that successively condense the rich information and vast amount of data in video sequences. Data-intensive low-level “early vision” techniques first extract features that are later grouped and further processed to obtain information of direct relevance for vehicle guidance. Recognition and classification schemes allow to identify specific objects in a traffic scene. Recently, semantic labeling techniques using convolutional neural networks have achieved impressive results in this field. High-level decisions of intelligent vehicles are often influenced by map data. The emerging role of machine vision in the mapping and localization process is illustrated at the example of autonomous driving. Scene representation methods are discussed that organize the information from all sensors and data sources and thus build the interface between perception and planning. Recently, vision benchmarks have been tailored to various tasks in traffic scene perception that provide a metric for the rich diversity of machine vision methods. Finally, the paper addresses computing architectures suited to real-time implementation. Throughout the paper, numerous specific examples and real world experiments with prototype vehicles are presented.", "title": "" }, { "docid": "7d53fcce145badeeaeff55b5299010b9", "text": "Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. 
However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regards to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of a software utilized by end users. Second, we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.", "title": "" }, { "docid": "961cc1dc7063706f8f66fc136da41661", "text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.", "title": "" }, { "docid": "c797e42772802ee9924a970593e5c81e", "text": "Information systems have been widely adopted to support service processes in various domains, e.g., in the telecommunication, finance, and health sectors. Recently, work on process mining showed how management of these processes, and engineering of supporting systems, can be guided by models extracted from the event logs that are recorded during process operation. In this work, we establish a queueing perspective in operational process mining. We propose to consider queues as first-class citizens and use queueing theory as a basis for queue mining techniques. To demonstrate the value of queue mining, we revisit the specific operational problem of online delay prediction: using event data, we show that queue mining yields accurate online predictions of case delay.", "title": "" }, { "docid": "6965b52a011bc47eb302d7602dd8bcba", "text": "We have developed a simple and expandable procedure for classification and validation of extracellular data based on a probabilistic model of data generation. This approach relies on an empirical characterization of the recording noise. We first use this noise characterization to optimize the clustering of recorded events into putative neurons. As a second step, we use the noise model again to assess the quality of each cluster by comparing the within-cluster variability to that of the noise. 
This second step can be performed independently of the clustering algorithm used, and it provides the user with quantitative as well as visual tests of the quality of the classification.", "title": "" }, { "docid": "899e3e436cdaed9efb66b7c9c296ea90", "text": "Background estimation and foreground segmentation are important steps in many high-level vision tasks. Many existing methods estimate background as a low-rank component and foreground as a sparse matrix without incorporating the structural information. Therefore, these algorithms exhibit degraded performance in the presence of dynamic backgrounds, photometric variations, jitter, shadows, and large occlusions. We observe that these backgrounds often span multiple manifolds. Therefore, constraints that ensure continuity on those manifolds will result in better background estimation. Hence, we propose to incorporate the spatial and temporal sparse subspace clustering into the robust principal component analysis (RPCA) framework. To that end, we compute a spatial and temporal graph for a given sequence using motion-aware correlation coefficient. The information captured by both graphs is utilized by estimating the proximity matrices using both the normalized Euclidean and geodesic distances. The low-rank component must be able to efficiently partition the spatiotemporal graphs using these Laplacian matrices. Embedded with the RPCA objective function, these Laplacian matrices constrain the background model to be spatially and temporally consistent, both on linear and nonlinear manifolds. The solution of the proposed objective function is computed by using the linearized alternating direction method with adaptive penalty optimization scheme. Experiments are performed on challenging sequences from five publicly available datasets and are compared with the 23 existing state-of-the-art methods. The results demonstrate excellent performance of the proposed algorithm for both the background estimation and foreground segmentation.", "title": "" }, { "docid": "edc97560247ca1a6270c957de44217c4", "text": "Fuzzing is a well-known black-box approach to the security testing of applications. Fuzzing has many advantages in terms of simplicity and effectiveness over more complex, expensive testing approaches. Unfortunately, current fuzzing tools suffer from a number of limitations, and, in particular, they provide little support for the fuzzing of stateful protocols. In this paper, we present SNOOZE, a tool for building flexible, securityoriented, network protocol fuzzers. SNOOZE implements a stateful fuzzing approach that can be used to effectively identify security flaws in network protocol implementations. SNOOZE allows a tester to describe the stateful operation of a protocol and the messages that need to be generated in each state. In addition, SNOOZEprovides attack-specific fuzzing primitives that allow a tester to focus on specific vulnerability classes. We used an initial prototype of the SNOOZE tool to test programs that implement the SIP protocol, with promising results. SNOOZE supported the creation of sophisticated fuzzing scenarios that were able to expose realworld bugs in the programs analyzed.", "title": "" }, { "docid": "f86eea3192fe3dd8548cec52e53553e0", "text": "Acromioclavicular (AC) joint separations are common injuries of the shoulder girdle, especially in the young and active population. 
Typically the mechanism of this injury is a direct force against the lateral aspect of the adducted shoulder, the magnitude of which affects injury severity. While low-grade injuries are frequently managed successfully using non-surgical measures, high-grade injuries frequently warrant surgical intervention to minimize pain and maximize shoulder function. Factors such as duration of injury and activity level should also be taken into account in an effort to individualize each patient's treatment. A number of surgical techniques have been introduced to manage symptomatic, high-grade injuries. The purpose of this article is to review the important anatomy, biomechanical background, and clinical management of this entity.", "title": "" }, { "docid": "0f325e4fe9faf6c43a68ea2721b85f58", "text": "Prosopis juliflora is characterized by distinct and profuse growth even in nutritionally poor soil and environmentally stressed conditions and is believed to harbor some novel heavy metal-resistant bacteria in the rhizosphere and endosphere. This study was performed to isolate and characterize Cr-resistant bacteria from the rhizosphere and endosphere of P. juliflora growing on the tannery effluent contaminated soil. A total of 5 and 21 bacterial strains were isolated from the rhizosphere and endosphere, respectively, and were shown to tolerate Cr up to 3000 mg l(-1). These isolates also exhibited tolerance to other toxic heavy metals such as, Cd, Cu, Pb, and Zn, and high concentration (174 g l(-1)) of NaCl. Moreover, most of the isolated bacterial strains showed one or more plant growth-promoting activities. The phylogenetic analysis of the 16S rRNA gene showed that the predominant species included Bacillus, Staphylococcus and Aerococcus. As far as we know, this is the first report analyzing rhizo- and endophytic bacterial communities associated with P. juliflora growing on the tannery effluent contaminated soil. The inoculation of three isolates to ryegrass (Lolium multiflorum L.) improved plant growth and heavy metal removal from the tannery effluent contaminated soil suggesting that these bacteria could enhance the establishment of the plant in contaminated soil and also improve the efficiency of phytoremediation of heavy metal-degraded soils.", "title": "" }, { "docid": "cbdb038d8217ec2e0c4174519d6f2012", "text": "Many information retrieval algorithms rely on the notion of a good distance that allows to efficiently compare objects of different nature. Recently, a new promising metric called Word Mover’s Distance was proposed to measure the divergence between text passages. In this paper, we demonstrate that this metric can be extended to incorporate term-weighting schemes and provide more accurate and computationally efficient matching between documents using entropic regularization. We evaluate the benefits of both extensions in the task of cross-lingual document retrieval (CLDR). Our experimental results on eight CLDR problems suggest that the proposed methods achieve remarkable improvements in terms of Mean Reciprocal Rank compared to several baselines.", "title": "" }, { "docid": "2b74640b9f95e1004ffa10979946a4e6", "text": "A generic framework for the automated classification of human movements using an accelerometry monitoring system is introduced. The framework was structured around a binary decision tree in which movements were divided into classes and subclasses at different hierarchical levels. 
General distinctions between movements were applied in the top levels, and successively more detailed subclassifications were made in the lower levels of the tree. The structure was modular and flexible: parts of the tree could be reordered, pruned or extended, without the remainder of the tree being affected. This framework was used to develop a classifier to identify basic movements from the signals obtained from a single, waist-mounted triaxial accelerometer. The movements were first divided into activity and rest. The activities were classified as falls, walking, transition between postural orientations, or other movement. The postural orientations during rest were classified as sitting, standing or lying. In controlled laboratory studies in which 26 normal, healthy subjects carried out a set of basic movements, the sensitivity of every classification exceeded 87%, and the specificity exceeded 94%; the overall accuracy of the system, measured as the number of correct classifications across all levels of the hierarchy, was a sensitivity of 97.7% and a specificity of 98.7% over a data set of 1309 movements.", "title": "" }, { "docid": "8be72e103853aeac601aa65b61b98fd2", "text": "Opinion surveys usually employ multiple items to measure the respondent’s underlying value, belief, or attitude. To analyze such types of data, researchers have often followed a two-step approach by first constructing a composite measure and then using it in subsequent analysis. This paper presents a class of hierarchical item response models that help integrate measurement and analysis. In this approach, individual responses to multiple items stem from a latent preference, of which both the mean and variance may depend on observed covariates. Compared with the two-step approach, the hierarchical approach reduces bias, increases efficiency, and facilitates direct comparison across surveys covering different sets of items. Moreover, it enables us to investigate not only how preferences differ among groups, vary across regions, and evolve over time, but also levels, patterns, and trends of attitude polarization and ideological constraint. An open-source R package, hIRT, is available for fitting the proposed models. ∗Direct all correspondence to Xiang Zhou, Department of Government, Harvard University, 1737 Cambridge Street, Cambridge, MA 02138, USA; email: xiang [email protected]. The author thanks Kenneth Bollen, Bryce Corrigan, Ryan Enos, Max Goplerud, Gary King, Jonathan Kropko, Horacio Larreguy, Jie Lv, Christoph Mikulaschek, Barum Park, Pia Raffler, Yunkyu Sohn, Yu-Sung Su, Dustin Tingley, Yuhua Wang, Yu Xie, and Kazuo Yamaguchi for helpful comments on previous versions of this work.", "title": "" }, { "docid": "8717ccb9a12b4532aca5a747a3aeeeb2", "text": "The diaphragm is the primary muscle involved in active inspiration and serves also as an important anatomical landmark that separates the thoracic and abdominal cavity. However, the diaphragm muscle like other structures and organs in the human body has more than one function, and displays many anatomic links throughout the body, thereby forming a 'network of breathing'. Besides respiratory function, it is important for postural control as it stabilises the lumbar spine during loading tasks. It also plays a vital role in the vascular and lymphatic systems, as well as, is greatly involved in gastroesophageal functions such as swallowing, vomiting, and contributing to the gastroesophageal reflux barrier. 
In this paper we set out in detail the anatomy and embryology of the diaphragm and attempt to show it serves as both: an important exchange point of information, originating in different areas of the body, and a source of information in itself. The study also discusses all of its functions related to breathing.", "title": "" }, { "docid": "41eec7ed2d93fb415dfd197933975028", "text": "Open Information Extraction (OIE) is a recent unsupervised strategy to extract great amounts of basic propositions (verb-based triples) from massive text corpora which scales to Web-size document collections. We propose a multilingual rule-based OIE method that takes as input dependency parses in the CoNLL-X format, identifies argument structures within the dependency parses, and extracts a set of basic propositions from each argument structure. Our method requires no training data and, according to experimental studies, obtains higher recall and higher precision than existing approaches relying on training data. Experiments were performed in three languages: English, Portuguese, and Spanish.", "title": "" }, { "docid": "7143c97b6ea484566f521e36a3fa834e", "text": "To determine the reliability and concurrent validity of a visual analogue scale (VAS) for disability as a single-item instrument measuring disability in chronic pain patients was the objective of the study. For the reliability study a test-retest design and for the validity study a cross-sectional design was used. A general rehabilitation centre and a university rehabilitation centre was the setting for the study. The study population consisted of patients over 18 years of age, suffering from chronic musculoskeletal pain; 52 patients in the reliability study, 344 patients in the validity study. Main outcome measures were as follows. Reliability study: Spearman's correlation coefficients (rho values) of the test and retest data of the VAS for disability; validity study: rho values of the VAS disability scores with the scores on four domains of the Short-Form Health Survey (SF-36) and VAS pain scores, and with Roland-Morris Disability Questionnaire scores in chronic low back pain patients. Results were as follows: in the reliability study rho values varied from 0.60 to 0.77; and in the validity study rho values of VAS disability scores with SF-36 domain scores varied from 0.16 to 0.51, with Roland-Morris Disability Questionnaire scores from 0.38 to 0.43 and with VAS pain scores from 0.76 to 0.84. The conclusion of the study was that the reliability of the VAS for disability is moderate to good. Because of a weak correlation with other disability instruments and a strong correlation with the VAS for pain, however, its validity is questionable.", "title": "" }, { "docid": "d5233cdbe0044f2296be6136f459edcf", "text": "Road detection is one of the key issues of scene understanding for Advanced Driving Assistance Systems (ADAS). Recent approaches has addressed this issue through the use of different kinds of sensors, features and algorithms. KITTI-ROAD benchmark has provided an open-access dataset and standard evaluation mean for road area detection. In this paper, we propose an improved road detection algorithm that provides a pixel-level confidence map. The proposed approach is inspired from our former work based on road feature extraction using illuminant intrinsic image and plane extraction from v-disparity map segmentation. In the former research, detection results of road area are represented by binary map. 
The novelty of this improved algorithm is to introduce likelihood theory to build a confidence map of road detection. Such a strategy copes better with ambiguous environments, compared to a simple binary map. Evaluations and comparisons of both, binary map and confidence map, have been done using the KITTI-ROAD benchmark.", "title": "" }, { "docid": "bcf69b1d42d28b8ba66b133ad6421cc4", "text": "This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1", "title": "" } ]
scidocsrr
ed78a3c5e53296840b1447cdde40fd47
Smart City Services over a Future Internet Platform Based on Internet of Things and Cloud : The Smart Parking Case
[ { "docid": "0521f79f13cdbe05867b5db733feac16", "text": "This conceptual paper discusses how we can consider a particular city as a smart one, drawing on recent practices to make cities smart. A set of the common multidimensional components underlying the smart city concept and the core factors for a successful smart city initiative is identified by exploring current working definitions of smart city and a diversity of various conceptual relatives similar to smart city. The paper offers strategic principles aligning to the three main dimensions (technology, people, and institutions) of smart city: integration of infrastructures and technology-mediated services, social learning for strengthening human infrastructure, and governance for institutional improvement and citizen engagement.", "title": "" } ]
[ { "docid": "fc90741cd456d23335407e095a14e88a", "text": "Mobility of a hexarotor UAV in its standard configuration is limited, since all the propeller force vectors are parallel and they achieve only 4-DoF actuation, similar, e.g., to quadrotors. As a consequence, the hexarotor pose cannot track an arbitrary trajectory while the center of mass is tracking a position trajectory. In this paper, we consider a different hexarotor architecture where propellers are tilted, without the need of any additional hardware. In this way, the hexarotor gains a 6-DoF actuation which allows to independently reach positions and orientations in free space and to be able to exert forces on the environment to resist any wrench for aerial manipulation tasks. After deriving the dynamical model of the proposed hexarotor, we discuss the controllability and the tilt angle optimization to reduce the control effort for the specific task. An exact feedback linearization and decoupling control law is proposed based on the input-output mapping, considering the Jacobian and task acceleration, for non-linear trajectory tracking. The capabilities of our approach are shown by simulation results.", "title": "" }, { "docid": "c19d408eeed287d2e6f83fd98460966c", "text": "The statistical modelling of language, together with advances in wide-coverage grammar development, have led to high levels of robustness and efficiency in NLP systems and made linguistically motivated large-scale language processing a possibility (Matsuzaki et al., 2007; Kaplan et al., 2004). This paper describes an NLP system which is based on syntactic and semantic formalisms from theoretical linguistics, and which we have used to analyse the entire Gigaword corpus (1 billion words) in less than 5 days using only 18 processors. This combination of detail and speed of analysis represents a breakthrough in NLP technology. The system is built around a wide-coverage Combinatory Categorial Grammar (CCG) parser (Clark and Curran, 2004b). The parser not only recovers the local dependencies output by treebank parsers such as Collins (2003), but also the long-range depdendencies inherent in constructions such as extraction and coordination. CCG is a lexicalized grammar formalism, so that each word in a sentence is assigned an elementary syntactic structure, in CCG’s case a lexical category expressing subcategorisation information. Statistical tagging techniques can assign lexical categories with high accuracy and low ambiguity (Curran et al., 2006). The combination of finite-state supertagging and highly engineered C++ leads to a parser which can analyse up to 30 sentences per second on standard hardware (Clark and Curran, 2004a). The C&C tools also contain a number of Maximum Entropy taggers, including the CCG supertagger, a POS tagger (Curran and Clark, 2003a), chunker, and named entity recogniser (Curran and Clark, 2003b). The taggers are highly efficient, with processing speeds of over 100,000 words per second. Finally, the various components, including the morphological analyser morpha (Minnen et al., 2001), are combined into a single program. The output from this program — a CCG derivation, POS tags, lemmas, and named entity tags — is used by the module Boxer (Bos, 2005) to produce interpretable structure in the form of Discourse Representation Structures (DRSs).", "title": "" }, { "docid": "7fa9bacbb6b08065ecfe0530f082a391", "text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. 
We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.", "title": "" }, { "docid": "a9cc523da8b5348dede4765a6eb9e290", "text": "Recommender systems are efficient tools that overcome the information overload problem by providing users with the most relevant contents. This is generally done through user’s preferences/ratings acquired from log files of his former sessions. Besides these preferences, taking into account the interaction context of the user will improve the relevancy of recommendation process. In this paper, we propose a context-aware recommender system based on both user profile and context. The approach we present is based on a previous work on data personalization which leads to the definition of a Personalized Access Model that provides a set of personalization services. We show how these services can be deployed in order to provide advanced context-aware recommender systems.", "title": "" }, { "docid": "9f1d881193369f1b7417d71a9a62bc19", "text": "Neurofeedback (NFB) is a potential alternative treatment for children with ADHD that aims to optimize brain activity. Whereas most studies into NFB have investigated behavioral effects, less attention has been paid to the effects on neurocognitive functioning. The present randomized controlled trial (RCT) compared neurocognitive effects of NFB to (1) optimally titrated methylphenidate (MPH) and (2) a semi-active control intervention, physical activity (PA), to control for non-specific effects. Using a multicentre three-way parallel group RCT design, children with ADHD, aged 7–13, were randomly allocated to NFB (n = 39), MPH (n = 36) or PA (n = 37) over a period of 10–12 weeks. NFB comprised theta/beta training at CZ. The PA intervention was matched in frequency and duration to NFB. MPH was titrated using a double-blind placebo controlled procedure to determine the optimal dose. Neurocognitive functioning was assessed using parameters derived from the auditory oddball-, stop-signal- and visual spatial working memory task. Data collection took place between September 2010 and March 2014. Intention-to-treat analyses showed improved attention for MPH compared to NFB and PA, as reflected by decreased response speed during the oddball task [η p 2  = 0.21, p < 0.001], as well as improved inhibition, impulsivity and attention, as reflected by faster stop signal reaction times, lower commission and omission error rates during the stop-signal task (range η p 2  = 0.09–0.18, p values <0.008). Working memory improved over time, irrespective of received treatment (η p 2  = 0.17, p < 0.001). Overall, stimulant medication showed superior effects over NFB to improve neurocognitive functioning. 
Hence, the findings do not support theta/beta training applied as a stand-alone treatment in children with ADHD.", "title": "" }, { "docid": "bbd9f9608409f7fa58d8cdbd8aa93989", "text": "Competence-based theories of island effects play a central role in generative grammar, yet the graded nature of many syntactic islands has never been properly accounted for. Categorical syntactic accounts of island effects have persisted in spite of a wealth of data suggesting that island effects are not categorical in nature and that non-structural manipulations that leave island structures intact can radically alter judgments of island violations. We argue here, building on work by Deane, Kluender, and others, that processing factors have the potential to account for this otherwise unexplained variation in acceptability judgments.We report the results of self-paced reading experiments and controlled acceptability studies which explore the relationship between processing costs and judgments of acceptability. In each of the three self-paced reading studies, the data indicate that the processing cost of different types of island violations can be significantly reduced to a degree comparable to that of non-island filler-gap constructions by manipulating a single non-structural factor. Moreover, this reduction in processing cost is accompanied by significant improvements in acceptability. This evidence favors the hypothesis that island-violating constructions involve numerous processing pressures that aggregate to drive processing difficulty above a threshold so that a perception of unacceptability ensues. We examine the implications of these findings for the grammar of filler-gap dependencies.", "title": "" }, { "docid": "08804b3859d70c6212bba05c7e792f9a", "text": "Both linear mixed models (LMMs) and sparse regression models are widely used in genetics applications, including, recently, polygenic modeling in genome-wide association studies. These two approaches make very different assumptions, so are expected to perform well in different situations. However, in practice, for a given dataset one typically does not know which assumptions will be more accurate. Motivated by this, we consider a hybrid of the two, which we refer to as a \"Bayesian sparse linear mixed model\" (BSLMM) that includes both these models as special cases. We address several key computational and statistical issues that arise when applying BSLMM, including appropriate prior specification for the hyper-parameters and a novel Markov chain Monte Carlo algorithm for posterior inference. We apply BSLMM and compare it with other methods for two polygenic modeling applications: estimating the proportion of variance in phenotypes explained (PVE) by available genotypes, and phenotype (or breeding value) prediction. For PVE estimation, we demonstrate that BSLMM combines the advantages of both standard LMMs and sparse regression modeling. For phenotype prediction it considerably outperforms either of the other two methods, as well as several other large-scale regression methods previously suggested for this problem. Software implementing our method is freely available from http://stephenslab.uchicago.edu/software.html.", "title": "" }, { "docid": "94fd7030e7b638e02ca89f04d8ae2fff", "text": "State-of-the-art deep learning algorithms generally require large amounts of data for model training. Lack thereof can severely deteriorate the performance, particularly in scenarios with fine-grained boundaries between categories. 
To this end, we propose a multimodal approach that facilitates bridging the information gap by means of meaningful joint embeddings. Specifically, we present a benchmark that is multimodal during training (i.e. images and texts) and single-modal in testing time (i.e. images), with the associated task to utilize multimodal data in base classes (with many samples), to learn explicit visual classifiers for novel classes (with few samples). Next, we propose a framework built upon the idea of cross-modal data hallucination. In this regard, we introduce a discriminative text-conditional GAN for sample generation with a simple self-paced strategy for sample selection. We show the results of our proposed discriminative hallucinated method for 1-, 2-, and 5shot learning on the CUB dataset, where the accuracy is improved by employing multimodal data.", "title": "" }, { "docid": "11eba1f4575e548ffb0e557e9aee1bbe", "text": "Compressive sensing (CS) is an effective approach for fast Magnetic Resonance Imaging (MRI). It aims at reconstructing MR images from a small number of under-sampled data in k-space, and accelerating the data acquisition in MRI. To improve the current MRI system in reconstruction accuracy and speed, in this paper, we propose two novel deep architectures, dubbed ADMM-Nets in basic and generalized versions. ADMM-Nets are defined over data flow graphs, which are derived from the iterative procedures in Alternating Direction Method of Multipliers (ADMM) algorithm for optimizing a general CS-based MRI model. They take the sampled k-space data as inputs and output reconstructed MR images. Moreover, we extend our network to cope with complex-valued MR images. In the training phase, all parameters of the nets, e.g., transforms, shrinkage functions, etc., are discriminatively trained end-to-end. In the testing phase, they have computational overhead similar to ADMM algorithm but use optimized parameters learned from the data for CS-based reconstruction task. We investigate different configurations in network structures and conduct extensive experiments on MR image reconstruction under different sampling rates. Due to the combination of the advantages in model-based approach and deep learning approach, the ADMM-Nets achieve state-of-the-art reconstruction accuracies with fast computational speed.", "title": "" }, { "docid": "b640ed2bd02ba74ee0eb925ef6504372", "text": "In the discussion about Future Internet, Software-Defined Networking (SDN), enabled by OpenFlow, is currently seen as one of the most promising paradigm. While the availability and scalability concerns rises as a single controller could be alleviated by using replicate or distributed controllers, there lacks a flexible mechanism to allow controller load balancing. This paper proposes BalanceFlow, a controller load balancing architecture for OpenFlow networks. By utilizing CONTROLLER X action extension for OpenFlow switches and cross-controller communication, one of the controllers, called “super controller”, can flexibly tune the flow-requests handled by each controller, without introducing unacceptable propagation latencies. Experiments based on real topology show that BalanceFlow can adjust the load of each controller dynamically.", "title": "" }, { "docid": "c716e7dc1c0e770001bcb57eab871968", "text": "We present a new method to visualize from an ensemble of flow fields the statistical properties of streamlines passing through a selected location. 
We use principal component analysis to transform the set of streamlines into a low-dimensional Euclidean space. In this space the streamlines are clustered into major trends, and each cluster is in turn approximated by a multivariate Gaussian distribution. This yields a probabilistic mixture model for the streamline distribution, from which confidence regions can be derived in which the streamlines are most likely to reside. This is achieved by transforming the Gaussian random distributions from the low-dimensional Euclidean space into a streamline distribution that follows the statistical model, and by visualizing confidence regions in this distribution via iso-contours. We further make use of the principal component representation to introduce a new concept of streamline-median, based on existing median concepts in multidimensional Euclidean spaces. We demonstrate the potential of our method in a number of real-world examples, and we compare our results to alternative clustering approaches for particle trajectories as well as curve boxplots.", "title": "" }, { "docid": "b69aae02d366b75914862f5bc726c514", "text": "Nitrification in commercial aquaculture systems has been accomplished using many different technologies (e.g. trickling filters, fluidized beds and rotating biological contactors) but commercial aquaculture systems have been slow to adopt denitrification. Denitrification (conversion of nitrate, NO3 − to nitrogen gas, N2) is essential to the development of commercial, closed, recirculating aquaculture systems (B1 water turnover 100 day). The problems associated with manually operated denitrification systems have been incomplete denitrification (oxidation–reduction potential, ORP\\−200 mV) with the production of nitrite (NO2 ), nitric oxide (NO) and nitrous oxide (N2O) or over-reduction (ORPB−400 mV), resulting in the production of hydrogen sulfide (H2S). The need for an anoxic or anaerobic environment for the denitrifying bacteria can also result in lowered dissolved oxygen (DO) concentrations in the rearing tanks. These problems have now been overcome by the development of a computer automated denitrifying bioreactor specifically designed for aquaculture. The prototype bioreactor (process control version) has been in operation for 4 years and commercial versions of the bioreactor are now in continuous use; these bioreactors can be operated in either batch or continuous on-line modes, maintaining NO3 − concentrations below 5 ppm. The bioreactor monitors DO, ORP, pH and water flow rate and controls water pump rate and carbon feed rate. A fuzzy logic-based expert system replaced the classical process control system for operation of the bioreactor, continuing to optimize denitrification rates and eliminate discharge of toxic by-products (i.e. NO2 , NO, N2O or www.elsevier.nl/locate/aqua-online * Corresponding author. Tel.: +1-409-7722133; fax: +1-409-7726993. E-mail address: [email protected] (P.G. Lee) 0144-8609/00/$ see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0144 -8609 (00 )00046 -7 38 P.G. Lee et al. / Aquacultural Engineering 23 (2000) 37–59 H2S). The fuzzy logic rule base was composed of \\40 fuzzy rules; it took into account the slow response time of the system. The fuzzy logic-based expert system maintained nitrate-nitrogen concentration B5 ppm while avoiding any increase in NO2 or H2S concentrations. © 2000 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "210e26d5d11582be68337a0cc387ab8e", "text": "This paper presents the results of experiments carried out with the goal of applying the machine learning techniques of reinforcement learning and neural networks with reinforcement learning to the game of Tetris. Tetris is a well-known computer game that can be played either by a single player or competitively with slight variations, toward the end of accumulating a high score or defeating the opponent. The fundamental hypothesis of this paper is that if the points earned in Tetris are used as the reward function for a machine learning agent, then that agent should be able to learn to play Tetris without other supervision. Toward this end, a state-space that summarizes the essential feature of the Tetris board is designed, high-level actions are developed to interact with the game, and agents are trained using Q-Learning and neural networks. As a result of these efforts, agents learn to play Tetris and to compete with other players. While the learning agents fail to accumulate as many points as the most advanced AI agents, they do learn to play more efficiently.", "title": "" }, { "docid": "3bf37b20679ca6abd022571e3356e95d", "text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.", "title": "" }, { "docid": "c16470d7aa166ccb2d2724835a0c3370", "text": "Currently, astronomical data have increased in terms of volume and complexity. 
To extract information for analysis and prediction, artificial intelligence techniques are required. This paper aims to apply artificial intelligence techniques to predict M-class solar flares. Artificial neural network, support vector machine and naïve Bayes techniques are compared to determine the technique with the best prediction accuracy. The dataset has been collected from daily data for 16 years, from 1998 to 2013. The attributes consist of solar flare data and sunspot numbers. Sunspots are cooler spots on the surface of the sun, which are related to solar flares. The Java-based machine learning tool WEKA is used for analysis and prediction of solar flares. The best forecasting accuracy is achieved by the artificial neural network method.", "title": "" }, { "docid": "0c9112aeebf0b43b577c2cfd5f121d39", "text": "The fundamental objective behind the present study is to demonstrate the visible effect of Computer-Assisted Instruction upon Iranian EFL learners' reading performance, and to see if it has any impact upon this skill in Iranian EFL educational settings. To this end, a sample of 50 male and female EFL learners was drawn from an English language institute in Iran. After participating in a proficiency pretest, these participants were assigned into two experimental and control groups, 25 and 25, respectively. An independent sample t-test was administered to find out if there were salient differences between the findings of the two selected groups in their reading test. The key research question was to see whether providing learners with computer-assisted instruction during the processes of learning and instruction would have an affirmative influence upon the improvement and development of their reading skill. The results pinpointed that computer-assisted instruction users' performance was meaningfully higher than that of nonusers (DF = 48, P < .05). The consequences revealed that computer-assisted language learning and computer technology application have resulted in a greater promotion of students' reading improvement. In other words, computer-assisted instruction users outperformed the nonusers. The research, therefore, highlights the conclusion that EFL learners' use of computer-assisted instruction has the potential to promote more effective reading ability. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "98cc82852083eae53d06621f37cde9e5", "text": "Automatically recognizing a large number of action categories from videos is of significant importance for video understanding. Most existing works focused on the design of more discriminative feature representation, and have achieved promising results when the positive samples are enough. However, very limited efforts were spent on recognizing a novel action without any positive exemplars, which is often the case in real settings due to the large number of action classes and the dramatic variations of users' queries. To address this issue, we propose to perform action recognition when no positive exemplars of that class are provided, which is often known as zero-shot learning. Different from other zero-shot learning approaches, which exploit attributes as the intermediate layer for the knowledge transfer, our main contribution is SIR, which directly leverages the semantic inter-class relationships between the known and unknown actions followed by label transfer learning.
The inter-class semantic relationships are automatically measured by continuous word vectors, which learned by the skip-gram model using the large-scale text corpus. Extensive experiments on the UCF101 dataset validate the superiority of our method over fully-supervised approaches using few positive exemplars.", "title": "" }, { "docid": "19f4de5f01f212bf146087d4695ce15e", "text": "Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms. In comparison with existing VO and V-SLAM algorithms, semi-direct visual odometry (SVO) has two main advantages that lead to stateof-the-art frame rate camera motion estimation: direct pixel correspondence and efficient implementation of probabilistic mapping method. This paper improves the SVO mapping by initializing the mean and the variance of the depth at a feature location according to the depth prediction from a singleimage depth prediction network. By significantly reducing the depth uncertainty of the initialized map point (i.e., small variance centred about the depth prediction), the benefits are twofold: reliable feature correspondence between views and fast convergence to the true depth in order to create new map points. We evaluate our method with two outdoor datasets: KITTI dataset and Oxford Robotcar dataset. The experimental results indicate that the improved SVO mapping results in increased robustness and camera tracking accuracy.", "title": "" } ]
scidocsrr
7d4d1560fd706b595b9a32da96c69a05
Wireless Sensor and Networking Technologies for Swarms of Aquatic Surface Drones
[ { "docid": "3cb6ba4a950868c1d912b44b77b264be", "text": "With the popularity of winter tourism, the winter recreation activities have been increased day by day in alpine environments. However, large numbers of people and rescuers are injured and lost in this environment due to the avalanche accidents every year. Drone-based rescue systems are envisioned as a viable solution for saving lives in this hostile environment. To this aim, a European project named “Smart collaboration between Humans and ground-aErial Robots for imProving rescuing activities in Alpine environments (SHERPA)” has been launched with the objective to develop a mixed ground and aerial drone platform to support search and rescue activities in a real-world hostile scenarios. In this paper, we study the challenges of existing wireless technologies for enabling drone wireless communications in alpine environment. We extensively discuss about the positive and negative aspects of the standards according to the SHERPA network requirements. Then based on that, we choose Worldwide interoperability for Microwave Access network (WiMAX) as a suitable technology for drone communications in this environment. Finally, we present a brief discussion about the use of operating band for WiMAX and the implementation issues of SHERPA network. The outcomes of this research assist to achieve the goal of the SHERPA project.", "title": "" } ]
[ { "docid": "9973dab94e708f3b87d52c24b8e18672", "text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.", "title": "" }, { "docid": "83d330486c50fe2ae1d6960a4933f546", "text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.", "title": "" }, { "docid": "a9de29e1d8062b4950e5ab3af6bea8df", "text": "Asserts have long been a strongly recommended (if non-functional) adjunct to programs. They certainly don't add any user-evident feature value; and it can take quite some skill and effort to devise and add useful asserts. However, they are believed to add considerable value to the developer. Certainly, they can help with automated verification; but even in the absence of that, claimed advantages include improved understandability, maintainability, easier fault localization and diagnosis, all eventually leading to better software quality. We focus on this latter claim, and use a large dataset of asserts in C and C++ programs to explore the connection between asserts and defect occurrence. Our data suggests a connection: functions with asserts do have significantly fewer defects. This indicates that asserts do play an important role in software quality; we therefore explored further the factors that play a role in assertion placement: specifically, process factors (such as developer experience and ownership) and product factors, particularly interprocedural factors, exploring how the placement of assertions in functions are influenced by local and global network properties of the callgraph. Finally, we also conduct a differential analysis of assertion use across different application domains.", "title": "" }, { "docid": "bf23473b7fe711e9dce9487c7df5b624", "text": "A focus on population health management is a necessary ingredient for success under value-based payment models. As part of that effort, nine ways to embrace technology can help healthcare organizations improve population health, enhance the patient experience, and reduce costs: Use predictive analytics for risk stratification. Combine predictive modeling with algorithms for financial risk management. Use population registries to identify care gaps. 
Use automated messaging for patient outreach. Engage patients with automated alerts and educational campaigns. Automate care management tasks. Build programs and organize clinicians into care teams. Apply new technologies effectively. Use analytics to measure performance of organizations and providers.", "title": "" }, { "docid": "b1d61ca503702f950ef1275b904850e7", "text": "Prior research has demonstrated a clear relationship between experiences of racial microaggressions and various indicators of psychological unwellness. One concern with these findings is that the role of negative affectivity, considered a marker of neuroticism, has not been considered. Negative affectivity has previously been correlated to experiences of racial discrimination and psychological unwellness and has been suggested as a cause of the observed relationship between microaggressions and psychopathology. We examined the relationships between self-reported frequency of experiences of microaggressions and several mental health outcomes (i.e., anxiety [Beck Anxiety Inventory], stress [General Ethnic and Discrimination Scale], and trauma symptoms [Trauma Symptoms of Discrimination Scale]) in 177 African American and European American college students, controlling for negative affectivity (the Positive and Negative Affect Schedule) and gender. Results indicated that African Americans experience more racial discrimination than European Americans. Negative affectivity in African Americans appears to be significantly related to some but not all perceptions of the experience of discrimination. A strong relationship between racial mistreatment and symptoms of psychopathology was evident, even after controlling for negative affectivity. In summary, African Americans experience clinically measurable anxiety, stress, and trauma symptoms as a result of racial mistreatment, which cannot be wholly explained by individual differences in negative affectivity. Future work should examine additional factors in these relationships, and targeted interventions should be developed to help those suffering as a result of racial mistreatment and to reduce microaggressions.", "title": "" }, { "docid": "65b843c30f69d33fa0c9aedd742e3434", "text": "The computational study of complex systems increasingly requires model integration. The drivers include a growing interest in leveraging accepted legacy models, an intensifying pressure to reduce development costs by reusing models, and expanding user requirements that are best met by combining different modeling methods. There have been many published successes including supporting theory, conceptual frameworks, software tools, and case studies. Nonetheless, on an empirical basis, the published work suggests that correctly specifying model integration strategies remains challenging. This naturally raises a question that has not yet been answered in the literature, namely 'what is the computational difficulty of model integration?' 
This paper's contribution is to address this question with a time and space complexity analysis that concludes that deep model integration with proven correctness is both NP-complete and PSPACE-complete and that reducing this complexity requires sacrificing correctness proofs in favor of guidance from both subject matter experts and modeling specialists.", "title": "" }, { "docid": "08e02afe2ef02fc9c8fff91cf7a70553", "text": "Matrix factorization is a fundamental technique in machine learning that is applicable to collaborative filtering, information retrieval and many other areas. In collaborative filtering and many other tasks, the objective is to fill in missing elements of a sparse data matrix. One of the biggest challenges in this case is filling in a column or row of the matrix with very few observations. In this paper we introduce a Bayesian matrix factorization model that performs regression against side information known about the data in addition to the observations. The side information helps by adding observed entries to the factored matrices. We also introduce a nonparametric mixture model for the prior of the rows and columns of the factored matrices that gives a different regularization for each latent class. Besides providing a richer prior, the posterior distribution of mixture assignments reveals the latent classes. Using Gibbs sampling for inference, we apply our model to the Netflix Prize problem of predicting movie ratings given an incomplete user-movie ratings matrix. Incorporating rating information with gathered metadata information, our Bayesian approach outperforms other matrix factorization techniques even when using fewer dimensions.", "title": "" }, { "docid": "c0315ef3bcc21723131d9b2687a5d5d1", "text": "Network covert timing channels embed secret messages in legitimate packets by modulating interpacket delays. Unfortunately, such channels are normally implemented in higher network layers (layer 3 or above) and easily detected or prevented. However, access to the physical layer of a network stack allows for timing channels that are virtually invisible: Sub-microsecond modulations that are undetectable by software endhosts. Therefore, covert timing channels implemented in the physical layer can be a serious threat to the security of a system or a network. In fact, we empirically demonstrate an effective covert timing channel over nine routing hops and thousands of miles over the Internet (the National Lambda Rail). Our covert timing channel works with cross traffic, less than 10% bit error rate, which can be masked by forward error correction, and a covert rate of 81 kilobits per second. Key to our approach is access and control over every bit in the physical layer of a 10 Gigabit network stack (a bit is 100 picoseconds wide at 10 gigabit per seconds), which allows us to modulate and interpret interpacket spacings at sub-microsecond scale. We discuss when and how a timing channel in the physical layer works, how hard it is to detect such a channel, and what is required to do so.", "title": "" }, { "docid": "757cb3e9b279f71cb0a9ff5b80c5f4ba", "text": "When it comes to workplace preferences, Generation Y workers closely resemble Baby Boomers. Because these two huge cohorts now coexist in the workforce, their shared values will hold sway in the companies that hire them. The authors, from the Center for Work-Life Policy, conducted two large-scale surveys that reveal those values. 
Gen Ys and Boomers are eager to contribute to positive social change, and they seek out workplaces where they can do that. They expect flexibility and the option to work remotely, but they also want to connect deeply with colleagues. They believe in employer loyalty but desire to embark on learning odysseys. Innovative firms are responding by crafting reward packages that benefit both generations of workers--and their employers.", "title": "" }, { "docid": "21a2347f9bb5b5638d63239b37c9d0e6", "text": "This paper presents new circuits for realizing both current-mode and voltage-mode proportional-integralderivative (PID), proportional-derivative (PD) and proportional-integral (PI) controllers employing secondgeneration current conveyors (CCIIs) as active elements. All of the proposed PID, PI and PD controllers have grounded passive elements and adjustable parameters. The controllers employ reduced number of active and passive components with respect to the traditional op-amp-based PID, PI and PD controllers. A closed loop control system using the proposed PID controller is designed and simulated with SPICE.", "title": "" }, { "docid": "cbe947b169331c8bb41c7fae2a8d0647", "text": "In spite of high levels of poverty in low and middle income countries (LMIC), and the high burden posed by common mental disorders (CMD), it is only in the last two decades that research has emerged that empirically addresses the relationship between poverty and CMD in these countries. We conducted a systematic review of the epidemiological literature in LMIC, with the aim of examining this relationship. Of 115 studies that were reviewed, most reported positive associations between a range of poverty indicators and CMD. In community-based studies, 73% and 79% of studies reported positive associations between a variety of poverty measures and CMD, 19% and 15% reported null associations and 8% and 6% reported negative associations, using bivariate and multivariate analyses respectively. However, closer examination of specific poverty dimensions revealed a complex picture, in which there was substantial variation between these dimensions. While variables such as education, food insecurity, housing, social class, socio-economic status and financial stress exhibit a relatively consistent and strong association with CMD, others such as income, employment and particularly consumption are more equivocal. There are several measurement and population factors that may explain variation in the strength of the relationship between poverty and CMD. By presenting a systematic review of the literature, this paper attempts to shift the debate from questions about whether poverty is associated with CMD in LMIC, to questions about which particular dimensions of poverty carry the strongest (or weakest) association. The relatively consistent association between CMD and a variety of poverty dimensions in LMIC serves to strengthen the case for the inclusion of mental health on the agenda of development agencies and in international targets such as the millenium development goals.", "title": "" }, { "docid": "c98e8abd72ba30e0d2cb2b7d146a3d13", "text": "Process mining techniques help organizations discover and analyze business processes based on raw event data. The recently released \"Process Mining Manifesto\" presents guiding principles and challenges for process mining. 
Here, the authors summarize the manifesto's main points and argue that analysts should take into account the context in which events occur when analyzing processes.", "title": "" }, { "docid": "1ef2bb601d91d77287d3517c73b453fe", "text": "Proteins from silver-stained gels can be digested enzymatically and the resulting peptide analyzed and sequenced by mass spectrometry. Standard proteins yield the same peptide maps when extracted from Coomassie- and silver-stained gels, as judged by electrospray and MALDI mass spectrometry. The low nanogram range can be reached by the protocols described here, and the method is robust. A silver-stained one-dimensional gel of a fraction from yeast proteins was analyzed by nano-electrospray tandem mass spectrometry. In the sequencing, more than 1000 amino acids were covered, resulting in no evidence of chemical modifications due to the silver staining procedure. Silver staining allows a substantial shortening of sample preparation time and may, therefore, be preferable over Coomassie staining. This work removes a major obstacle to the low-level sequence analysis of proteins separated on polyacrylamide gels.", "title": "" }, { "docid": "3f679dbd9047040d63da70fc9e977a99", "text": "In this paper we consider videos (e.g. Hollywood movies) and their accompanying natural language descriptions in the form of narrative sentences (e.g. movie scripts without timestamps). We propose a method for temporally aligning the video frames with the sentences using both visual and textual information, which provides automatic timestamps for each narrative sentence. We compute the similarity between both types of information using vectorial descriptors and propose to cast this alignment task as a matching problem that we solve via dynamic programming. Our approach is simple to implement, highly efficient and does not require the presence of frequent dialogues, subtitles, and character face recognition. Experiments on various movies demonstrate that our method can successfully align the movie script sentences with the video frames of movies.", "title": "" }, { "docid": "2d59fe09633ee41c60e9e951986e56a6", "text": "Face alignment and 3D face reconstruction are traditionally accomplished as separated tasks. By exploring the strong correlation between 2D landmarks and 3D shapes, in contrast, we propose a joint face alignment and 3D face reconstruction method to simultaneously solve these two problems for 2D face images of arbitrary poses and expressions. This method, based on a summation model of 3D face shapes and cascaded regression in 2D and 3D face shape spaces, iteratively and alternately applies two cascaded regressors, one for updating 2D landmarks and the other for 3D face shape. The 3D face shape and the landmarks are correlated via a 3D-to-2D mapping matrix. Unlike existing methods, the proposed method can fully automatically generate both pose-and-expression-normalized (PEN) and expressive 3D face shapes and localize both visible and invisible 2D landmarks. Based on the PEN 3D face shapes, we devise a method to enhance face recognition accuracy across poses and expressions. Both linear and nonlinear implementations of the proposed method are presented and evaluated in this paper. 
Extensive experiments show that the proposed method can achieve the state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face shapes.", "title": "" }, { "docid": "3c514740d7f8ce78f9afbaca92dc3b1c", "text": "In the Brazil nut problem (BNP), hard spheres with larger diameters rise to the top. There are various explanations (percolation, reorganization, convection), but a broad understanding or control of this effect is by no means achieved. A theory is presented for the crossover from BNP to the reverse Brazil nut problem based on a competition between the percolation effect and the condensation of hard spheres. The crossover condition is determined, and theoretical predictions are compared to molecular dynamics simulations in two and three dimensions.", "title": "" }, { "docid": "16d949f6915cbb958cb68a26c6093b6b", "text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.", "title": "" }, { "docid": "179d8daa30a7986c8f345a47eabfb2c8", "text": "A key advantage of taking a statistical approach to spoken dialogue systems is the ability to formalise dialogue policy design as a stochastic optimization problem. However, since dialogue policies are learnt by interactively exploring alternative dialogue paths, conventional static dialogue corpora cannot be used directly for training and instead, a user simulator is commonly used. This paper describes a novel statistical user model based on a compact stack-like state representation called a user agenda which allows state transitions to be modeled as sequences of push- and pop-operations and elegantly encodes the dialogue history from a user's point of view. An expectation-maximisation based algorithm is presented which models the observable user output in terms of a sequence of hidden states and thereby allows the model to be trained on a corpus of minimally annotated data. Experimental results with a real-world dialogue system demonstrate that the trained user model can be successfully used to optimise a dialogue policy which outperforms a hand-crafted baseline in terms of task completion rates and user satisfaction scores.", "title": "" }, { "docid": "d9fe0834ccf80bddadc5927a8199cd2c", "text": "Deep Residual Networks (ResNets) have recently achieved state-of-the-art results on many challenging computer vision tasks. In this work we analyze the role of Batch Normalization (BatchNorm) layers on ResNets in the hope of improving the current architecture and better incorporating other normalization techniques, such as Normalization Propagation (NormProp), into ResNets. 
Firstly, we verify that BatchNorm helps distribute representation learning to residual blocks at all layers, as opposed to a plain ResNet without BatchNorm where learning happens mostly in the latter part of the network. We also observe that BatchNorm well regularizes Concatenated ReLU (CReLU) activation scheme on ResNets, whose magnitude of activation grows by preserving both positive and negative responses when going deeper into the network. Secondly, we investigate the use of NormProp as a replacement for BatchNorm in ResNets. Though NormProp theoretically attains the same effect as BatchNorm on generic convolutional neural networks, the identity mapping of ResNets invalidates its theoretical promise and NormProp exhibits a significant performance drop when naively applied. To bridge the gap between BatchNorm and NormProp in ResNets, we propose a simple modification to NormProp and employ the CReLU activation scheme. We experiment on visual object recognition benchmark datasets such as CIFAR10/100 and ImageNet and demonstrate that 1) the modified NormProp performs better than the original NormProp but is still not comparable to BatchNorm and 2) CReLU improves the performance of ResNets with or without normalizations.", "title": "" }, { "docid": "be9b40cc2e2340249584f7324e26c4d3", "text": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.", "title": "" } ]
scidocsrr
08e47c7470974c8abbc87fc2e85753a8
CloudSimDisk: Energy-Aware Storage Simulation in CloudSim
[ { "docid": "7d53fcce145badeeaeff55b5299010b9", "text": "Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regards to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of a software utilized by end users. Second, we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.", "title": "" } ]
[ { "docid": "83a13b090260a464064a3c884a75ad91", "text": "While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending to generate unsupervised sentences or documents embeddings. Recent work has demonstrated that a distance measure between documents called Word Mover’s Distance (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is expensive to compute, and it is hard to extend its use beyond a KNN classifier. In this paper, we propose the Word Mover’s Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. In our experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.", "title": "" }, { "docid": "266f636d13f406ecbacf8ed8443b2b5c", "text": "This review examines the most frequently cited sociological theories of crime and delinquency. The major theoretical perspectives are presented, beginning with anomie theory and the theories associated with the Chicago School of Sociology. They are followed by theories of strain, social control, opportunity, conflict, and developmental life course. The review concludes with a conceptual map featuring the inter-relationships and contexts of the major theoretical perspectives.", "title": "" }, { "docid": "a95761b5a67a07d02547c542ddc7e677", "text": "This paper examines the connection between the legal environment and financial development, and then traces this link through to long-run economic growth. Countries with legal and regulatory systems that (1) give a high priority to creditors receiving the full present value of their claims on corporations, (2) enforce contracts effectively, and (3) promote comprehensive and accurate financial reporting by corporations have better-developed financial intermediaries. The data also indicate that the exogenous component of financial intermediary development – the component of financial intermediary development defined by the legal and regulatory environment – is positively associated with economic growth. * Department of Economics, 114 Rouss Hall, University of Virginia, Charlottesville, VA 22903-3288; [email protected]. I thank Thorsten Beck, Maria Carkovic, Bill Easterly, Lant Pritchett, Andrei Shleifer, and seminar participants at the Board of Governors of the Federal Reserve System, the University of Virginia, and the World Bank for helpful comments.", "title": "" }, { "docid": "9a973833c640e8a9fe77cd7afdae60f2", "text": "Metastasis is a characteristic trait of most tumour types and the cause for the majority of cancer deaths. Many tumour types, including melanoma and breast and prostate cancers, first metastasize via lymphatic vessels to their regional lymph nodes. Although the connection between lymph node metastases and shorter survival times of patients was made decades ago, the active involvement of the lymphatic system in cancer, metastasis has been unravelled only recently, after molecular markers of lymphatic vessels were identified. A growing body of evidence indicates that tumour-induced lymphangiogenesis is a predictive indicator of metastasis to lymph nodes and might also be a target for prevention of metastasis. 
This article reviews the current understanding of lymphangiogenesis in cancer anti-lymphangiogenic strategies for prevention and therapy of metastatic disease, quantification of lymphangiogenesis for the prognosis and diagnosis of metastasis and in vivo imaging technologies for the assessment of lymphatic vessels, drainage and lymph nodes.", "title": "" }, { "docid": "636f5002b3ced8a541df3e0568604f71", "text": "We report density functional theory (M06L) calculations including Poisson-Boltzmann solvation to determine the reaction pathways and barriers for the hydrogen evolution reaction (HER) on MoS2, using both a periodic two-dimensional slab and a Mo10S21 cluster model. We find that the HER mechanism involves protonation of the electron rich molybdenum hydride site (Volmer-Heyrovsky mechanism), leading to a calculated free energy barrier of 17.9 kcal/mol, in good agreement with the barrier of 19.9 kcal/mol estimated from the experimental turnover frequency. Hydronium protonation of the hydride on the Mo site is 21.3 kcal/mol more favorable than protonation of the hydrogen on the S site because the electrons localized on the Mo-H bond are readily transferred to form dihydrogen with hydronium. We predict the Volmer-Tafel mechanism in which hydrogen atoms bound to molybdenum and sulfur sites recombine to form H2 has a barrier of 22.6 kcal/mol. Starting with hydrogen atoms on adjacent sulfur atoms, the Volmer-Tafel mechanism goes instead through the M-H + S-H pathway. In discussions of metal chalcogenide HER catalysis, the S-H bond energy has been proposed as the critical parameter. However, we find that the sulfur-hydrogen species is not an important intermediate since the free energy of this species does not play a direct role in determining the effective activation barrier. Rather we suggest that the kinetic barrier should be used as a descriptor for reactivity, rather than the equilibrium thermodynamics. This is supported by the agreement between the calculated barrier and the experimental turnover frequency. These results suggest that to design a more reactive catalyst from edge exposed MoS2, one should focus on lowering the reaction barrier between the metal hydride and a proton from the hydronium in solution.", "title": "" }, { "docid": "966c6c47b9b55fbbab7196622af7027b", "text": "Wotif Group used DevOps principles to recover from the downward spiral of manual release activity that many IT departments face. Its approach involved the idea of \"making it easy to do the right thing.\" By defining the right thing (deployment standards) for development and operations teams and making it easy to adopt, Wotif drastically improved the average release cycle time. This article is part of a theme issue on DevOps.", "title": "" }, { "docid": "46849f5c975551b401bccae27edd9d81", "text": "Many ideas of High Performance Computing are applicable to Big Data problems. The more so now, that hybrid, GPU computing gains traction in mainstream computing applications. 
This work discusses the differences between the High Performance Computing software stack and the Big Data software stack and then focuses on two popular computing workloads, the Alternating Least Squares algorithm and the Singular Value Decomposition, and shows how their performance can be maximized using hybrid computing techniques.", "title": "" }, { "docid": "8700c7f150c00013990c837a4bf7b655", "text": "The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.", "title": "" }, { "docid": "52d2004c762d4701ab275d9757c047fc", "text": "Somatic mosaicism — the presence of genetically distinct populations of somatic cells in a given organism — is frequently masked, but it can also result in major phenotypic changes and reveal the expression of otherwise lethal genetic mutations. Mosaicism can be caused by DNA mutations, epigenetic alterations of DNA, chromosomal abnormalities and the spontaneous reversion of inherited mutations. In this review, we discuss the human disorders that result from somatic mosaicism, as well as the molecular genetic mechanisms by which they arise. Specifically, we emphasize the role of selection in the phenotypic manifestations of mosaicism.", "title": "" }, { "docid": "5589dfc1ff9246b85e326e8f394cd514", "text": "justice. Women, by contrast, were believed to be at a lower stage because they were found to have a sense of agency still tied primarily to their social relationships and to make political and moral decisions based on context-specific principles based on these relationships rather than on the grounds of their own autonomous judgments. Students of gender studies know well just how busy social scientists have been kept by their efforts to come up with ever more sociological \"alibis\" for the question of why women did not act like men. Gilligan's response was to refuse the terms of the debate altogether. She thus did not develop yet another explanation for why women are \"deviant.\" Instead, she turned the question on its head by asking what was wrong with the theory a theory whose central premises defines 50% of social beings as \"abnormal.\" Gilligan translated this question into research by subjecting the abstraction of universal and discrete agency to comparative research into female behavior evaluated on its own terms The new research revealed women to be more \"concrete\" in their thinking and more attuned to \"fairness\" while men acted on \"abstract reasoning\" and \"rules of justice.\" These research findings transformed female otherness into variation and difference but difference now freed from the normative de-", "title": "" }, { "docid": "52da42b320e23e069519c228f1bdd8b5", "text": "Over the last few years, C-RAN is proposed as a transformative architecture for 5G cellular networks that brings the flexibility and agility of cloud computing to wireless communications. 
At the same time, content caching in wireless networks has become an essential solution to lower the content- access latency and backhaul traffic loading, leading to user QoE improvement and network cost reduction. In this article, a novel cooperative hierarchical caching (CHC) framework in C-RAN is introduced where contents are jointly cached at the BBU and at the RRHs. Unlike in traditional approaches, the cache at the BBU, cloud cache, presents a new layer in the cache hierarchy, bridging the latency/capacity gap between the traditional edge-based and core-based caching schemes. Trace-driven simulations reveal that CHC yields up to 51 percent improvement in cache hit ratio, 11 percent decrease in average content access latency, and 18 percent reduction in backhaul traffic load compared to the edge-only caching scheme with the same total cache capacity. Before closing the article, we discuss the key challenges and promising opportunities for deploying content caching in C-RAN in order to make it an enabler technology in 5G ultra-dense systems.", "title": "" }, { "docid": "0e3135a7846cee7f892b99dc4881b461", "text": "OBJECTIVE: This study examined the relation among children's physical activity, sedentary behaviours, and body mass index (BMI), while controlling for sex, family structure, and socioeconomic status.DESIGN: Epidemiological study examining the relations among physical activity participation, sedentary behaviour (video game use and television (TV)/video watching), and BMI on a nationally representative sample of Canadian children.SUBJECTS: A representative sample of Canadian children aged 7–11 (N=7216) from the 1994 National Longitudinal Survey of Children and Youth was used in the analysis.MEASUREMENTS: Physical activity and sport participation, sedentary behaviour (video game use and TV/video watching), and BMI measured by parental report.RESULTS: Both organized and unorganized sport and physical activity are negatively associated with being overweight (10–24% reduced risk) or obese (23–43% reduced risk), while TV watching and video game use are risk factors for being overweight (17–44% increased risk) or obese (10–61% increased risk). Physical activity and sedentary behaviour partially account for the association of high socioeconomic status and two-parent family structure with the likelihood of being overweight or obese.CONCLUSION: This study provides evidence supporting the link between physical inactivity and obesity of Canadian children.", "title": "" }, { "docid": "146402a4b52f16b583e224cbf9a84119", "text": "Many different methods to train deep generative models have been introduced in the past. In this paper, we propose to extend the variational auto-encoder (VAE) framework with a new type of prior which we call \"Variational Mixture of Posteriors\" prior, or VampPrior for short. The VampPrior consists of a mixture distribution (e.g., a mixture of Gaussians) with components given by variational posteriors conditioned on learnable pseudo-inputs. We further extend this prior to a two layer hierarchical model and show that this architecture with a coupled prior and posterior, learns significantly better models. The model also avoids the usual local optima issues related to useless latent dimensions that plague VAEs. 
We provide empirical studies on six datasets, namely, static and binary MNIST, OMNIGLOT, Caltech 101 Silhouettes, Frey Faces and Histopathology patches, and show that applying the hierarchical VampPrior delivers state-of-the-art results on all datasets in the unsupervised permutation invariant setting and the best results or comparable to SOTA methods for the approach with convolutional networks.", "title": "" }, { "docid": "981e88bd1f4187972f8a3d04960dd2dd", "text": "The purpose of this study is to examine the appropriateness and effectiveness of the assistive use of robot projector based augmented reality (AR) to children’s dramatic activity. A system that employ a mobile robot mounted with a projector-camera is used to help manage children’s dramatic activity by projecting backdrops and creating a synthetic video imagery, where e.g. children’s faces is replaced with graphic characters. In this Delphi based study, a panel consist of 33 professionals include 11children education experts (college professors majoring in early childhood education), children field educators (kindergarten teachers and principals), and 11 AR and robot technology experts. The experts view the excerpts from the video taken from the actual usage situation. In the first stage of survey, we collect the panel's perspectives on applying the latest new technologies for instructing dramatic activity to children using an open ended questionnaire. Based on the results of the preliminary survey, the subsequent questionnaires (with 5 point Likert scales) are developed for the second and third in-depth surveys. In the second survey, 36 questions is categorized into 5 areas: (1) developmental and educational values, (2) impact on the teacher's role, (3) applicability and special considerations in the kindergarten, (4) external environment and required support, and (5) criteria for the selection of the story in the drama activity. The third survey mainly investigate how AR or robots can be of use in children’s dramatic activity in other ways (than as originally given) and to other educational domains. The surveys show that experts most appreciated the use of AR and robot for positive educational and developmental effects due to the children’s keen interests and in turn enhanced immersion into the dramatic activity. Consequently, the experts recommended that proper stories, scenes and technological realizations need to be selected carefully, in the light of children’s development, while lever aging on strengths of the technologies used.", "title": "" }, { "docid": "adce2e04608819ad5cf30452bd864226", "text": "Throughout the history of mathematics, concepts of number and space have been tightly intertwined. We tested the hypothesis that cortical circuits for spatial attention contribute to mental arithmetic in humans. We trained a multivariate classifier algorithm to infer the direction of an eye movement, left or right, from the brain activation measured in the posterior parietal cortex. Without further training, the classifier then generalized to an arithmetic task. Its left versus right classification could be used to sort out subtraction versus addition trials, whether performed with symbols or with sets of dots. These findings are consistent with the suggestion that mental arithmetic co-opts parietal circuitry associated with spatial coding.", "title": "" }, { "docid": "3c28ee0844687013d5ac5a88ee529d60", "text": "Kohonen's Self-Organizing Map (SOM) is one of the most popular arti cial neural network algorithms. 
Word category maps are SOMs that have been organized according to word similarities, measured by the similarity of the short contexts of the words. Conceptually interrelated words tend to fall into the same or neighboring map nodes. Nodes may thus be viewed as word categories. Although no a priori information about classes is given, during the self-organizing process a model of the word classes emerges. The central topic of the thesis is the use of the SOM in natural language processing. The approach based on the word category maps is compared with the methods that are widely used in artificial intelligence research. Modeling gradience, conceptual change, and subjectivity of natural language interpretation are considered. The main application area is information retrieval and textual data mining for which a specific SOM-based method called the WEBSOM has been developed. The WEBSOM method organizes a document collection on a map display that provides an overview of the collection and facilitates interactive browsing.", "title": "" }, { "docid": "0ff483e916f4f7eda4671ba31b60d160", "text": "Nowadays, the rapid proliferation of data makes it possible to build complex models for many real applications. Such models, however, usually require large amounts of labeled data, and the labeling process can be both expensive and tedious for domain experts. To address this problem, researchers have resorted to crowdsourcing to collect labels from non-experts with much less cost. The key challenge here is how to infer the true labels from the large number of noisy labels provided by non-experts. Different from most existing work on crowdsourcing, which ignores the structure information in the labeling data provided by non-experts, in this paper, we propose a novel structured approach based on tensor augmentation and completion. It uses tensor representation for the labeled data, augments it with a ground truth layer, and explores two methods to estimate the ground truth layer via low rank tensor completion. Experimental results on 6 real data sets demonstrate the superior performance of the proposed approach over state-of-the-art techniques.", "title": "" }, { "docid": "6b0b0483cf5eeba1bcee560835651a0e", "text": "Four experiments were carried out to investigate an early- versus late-selection explanation for the attentional blink (AB). In both Experiments 1 and 2, 3 groups of participants were required to identify a noun (Experiment 1) or a name (Experiment 2) target (experimental conditions) and then to identify the presence or absence of a 2nd target (probe), which was their own name, another name, or a specified noun from among a noun distractor stream (Experiment 1) or a name distractor stream (Experiment 2). The conclusions drawn are that individuals do not experience an AB for their own names but do for either other names or nouns. In Experiments 3 and 4, either the participant's own name or another name was presented, as the target and as the item that immediately followed the target, respectively. An AB effect was revealed in both experimental conditions. The results of these experiments are interpreted as support for a late-selection interference account of the AB.", "title": "" }, { "docid": "6e4bb5d16c72c8dc706f934fa3558adb", "text": "This paper examines the Euler-Lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in Dupuis et al.
(1998) and Trouvé (1995) in which two images $I_0, I_1$ are given and connected via the diffeomorphic change of coordinates $I_0 \\circ \\phi^{-1} = I_1$ where $\\phi = \\Phi_1$ is the end point at $t = 1$ of curve $\\Phi_t$, $t \\in [0,1]$, satisfying $\\dot{\\Phi}_t = v_t(\\Phi_t)$, $t \\in [0,1]$, with $\\Phi_0 = \\mathrm{id}$. The variational problem takes the form $$\\mathop {\\arg {\\text{m}}in}\\limits_{\\upsilon :\\dot \\phi _t = \\upsilon _t \\left( {\\dot \\phi } \\right)} \\left( {\\int_0^1 {\\left\\| {\\upsilon _t } \\right\\|} ^2 {\\text{d}}t + \\left\\| {I_0 \\circ \\phi _1^{ - 1} - I_1 } \\right\\|_{L^2 }^2 } \\right),$$ where $\\|v_t\\|_V$ is an appropriate Sobolev norm on the velocity field $v_t(\\cdot)$, and the second term enforces matching of the images with $\\|\\cdot\\|_{L^2}$ representing the squared-error norm. In this paper we derive the Euler-Lagrange equations characterizing the minimizing vector fields $v_t$, $t \\in [0,1]$, assuming sufficient smoothness of the norm to guarantee existence of solutions in the space of diffeomorphisms. We describe the implementation of the Euler equations using a semi-Lagrangian method of computing particle flows and show the solutions for various examples. As well, we compute the metric distance on several anatomical configurations as measured by $\\int_0^1 \\|v_t\\|_V \\,dt$ on the geodesic shortest paths.", "title": "" }, { "docid": "2e2cffc777e534ad1ab7a5c638e0574e", "text": "BACKGROUND\nPoly(ADP-ribose)polymerase-1 (PARP-1) is a highly promising novel target in breast cancer. However, the expression of PARP-1 protein in breast cancer and its associations with outcome are yet poorly characterized.\n\nPATIENTS AND METHODS\nQuantitative expression of PARP-1 protein was assayed by a specific immunohistochemical signal intensity scanning assay in a range of normal to malignant breast lesions, including a series of patients (N = 330) with operable breast cancer to correlate with clinicopathological factors and long-term outcome.\n\nRESULTS\nPARP-1 was overexpressed in about a third of ductal carcinoma in situ and infiltrating breast carcinomas. PARP-1 protein overexpression was associated to higher tumor grade (P = 0.01), estrogen-negative tumors (P < 0.001) and triple-negative phenotype (P < 0.001). The hazard ratio (HR) for death in patients with PARP-1 overexpressing tumors was 7.24 (95% CI; 3.56-14.75). In a multivariate analysis, PARP-1 overexpression was an independent prognostic factor for both disease-free (HR 10.05; 95% CI 5.42-10.66) and overall survival (HR 1.82; 95% CI 1.32-2.52).\n\nCONCLUSIONS\nNuclear PARP-1 is overexpressed during the malignant transformation of the breast, particularly in triple-negative tumors, and independently predicts poor prognosis in operable invasive breast cancer.", "title": "" } ]
scidocsrr
d20311cd85785a8283b2c0a956867149
Smart City as a Service (SCaaS): A Future Roadmap for E-Government Smart City Cloud Computing Initiatives
[ { "docid": "0521f79f13cdbe05867b5db733feac16", "text": "This conceptual paper discusses how we can consider a particular city as a smart one, drawing on recent practices to make cities smart. A set of the common multidimensional components underlying the smart city concept and the core factors for a successful smart city initiative is identified by exploring current working definitions of smart city and a diversity of various conceptual relatives similar to smart city. The paper offers strategic principles aligning to the three main dimensions (technology, people, and institutions) of smart city: integration of infrastructures and technology-mediated services, social learning for strengthening human infrastructure, and governance for institutional improvement and citizen engagement.", "title": "" }, { "docid": "aa32bff910ce6c7b438dc709b28eefe3", "text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. 
Finally, we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science", "title": "" }, { "docid": "8654b5134dadc076a6298526e60f66fb", "text": "Ideas competitions appear to be a promising tool for crowdsourcing and open innovation processes, especially for business-to-business software companies. Active participation of potential lead users is the key to success. Yet a look at existing ideas competitions in the software field leads to the conclusion that many information technology (IT)-based ideas competitions fail to meet requirements upon which active participation is established. The paper describes how activation-enabling functionalities can be systematically designed and implemented in an IT-based ideas competition for enterprise resource planning software. We proceeded to evaluate the outcomes of these design measures and found that participation can be supported using a two-step model. The components of the model support incentives and motives of users. Incentives and motives of the users then support the process of activation and consequently participation throughout the ideas competition. This contributes to the successful implementation and maintenance of the ideas competition, thereby providing support for the development of promising innovative ideas. The paper concludes with a discussion of further activation-supporting components yet to be implemented and points to rich possibilities for future research in these areas.", "title": "" } ]
[ { "docid": "2e9f6ac770ddeb9bbc50d9c55b4131f9", "text": "IEEE 802.15.4 standard for Low Power Wireless Personal Area Networks (LoWPANs) is emerging as a promising technology to bring envisioned ubiquitous paragon, into realization. Considerable efforts are being carried on to integrate LoWPANs with other wired and wireless IP networks, in order to make use of pervasive nature and existing infrastructure associated with IP technologies. Designing a security solution becomes a challenging task as this involves threats from wireless domain of resource constrained devices as well as from extremely mature IP domain. In this paper we have i) identified security threats and requirements for LoWPANs ii) analyzed current security solutions and identified their shortcomings, iii) proposed a generic security framework that can be modified according to application requirements to provide desired level of security. We have also given example implementation scenario of our proposed framework for resource and security critical applications.", "title": "" }, { "docid": "87a319361ad48711eff002942735258f", "text": "This paper describes an innovative principle for climbing obstacles with a two-axle and four-wheel robot with articulated frame. It is based on axle reconfiguration while ensuring permanent static stability. A simple example is demonstrated based on the OpenWHEEL platform with a serial mechanism connecting front and rear axles of the robot. A generic tridimensional multibody simulation is provided with Adams software. It permits to validate the concept and to get an approach of control laws for every type of inter-axle mechanism. This climbing principle permits to climb obstacles as high as the wheel while keeping energetic efficiency of wheel propulsion and using only one supplemental actuator. Applications to electric wheelchairs, quads and all terrain vehicles (ATV) are envisioned", "title": "" }, { "docid": "1ed93d114804da5714b7b612f40e8486", "text": "Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.", "title": "" }, { "docid": "3e98e6e61992d73d4b62cbf0b4e8fac2", "text": "Privacy decision making has been investigated in the Information Systems literature using two contrasting frameworks. A first framework has largely focused on deliberative, rational processes by which individuals weigh the expected benefits of privacy allowances and disclosure against their resulting costs. Under this framework, consumer privacy decision making is broadly constructed as driven by stable, and therefore predictable, individual preferences for privacy. 
More recently, a second framework has leveraged theories and results from behavioral decision research to construe privacy decision making as a process in which cognitive heuristics and biases often occur, and individuals are significantly influenced by non-normative factors in choosing what to reveal or to protect about themselves. In three experiments, we combine and contrast these two perspectives by evaluating the impact of changes in objective risk of disclosure (normative factors), and the impact of changes in relative, and in particular reference-dependent, perceptions of risk (non-normative factors) on individual privacy decision making. We find that both relative and objective risks can impact individual privacy decisions. However, and surprisingly, we find that in experiments more closely modeled on real world contexts, and in experiments that capture actual privacy decisions as opposed to hypothetical choices, relative risk is a more pronounced driver of privacy decisions compared to objective risk. Our results suggest that while normative factors can influence consumers’ self-predicted, hypothetical behavior, nonnormative factors may sometimes be more important and consistent drivers of actual privacy choices.", "title": "" }, { "docid": "11d3dc9169c914bfdff66d1d9afddfaf", "text": "As most modern cryptographic Radio Frequency Identification (RFID) devices are based on ciphers that are secure from a purely theoretical point of view, e.g., (Triple-)DES or AES, adversaries have been adopting new methods to extract secret information and cryptographic keys from contactless smartcards: Side-Channel Analysis (SCA) targets the physical implementation of a cipher and allows to recover secret keys by exploiting a side-channel, for instance, the electro-magnetic (EM) emanation of an Integrated Circuit (IC). In this paper we present an analog demodulator specifically designed for refining the SCA of contactless smartcards. The customized analogue hardware increases the quality of EM measurements, facilitates the processing of the side-channel leakage and can serve as a plug-in component to enhance any existing SCA laboratory. Employing it to obtain power profiles of several real-world cryptographic RFIDs, we demonstrate the effectiveness of our measurement setup and evaluate the improvement of our new analog technique compared to previously proposed approaches. Using the example of the popular Mifare DESFire MF3ICD40 contactless smartcard, we show that commercial RFID devices are susceptible to the proposed SCA methods. The security analyses presented in this paper do not require expensive equipment and demonstrate that SCA poses a severe threat to many real-world systems. This novel attack vector has to be taken into account when employing contactless smartcards in security-sensitive applications, e.g., for wireless payment or identification.", "title": "" }, { "docid": "d612aeb7f7572345bab8609571f4030d", "text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. 
In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function. When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.", "title": "" }, { "docid": "83e4ee7cf7a82fcb8cb77f7865d67aa8", "text": "A meta-analysis of the relationship between class attendance in college and college grades reveals that attendance has strong relationships with both class grades (k = 69, N = 21,195, r = .44) and GPA (k = 33, N = 9,243, r = .41). These relationships make class attendance a better predictor of college grades than any other known predictor of academic performance, including scores on standardized admissions tests such as the SAT, high school GPA, study habits, and study skills. Results also show that class attendance explains large amounts of unique variance in college grades because of its relative independence from SAT scores and high school GPA and weak relationship with student characteristics such as conscientiousness and motivation. Mandatory attendance policies appear to have a small positive impact on average grades (k = 3, N = 1,421, d = .21). Implications for theoretical frameworks of student academic performance and educational policy are discussed. Many college instructors exhort their students to attend class as frequently as possible, arguing that high levels of class attendance are likely to increase learning and improve student grades. Such arguments may hold intuitive appeal and are supported by findings linking class attendance to both learning (e.g., Jenne, 1973) and better grades (e.g., Moore et al., 2003), but both students and some educational researchers appear to be somewhat skeptical of the importance of class attendance. This skepticism is reflected in high class absenteeism rates ranging from 18. This article aims to help resolve the debate regarding the importance of class attendance by providing a quantitative review of the literature investigating the relationship of class attendance with both college grades and student characteristics that may influence attendance. At a theoretical level class attendance fits well into frameworks that emphasize the joint role of cognitive ability and motivation in determining learning and work performance (e.g., Kanfer & Ackerman, 1989). Specifically, cognitive ability and motivation influence academic outcomes via two largely distinct mechanisms— one mechanism related to information processing and the other mechanism being behavioral in nature. Cognitive ability influences the degree to which students are able to process, integrate, and remember material presented to them (Humphreys, 1979), a mechanism that explains the substantial predictive validity of SAT scores for college grades (e. & Ervin, 2000). Noncognitive attributes such as conscientiousness and achievement motivation are thought to influence grades via their influence on behaviors that facilitate the understanding and …", "title": "" }, { "docid": "16e6acd62753e8c0c206bde20f3cbe52", "text": "In this paper we focus our attention on the comparison of various lemmatization and stemming algorithms, which are often used in natural language processing (NLP). Sometimes these two techniques are considered to be identical, but there is an important difference. 
Lemmatization is generally more utilizable, because it produces the basic word form which is required in many application areas (i.e. cross-language processing and machine translation). However, lemmatization is a difficult task especially for highly inflected natural languages having a lot of words for the same normalized word form. We present a novel lemmatization algorithm which utilizes the multilingual semantic thesaurus Eurowordnet (EWN). We describe the algorithm in detail and compare it with other widely used algorithms for word normalization on two different corpora. We present promising results obtained by our EWN-based lemmatization approach in comparison to other techniques. We also discuss the influence of the word normalization on classification task in general. In overall, the performance of our method is good and it achieves similar precision and recall in comparison with other word normalization methods. However, our experiments indicate that word normalization does not affect the text classification task significantly.", "title": "" }, { "docid": "e9b78d6f0fd98d5ee27bc08864cdb6a1", "text": "Mathematical models play a pivotal role in understanding and designing advanced low-power wireless systems. However, the distributed and uncoordinated operation of traditional multi-hop low-power wireless protocols greatly complicates their accurate modeling. This is mainly because these protocols build and maintain substantial network state to cope with the dynamics of low-power wireless links. Recent protocols depart from this design by leveraging synchronous transmissions (ST), whereby multiple nodes simultaneously transmit towards the same receiver, as opposed to pair wise link-based transmissions (LT). ST improve the one-hop packet reliability to an extent that efficient multi-hop protocols with little network state are feasible. This paper studies whether ST also enable simple yet accurate modeling of these protocols. Our contribution to this end is two-fold. First, we show, through experiments on a 139-node test bed, that characterizing packet receptions and losses as a sequence of independent and identically distributed (i.i.d.) Bernoulli trials-a common assumption in protocol modeling but often illegitimate for LT-is largely valid for ST. We then show how this finding simplifies the modeling of a recent ST-based protocol, by deriving (i) sufficient conditions for probabilistic guarantees on the end-to-end packet reliability, and (ii) a Markovian model to estimate the long-term energy consumption. Validation using test bed experiments confirms that our simple models are also highly accurate, for example, the model error in energy against real measurements is 0.25%, a figure never reported before in the related literature.", "title": "" }, { "docid": "5aa8fb560e7d5c2621054da97c30ffec", "text": "PURPOSE\nThe aim of this meta-analysis was to evaluate different methods for guided bone regeneration using collagen membranes and particulate grafting materials in implant dentistry.\n\n\nMATERIALS AND METHODS\nAn electronic database search and hand search were performed for all relevant articles dealing with guided bone regeneration in implant dentistry published between 1980 and 2014. Only randomized clinical trials and prospective controlled studies were included. The primary outcomes of interest were survival rates, membrane exposure rates, bone gain/defect reduction, and vertical bone loss at follow-up. 
A meta-analysis was performed to determine the effects of presence of membrane cross-linking, timing of implant placement, membrane fixation, and decortication.\n\n\nRESULTS\nTwenty studies met the inclusion criteria. Implant survival rates were similar between simultaneous and subsequent implant placement. The membrane exposure rate of cross-linked membranes was approximately 30% higher than that of non-cross-linked membranes. The use of anorganic bovine bone mineral led to sufficient newly regenerated bone and high implant survival rates. Membrane fixation was weakly associated with increased vertical bone gain, and decortication led to higher horizontal bone gain (defect depth).\n\n\nCONCLUSION\nGuided bone regeneration with particulate graft materials and resorbable collagen membranes is an effective technique for lateral alveolar ridge augmentation. Because implant survival rates for simultaneous and subsequent implant placement were similar, simultaneous implant placement is recommended when possible. Additional techniques like membrane fixation and decortication may represent beneficial implications for the practice.", "title": "" }, { "docid": "ffccfdc91a1c0b30cf98d0461149580b", "text": "This paper presents design guidelines for ultra-low power Low Noise Amplifier (LNA) design by comparing input matching, gain, and noise figure (NF) characteristics of common-source (CS) and common-gate (CG) topologies. A current-reused ultra-low power 2.2 GHz CG LNA is proposed and implemented based on 0.18 um CMOS technology. Measurement results show 13.9 dB power gain, 5.14 dB NF, and −9.3 dBm IIP3, respectively, while dissipating 140 uA from a 1.5 V supply, which shows best figure of merit (FOM) among all published ultra-low power LNAs.", "title": "" }, { "docid": "c3b05f287192be94c6f3ea5a13d6ec5d", "text": "Existing eye gaze tracking systems typically require an explicit personal calibration process in order to estimate certain person-specific eye parameters. For natural human computer interaction, such a personal calibration is often cumbersome and unnatural. In this paper, we propose a new probabilistic eye gaze tracking system without explicit personal calibration. Unlike the traditional eye gaze tracking methods, which estimate the eye parameter deterministically, our approach estimates the probability distributions of the eye parameter and the eye gaze, by combining image saliency with the 3D eye model. By using an incremental learning framework, the subject doesn't need personal calibration before using the system. His/her eye parameter and gaze estimation can be improved gradually when he/she is naturally viewing a sequence of images on the screen. The experimental result shows that the proposed system can achieve less than three degrees accuracy for different people without calibration.", "title": "" }, { "docid": "e13874aa8c3fe19bb2a176fd3a039887", "text": "As a typical deep learning model, Convolutional Neural Network (CNN) has shown excellent ability in solving complex classification problems. To apply CNN models in mobile ends and wearable devices, a fully pipelined hardware architecture adopting a Row Processing Tree (RPT) structure with small memory resource consumption between convolutional layers is proposed. A modified Row Stationary (RS) dataflow is implemented to evaluate the RPT architecture. 
Under the same work frequency requirement for these two architectures, the experimental results show that the RPT architecture reduces 91% on-chip memory and 75% DRAM bandwidth compared with the modified RS dataflow, but the throughput of the modified RS dataflow is 3 times higher than that of our proposed RPT architecture. The RPT architecture can achieve 121 fps at 100 MHz while processing a CNN including 4 convolutional layers.", "title": "" }, { "docid": "f79def9a56be8d91c81385abfc6dbee7", "text": "Computational Creativity is the AI subfield in which we study how to build computational models of creative thought in science and the arts. From an engineering perspective, it is desirable to have concrete measures for assessing the progress made from one version of a program to another, or for comparing and contrasting different software systems for the same creative task. We describe the Turing Test and versions of it which have been used in order to measure progress in Computational Creativity. We show that the versions proposed thus far lack the important aspect of interaction, without which much of the power of the Turing Test is lost. We argue that the Turing Test is largely inappropriate for the purposes of evaluation in Computational Creativity, since it attempts to homogenise creativity into a single (human) style, does not take into account the importance of background and contextual information for a creative act, encourages superficial, uninteresting advances in front-ends, and rewards creativity which adheres to a certain style over that which creates something which is genuinely novel. We further argue that although there may be some place for Turing-style tests for Computational Creativity at some point in the future, it is currently untenable to apply any defensible version of the Turing Test. As an alternative to Turing-style tests, we introduce two descriptive models for evaluating creative software, the FACE model which describes creative acts performed by software in terms of tuples of generative acts, and the IDEA model which describes how such creative acts can have an impact upon an ideal audience, given ideal information about background knowledge and the software development process. While these models require further study and elaboration, we believe that they can be usefully applied to current systems as well as guiding further development of creative systems. 1 The Turing Test and Computational Creativity The Turing Test (TT), in which a computer and human are interrogated, with the computer considered intelligent if the human interrogator is unable to distinguish between them, is principally a philosophical construct proposed by Alan Turing as a way of determining whether AI has achieved its goal of simulating intelligence [1]. The TT has provoked much discussion, both historical and contemporary, however this has principally been within the philosophy of AI: most AI researchers see it as a distraction from their goals, encouraging a mere trickery of intelligence and ever more sophisticated natural language front ends, as opposed to focussing on real problems. Despite the appeal of the (as yet unawarded) Loebner Prize, most subfields of AI have developed and follow their own evaluation criteria and methodologies, which have little to do with the TT. 
Computational Creativity (CC) is a subfield of AI, in which researchers aim to model creative thought by building programs which can produce ideas and artefacts which are novel, surprising and valuable, either autonomously or in conjunction with humans. There are three main motivations for the study of Computational Creativity: • to provide a computational perspective on human creativity, in order to help us to understand it (cognitive science); • to enable machines to be creative, in order to enhance our lives in some way (engineering); and • to produce tools which enhance human creativity (aids for creative individuals). Creativity can be subdivided into everyday problem-solving, and the sort of creativity reserved for the truly great, in which a problem is solved or an object created that has a major impact on other people. These are respectively known as “little-c” (mundane) and “big-C” (eminent) creativity [2]. Boden [3] draws a similar distinction in her view of creativity as search within a conceptual space, where “exploratory creativity” searches within the space, and “transformational creativity” involves expanding the space by breaking one or more of the defining characteristics and creating a new conceptual space. Boden sees transformational creativity as more surprising, since, according to the defining rules of the conceptual space, ideas within this space could not have been found before. There are two notions of evaluation in CC: (i) judgements which determine whether an idea or artefact is valuable or not (an essential criterion for creativity) – these judgements may be made internally by whoever produced the idea, or externally, by someone else and (ii) judgements to determine whether a system is acting creatively or not. In the following discussion, by evaluation, we mean the latter judgement. Finding measures of evaluation of CC is an active area of research, both influenced by, and influencing, practical and theoretical aspects of CC. It is a particularly important area, since such measures suggest ways of defining progress in the field, 3 as well as strongly guiding program design. While tests of creativity in humans are important for our understanding of creativity, they do not usually cause humans to be creative (creativity training programs, which train people to do well at such tests, notwithstanding). Ways in which CC is evaluated, on the other hand, will have a deep influence on future development of potentially creative programs. Clearly, different modes of evaluation will be appropriate for the different motivations listed above. 3 The necessity for good measures of evaluation in CC is somewhat paralleled in the psychology of creativity: “Creativity is becoming a popular topic in educational, economic and political circles throughout the world – whether this popularity is just a passing fad or a lasting change in interest in creativity and innovation will probably depend, in large part, on whether creativity assessment keeps pace with the rest of the field.” [4, p. 64] The Turing Test is of particular interest to CC for two reasons. Firstly, unlike the general situation in AI, the TT, or variations of it, are currently being used to evaluate candidate programs in CC. Thus, the TT is having a major influence on the development of CC. This influence is usually neither noted nor questioned.
Secondly, there are huge philosophical problems with using a test based on imitation to evaluate competence in an area of thought which is based on originality. While there are varying definitions of creativity, the majority consider some interpretation of novelty and utility to be essential criteria. For instance, one of the commonalities found by Rothenberg in a collection of international perspectives on creativity is that “creativity involves thinking that is aimed at producing ideas or products that are relatively novel” [5, p.2], and in CC the combination of novelty and usefulness is accepted as key (for instance, see [6] or [3]). In [4], Plucker and Makel list “similar, overlapping and possibly synonymous terms for creativity: imagination, ingenuity, innovation, inspiration, inventiveness, muse, novelty, originality, serendipity, talent and unique”. The term ‘imitation’ is simply antipodal to many of these terms. In the following sections, we firstly describe and discuss some attempts to evaluate Computational Creativity using the Turing Test or versions of it (§2), concluding that these attempts all omit the important aspect of interaction, and suggest the sort of direction that a TT for a creative computer art system might follow. We then present a series of arguments that the TT is inappropriate for measuring creativity in computers (or humans) in §3, and suggest that although there may be some place for Turing-style tests for Computational Creativity at some point in the future, it is currently untenable and impractical. As an alternative to Turing-style tests, in §4, we introduce two descriptive models for evaluating creative software, the FACE model which describes creative acts performed by software in terms of tuples of generative acts, and the IDEA model which describes how such creative acts can have an impact upon an ideal audience, given ideal information about background knowledge and the software development process. We conclude our discussion in §5. 2 Attempts to evaluate Computational Creativity using the Turing Test or versions of it There have been several attempts to evaluate Computational Creativity using the Turing Test or versions of it. While these are useful in terms of advancing our understanding of CC, they do not go far enough. In this section we discuss two such advances (§2.1 and §2.2), and two further suggestions on using human creative behaviour as a guide for evaluating Computational Creativity (§2.3). We highlight the importance of interaction in §2.4. 2.1 Discrimination tests Pearce and Wiggins [7] assert the need for objective, falsifiable measures of evaluation in cognitive musicology. They propose the ‘discrimination test’, which is analogous to the TT, in which subjects are played segments of both machine and human-generated music and asked to distinguish between them. This might be in a particular style, such as Bach’s music, or might be more general. They also present one of the most considered analyses of whether Turing-style tests such as the framework they propose might be appropriate for evaluating Computational Creativity [7, §7]. 
While they do not directly refer to Boden’s exploratory creativity [3], instead referring to Boden’s distinction between psychological (P-creativity, concerning ideas which are novel with respect to a particular mind) and historical creativity (H-creativity, concerning ideas which are novel with respect to the whole of human history), they do argue that much creative work is carried out within a particular style. They cite Garnham’s response ", "title": "" }, { "docid": "34cab0c02d5f5ec5183bd63c01f932c7", "text": "Autogynephilia is defined as a male’s propensity to be sexually aroused by the thought or image of himself as female. Autogynephilia explains the desire for sex reassignment of some male-to-female (MtF) transsexuals. It can be conceptualized as both a paraphilia and a sexual orientation. The concept of autogynephilia provides an alternative to the traditional model of transsexualism that emphasizes gender identity. Autogynephilia helps explain mid-life MtF gender transition, progression from transvestism to transsexualism, the prevalence of other paraphilias among MtF transsexuals, and late development of sexual interest in male partners. Hormone therapy and sex reassignment surgery can be effective treatments in autogynephilic transsexualism. The concept of autogynephilia can help clinicians better understand MtF transsexual clients who recognize a strong sexual component to their gender dysphoria. (Journal of Gay & Lesbian Psychotherapy, 8(1/2), 2004, pp. 69-87.)", "title": "" }, { "docid": "9533193407869250854157e89d2815eb", "text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.", "title": "" }, { "docid": "75e5480b6a319e1c879eba50604a4f91", "text": "Quantum circuits are time-dependent diagrams describing the process of quantum computation. Usually, a quantum algorithm must be mapped into a quantum circuit. Optimal synthesis of quantum circuits is intractable, and heuristic methods must be employed. With the use of heuristics, the optimality of circuits is no longer guaranteed. In this paper, we consider a local optimization technique based on templates to simplify and reduce the depth of nonoptimal quantum circuits. We present and analyze templates in the general case and provide particular details for the circuits composed of NOT, CNOT, and controlled-sqrt-of-NOT gates. We apply templates to optimize various common circuits implementing multiple control Toffoli gates and quantum Boolean arithmetic circuits. We also show how templates can be used to compact the number of levels of a quantum circuit. The runtime of our implementation is small, whereas the reduction in the number of quantum gates and number of levels is significant.", "title": "" }, { "docid": "a7eec693523207e6a9547000c1fbf306", "text": "Articulated hand tracking systems have been commonly used in virtual reality applications, including systems with human-computer interaction or interaction with game consoles. However, building an effective real-time hand pose tracker remains challenging. 
In this paper, we present a simple and efficient methodology for tracking and reconstructing 3d hand poses using a markered optical motion capture system. Markers were positioned at strategic points, and an inverse kinematics solver was incorporated to fit the rest of the joints to the hand model. The model is highly constrained with rotational and orientational constraints, allowing motion only within a feasible set. The method is real-time implementable and the results are promising, even with a low frame rate.", "title": "" }, { "docid": "f56c5a623b29b88f42bf5d6913b2823e", "text": "We describe a novel interface for composition of polygonal meshes based around two artist-oriented tools: Geometry Drag-and-Drop and Mesh Clone Brush. Our drag-and-drop interface allows a complex surface part to be selected and interactively dragged to a new location. We automatically fill the hole left behind and smoothly deform the part to conform to the target surface. The artist may increase the boundary rigidity of this deformation, in which case a fair transition surface is automatically computed. Our clone brush allows for transfer of surface details with precise spatial control. These tools support an interaction style that has not previously been demonstrated for 3D surfaces, allowing detailed 3D models to be quickly assembled from arbitrary input meshes. We evaluated this interface by distributing a basic tool to computer graphics hobbyists and professionals, and based on their feedback, describe potential workflows which could utilize our techniques.", "title": "" }, { "docid": "4cff5279110ff2e45060f3ccec7d51ba", "text": "Web site usability is a critical metric for assessing the quality of a firm’s Web presence. A measure of usability must not only provide a global rating for a specific Web site, ideally it should also illuminate specific strengths and weaknesses associated with site design. In this paper, we describe a heuristic evaluation procedure for examining the usability of Web sites. The procedure utilizes a comprehensive set of usability guidelines developed by Microsoft. We present the categories and subcategories comprising these guidelines, and discuss the development of an instrument that operationalizes the measurement of usability. The proposed instrument was tested in a heuristic evaluation study where 1,475 users rated multiple Web sites from four different industry sectors: airlines, online bookstores, automobile manufacturers, and car rental agencies. To enhance the external validity of the study, users were asked to assume the role of a consumer or an investor when assessing usability. Empirical results suggest that the evaluation procedure, the instrument, as well as the usability metric exhibit good properties. Implications of the findings for researchers, for Web site designers, and for heuristic evaluation methods in usability testing are offered. (Usability; Heuristic Evaluation; Microsoft Usability Guidelines; Human-Computer Interaction; Web Interface)", "title": "" } ]
scidocsrr
55d61afb968056897082bee6d686cfb8
Index Modulation with PAPR and Beamforming for 5G MIMO-OFDM
[ { "docid": "8273a154d8e8b94873c4c94c4ff6ed14", "text": "The ambitious goals set for 5G wireless networks, which are expected to be introduced around 2020, require dramatic changes in the design of different layers for next generation communications systems. Massive MIMO systems, filter bank multi-carrier modulation, relaying technologies, and millimeter-wave communications have been considered as some of the strong candidates for the physical layer design of 5G networks. In this article, we shed light on the potential and implementation of IM techniques for MIMO and multi-carrier communications systems, which are expected to be two of the key technologies for 5G systems. Specifically, we focus on two promising applications of IM: spatial modulation and orthogonal frequency-division multiplexing with IM, and discuss the recent advances and future research directions in IM technologies toward spectrum- and energy-efficient 5G wireless networks.", "title": "" } ]
[ { "docid": "5ed24bc652901423b5f2922c41b2702b", "text": "We put forward a new framework that makes it possible to re-write or compress the content of any number of blocks in decentralized services exploiting the blockchain technology. As we argue, there are several reasons to prefer an editable blockchain, spanning from the necessity to remove inappropriate content and the possibility to support applications requiring re-writable storage, to \"the right to be forgotten.\" Our approach generically leverages so-called chameleon hash functions (Krawczyk and Rabin, NDSS '00), which allow determining hash collisions efficiently, given a secret trapdoor information. We detail how to integrate a chameleon hash function in virtually any blockchain-based technology, for both cases where the power of redacting the blockchain content is in the hands of a single trusted entity and where such a capability is distributed among several distrustful parties (as is the case with Bitcoin). We also report on a proof-of-concept implementation of a redactable blockchain, building on top of Nakamoto's Bitcoin core. The prototype only requires minimal changes to the way current client software interprets the information stored in the blockchain and to the current blockchain, block, or transaction structures. Moreover, our experiments show that the overhead imposed by a redactable blockchain is small compared to the case of an immutable one.", "title": "" }, { "docid": "7d5bcd40c0d5ac30b51c3747e41a4fa6", "text": "We consider the following fundamental communication problem - there is data that is distributed among servers, and the servers want to compute the intersection of their data sets, e.g., the common records in a relational database. They want to do this with as little communication and as few messages (rounds) as possible. They are willing to use randomization, and fail with a tiny probability. Given a protocol for computing the intersection, it can also be used to compute the exact Jaccard similarity, the rarity, the number of distinct elements, and joins between databases. Computing the intersection is at least as hard as the set disjointness problem, which asks whether the intersection is empty. Formally, in the two-server setting, the players hold subsets S, T ⊆ [n]. In many realistic scenarios, the sizes of S and T are significantly smaller than n, so we impose the constraint that |S|, |T| ≤ k. We study the minimum number of bits the parties need to communicate in order to compute the intersection set S ∩ T, given a certain number r of messages that are allowed to be exchanged. While O(k log (n/k)) bits is achieved trivially and deterministically with a single message, we ask what is possible with more than one message and with randomization. We give a smooth communication/round tradeoff which shows that with O(log* k) rounds, O(k) bits of communication is possible, which improves upon the trivial protocol by an order of magnitude. This is in contrast to other basic problems such as computing the union or symmetric difference, for which Ω(k log(n/k)) bits of communication is required for any number of rounds. For two players, known lower bounds for the easier problem of set disjointness imply our algorithms are optimal up to constant factors in communication and number of rounds. 
We extend our protocols to $m$-player protocols, obtaining an optimal O(mk) bits of communication with a similarly small number of rounds.", "title": "" }, { "docid": "af4700eadf29386c5623097508ab523d", "text": "Starting from the premise that working memory is a system for providing access to representations for complex cognition, six requirements for a working memory system are delineated: (1) maintaining structural representations by dynamic bindings, (2) manipulating structural representations, (3) flexible reconfiguration, (4) partial decoupling from long-term memory, (5) controlled retrieval from long-term memory, and (6) encoding of new structures into long-term memory. The chapter proposes an architecture for a system that meets these requirements. The working memory system consists of a declarative and a procedural part, each of which has three embedded components: the activated part of long-term memory, a component for creating new structural representations by dynamic bindings (the ‘‘region of direct access’’ for declarative working memory, and the ‘‘bridge’’ for procedural working memory), and a mechanism for selecting a single element (‘‘focus of attention’’ for declarative working memory, and ‘‘response focus’’ for procedural working memory). The architecture affords two modes of information processing, an analytical and an associative mode. This distinction provides a theoretically founded formulation of a dual-process theory of reasoning. DOI: https://doi.org/10.1016/S0079-7421(09)51002-X Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-28472 Originally published at: Oberauer, Klaus (2009). Design for a Working Memory. Psychology of Learning and Motivation, 51:45100. DOI: https://doi.org/10.1016/S0079-7421(09)51002-X", "title": "" }, { "docid": "5214faa5f3906819ad56394cf45adc55", "text": "For most applications, the pulse width modulation (PWM) of the primary side switch varies with input line. The switch on-time regulates the output. For a constant output voltage, the volt-seconds applied to the primary will be constant but reset time varies, being relatively long at high line and short at low line. The switch voltage is minimized when the switch off-time is long and fully used for reset", "title": "" }, { "docid": "f33ca4cfba0aab107eb8bd6d3d041b74", "text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temporary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W requires O(KCHW) additional space using the classical im2col approach. More recently memory-efficient approaches requiring just O(KCHW) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW) and O(KW) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. 
Experimental evaluation shows that our low-memory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low-memory algorithms often outperform the best patch-building algorithms using multiple threads.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "c2fbd25b7ae2d2fdf4be2bc86af5e758", "text": "Accurate representation of soft transitions between image regions is essential for high-quality image editing and compositing. Current techniques for generating such representations depend heavily on interaction by a skilled visual artist, as creating such accurate object selections is a tedious task. In this work, we introduce semantic soft segments, a set of layers that correspond to semantically meaningful regions in an image with accurate soft transitions between different objects. We approach this problem from a spectral segmentation angle and propose a graph structure that embeds texture and color features from the image as well as higher-level semantic information generated by a neural network. The soft segments are generated via eigendecomposition of the carefully constructed Laplacian matrix fully automatically. We demonstrate that otherwise complex image editing tasks can be done with little effort using semantic soft segments.", "title": "" }, { "docid": "c5ecf8e1162d614774c0ce4f6101e002", "text": "The potential for genetically modified (GM) crops to threaten biodiversity conservation and sustainable agriculture is substantial. Megadiverse countries and centers of origin and/or diversity of crop species are particularly vulnerable regions. The future of sustainable agriculture may be irreversibly jeopardized by contamination of in situ preserved genetic resources threatening a strategic resource for the world’s food security. Because GM crops are truly biological novelties, their release into the environment poses concerns about the unpredictable ecological and evolutionary responses that GM species themselves and the interacting biota may express in the medium and long term. One of the consequences of these processes may be a generalized contamination of natural flora by GM traits and a degradation and erosion of the commonly owned genetic resources available today for agricultural development. GM plants carrying pharmaceutical and industrial traits will pose even more dangerous risks if released in the environment.", "title": "" }, { "docid": "98689a2f03193a2fb5cc5195ef735483", "text": "Darknet markets are online services behind Tor where cybercriminals trade illegal goods and stolen datasets. In recent years, security analysts and law enforcement have started to investigate the darknet markets to study the cybercriminal networks and predict future incidents. 
However, vendors in these markets often create multiple accounts (i.e., Sybils), making it challenging to infer the relationships between cybercriminals and identify coordinated crimes. In this paper, we present a novel approach to link the multiple accounts of the same darknet vendors through photo analytics. The core idea is that darknet vendors often have to take their own product photos to prove the possession of the illegal goods, which can reveal their distinct photography styles. To fingerprint vendors, we construct a series of deep neural networks to model the photography styles. We apply transfer learning to the model training, which allows us to accurately fingerprint vendors with a limited number of photos. We evaluate the system using real-world datasets from 3 large darknet markets (7,641 vendors and 197,682 product photos). A ground-truth evaluation shows that the system achieves an accuracy of 97.5%, outperforming existing stylometry-based methods in both accuracy and coverage. In addition, our system identifies previously unknown Sybil accounts within the same markets (23) and across different markets (715 pairs). Further case studies reveal new insights into the coordinated Sybil activities such as price manipulation, buyer scam, and product stocking and reselling.", "title": "" }, { "docid": "f32e8f005d277652fe691216e96e7fd8", "text": "PixelCNN achieves state-of-the-art results in density estimation for natural images. Although training is fast, inference is costly, requiring one network evaluation per pixel; O(N) for N pixels. This can be sped up by caching activations, but still involves generating each pixel sequentially. In this work, we propose a parallelized PixelCNN that allows more efficient inference by modeling certain pixel groups as conditionally independent. Our new PixelCNN model achieves competitive density estimation and orders of magnitude speedup (O(log N) sampling instead of O(N)), enabling the practical generation of 512 × 512 images. We evaluate the model on class-conditional image generation, text-to-image synthesis, and action-conditional video generation, showing that our model achieves the best results among non-pixel-autoregressive density models that allow efficient sampling.", "title": "" }, { "docid": "58e0b66d55ca7f5571f4f55d8fcf822c", "text": "Events of various kinds are mentioned and discussed in text documents, whether they are books, news articles, blogs or microblog feeds. The paper starts by giving an overview of how events are treated in linguistics and philosophy. We follow this discussion by surveying how events and associated information are handled computationally. In particular, we look at how textual documents can be mined to extract events and ancillary information. These days, it is mostly through the application of various machine learning techniques. We also discuss applications of event detection and extraction systems, particularly in summarization, in the medical domain and in the context of Twitter posts. We end the paper with a discussion of challenges and future directions.", "title": "" }, { "docid": "1d676f4631d739d1c37c18eb9fb23248", "text": "We present an approach to non-factoid answer selection with a separate component based on BiLSTM to determine the importance of segments in the input. In contrast to other recently proposed attention-based models within the same area, we determine the importance while assuming the independence of questions and candidate answers. 
Experimental results show the effectiveness of our approach, which outperforms several state-of-the-art attention-based models on the recent non-factoid answer selection datasets InsuranceQA v1 and v2. We show that it is possible to perform effective importance weighting for answer selection without relying on the relatedness of questions and answers. The source code of our experiments is publicly available.1", "title": "" }, { "docid": "5a314adde06c0d91142d2dbd7f57102b", "text": "After the technologies of integrated circuits, personal computers, and the Internet, Internet of Things (IoT) is the latest information technology (IT) that is radically changing business paradigms. However, IoT's influence in the manufacturing sector has yet been fully explored. On the other hand, existing computer-aided software tools are experiencing a bottleneck in dealing with complexity, dynamics, and uncertainties in their applications of modern enterprises. It is argued that the adoption of IoT and cloud computing in enterprise systems (ESs) would overcome the bottleneck. In this paper, the challenges in generating assembly plans of complex products are discussed. IoT and cloud computing are proposed to help a conventional assembly modeling system evolve into an advanced system, which is capable to deal with complexity and changes automatically. To achieve this goal, an assembly modeling system is automated, and the proposed system includes the following innovations: 1) the modularized architecture to make the system robust, reliable, flexible, and expandable; 2) the integrated object-oriented templates to facilitate interfaces and reuses of system components; and 3) the automated algorithms to retrieve relational assembly matrices for assembly planning. Assembly modeling for aircraft engines is used as examples to illustrate the system effectiveness.", "title": "" }, { "docid": "734840224154ef88cdb196671fd3f3f8", "text": "Tiny face detection aims to find faces with high degrees of variability in scale, resolution and occlusion in cluttered scenes. Due to the very little information available on tiny faces, it is not sufficient to detect them merely based on the information presented inside the tiny bounding boxes or their context. In this paper, we propose to exploit the semantic similarity among all predicted targets in each image to boost current face detectors. To this end, we present a novel framework to model semantic similarity as pairwise constraints within the metric learning scheme, and then refine our predictions with the semantic similarity by utilizing the graph cut techniques. Experiments conducted on three widely-used benchmark datasets have demonstrated the improvement over the-state-of-the-arts gained by applying this idea.", "title": "" }, { "docid": "d6ed97c07d19545de707733ac2fbe38e", "text": "We present an approach for tracking camera pose in real time given a stream of depth images. Existing algorithms are prone to drift in the presence of smooth surfaces that destabilize geometric alignment. We show that useful contour cues can be extracted from noisy and incomplete depth input. These cues are used to establish correspondence constraints that carry information about scene geometry and constrain pose estimation. Despite ambiguities in the input, the presented contour constraints reliably improve tracking accuracy. 
Results on benchmark sequences and on additional challenging examples demonstrate the utility of contour cues for real-time camera pose estimation.", "title": "" }, { "docid": "30731e817fb1c04f853caf1dd7a30418", "text": "This paper focuses on morphological analysis of Bangla words to incorporate them into Bangla to universal networking language (UNL) processors. Researchers have been working on morphological structure of Bangla for machine translation and a considerable volume of work is available. So far, no attempt has been made to integrate the works for a concrete computational output. In this paper we particularly emphasize on bringing previous works on morphological analysis in the framework of UNL, with the goal to produce a Bangla-UNL dictionary, as UNL structures can provide, for any morphological analysis, a unified base to fit into already developed universal conversion systems of UNL. We explain the morphological rules of Bangla words for UNL structures. These rules tend to expose the modifications of parts of speech with regards to tense, person, subject etc. of the words of a sentence. Here we outline the morphology of nouns, verbs and adjective phrases only.", "title": "" }, { "docid": "5cfef434d0d33ac5859bcdb77227d7b7", "text": "The prevalence of mobile phones, the internet-of-things technology, and networks of sensors has led to an enormous and ever increasing amount of data that are now more commonly available in a streaming fashion [1]-[5]. Often, it is assumed - either implicitly or explicitly - that the process generating such a stream of data is stationary, that is, the data are drawn from a fixed, albeit unknown probability distribution. In many real-world scenarios, however, such an assumption is simply not true, and the underlying process generating the data stream is characterized by an intrinsic nonstationary (or evolving or drifting) phenomenon. The nonstationarity can be due, for example, to seasonality or periodicity effects, changes in the users' habits or preferences, hardware or software faults affecting a cyber-physical system, thermal drifts or aging effects in sensors. In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.", "title": "" }, { "docid": "d9591f6e41a781da427aefd37880a0a1", "text": "For low microwave band, the size of substrate integrated waveguide (SIW) couplers still need to be further reduced though other advantages of low profile, low insertion loss, low cost etc. are very remarkable. In this letter, novel substrate integrated folded waveguide (SIFW) narrow-wall single-slot and double-slot directional couplers are designed and implemented. The measured results are in good agreement with simulations. A corresponding SIW coupler is also designed for comparison. The SIFW coupler keeps almost all the advantages of the SIW coupler with nearly a half reduction in size.", "title": "" }, { "docid": "8fc05d9e26c0aa98ffafe896d8c5a01b", "text": "We describe our clinical question answering system implemented for the Text Retrieval Conference (TREC 2016) Clinical Decision Support (CDS) track. 
We submitted five runs using a combination of knowledge-driven (based on a curated knowledge graph) and deep learning-based (using key-value memory networks) approaches to retrieve relevant biomedical articles for answering generic clinical questions (diagnoses, treatment, and test) for each clinical scenario provided in three forms: notes, descriptions, and summaries. The submitted runs were varied based on the use of notes, descriptions, or summaries in association with different diagnostic inferencing methodologies applied prior to biomedical article retrieval. Evaluation results demonstrate that our systems achieved best or close to best scores for 20% of the topics and better than median scores for 40% of the topics across all participants considering all evaluation measures. Further analysis shows that on average our clinical question answering system performed best with summaries using diagnostic inferencing from the knowledge graph whereas our key-value memory network model with notes consistently outperformed the knowledge graph-based system for notes and descriptions.", "title": "" }, { "docid": "7df97d3a5c393053b22255a0414e574a", "text": "Let G be a directed graph containing n vertices, one of which is a distinguished source s, and m edges, each with a non-negative cost. We consider the problem of finding, for each possible sink vertex u, a pair of edge-disjoint paths from s to u of minimum total edge cost. Suurballe has given an O(n^2 log n)-time algorithm for this problem. We give an implementation of Suurballe’s algorithm that runs in O(m log_(1+m/n) n) time and O(m) space. Our algorithm builds an implicit representation of the n pairs of paths; given this representation, the time necessary to explicitly construct the pair of paths for any given sink is O(1) per edge on the paths.", "title": "" } ]
scidocsrr